# Sum Columns in R with dplyr

## Sum Across Multiple Rows & Columns Using dplyr Package in R (2 Examples)

In this R tutorial you’ll learn how to calculate the sums of multiple rows and columns of a data frame based on the dplyr package.

Let’s do this:

### Example Data & Add-On Packages

First, we have to create some example data:

```r
data <- data.frame(x1 = 1:5,                 # Example data
                   x2 = c(NA, 5, 1, 1, NA),
                   x3 = 9:5,
                   x4 = c(4, 1, NA, 2, 8))
data                                         # Print example data
#   x1 x2 x3 x4
# 1  1 NA  9  4
# 2  2  5  8  1
# 3  3  1  7 NA
# 4  4  1  6  2
# 5  5 NA  5  8
```

Have a look at the previous output of the RStudio console. It shows that our exemplifying data contains five rows and four columns. Note that all of the variables are numeric and some of the variables contain NA values (i.e. missing values).

We also need to install and load the dplyr package, if we want to use the corresponding functions:

```r
install.packages("dplyr")                    # Install & load dplyr
library("dplyr")
```

### Example 1: Sums of Columns Using dplyr Package

In this example, I’ll explain how to use the replace, is.na, summarise_all, and sum functions.

```r
data %>%                                     # Compute column sums
  replace(is.na(.), 0) %>%
  summarise_all(sum)
#   x1 x2 x3 x4
# 1 15  7 35 15
```

You can see the column sums in the previous output: The column sum of x1 is 15, the column sum of x2 is 7, the column sum of x3 is 35, and the column sum of x4 is 15.

### Example 2: Sums of Rows Using dplyr Package

The following syntax illustrates how to compute the rowSums of each row of our data frame using the replace, is.na, mutate, and rowSums functions.

```r
data %>%                                     # Compute row sums
  replace(is.na(.), 0) %>%
  mutate(sum = rowSums(.))
#   x1 x2 x3 x4 sum
# 1  1  0  9  4  14
# 2  2  5  8  1  16
# 3  3  1  7  0  11
# 4  4  1  6  2  13
# 5  5  0  5  8  18
```

Have a look at the previous output: We have created a data frame with an additional column showing the sum of each row. Note that the NA values were replaced by 0 in this output.

### Video & Further Resources

Do you need further explanations on the R programming codes of this tutorial? Then you may have a look at the following video of my YouTube channel. In the video, I show the R programming code of this tutorial in RStudio.

In addition, you could read the related articles of my website. A selection of interesting articles is shown below.

Source: https://statisticsglobe.com/sum-across-multiple-rows-columns-using-dplyr-package-in-r

## Data manipulation using dplyr

### What is dplyr?

The dplyr package is a fairly new (2014) package that tries to provide easy tools for the most common data manipulation tasks. It is built to work directly with data frames. The thinking behind it was largely inspired by the plyr package, which has been in use for some time but suffered from being slow in some cases. dplyr addresses this by porting much of the computation to C++. An additional feature is the ability to work with data stored directly in an external database. The benefits of doing this are that the data can be managed natively in a relational database, queries can be conducted on that database, and only the results of the query returned.

This addresses a common problem with R in that all operations are conducted in memory and thus the amount of data you can work with is limited by available memory. The database connections essentially remove that limitation in that you can have a database of many 100s GB, conduct queries on it directly and pull back just what you need for analysis in R.

### Selecting columns and filtering rows

We’re going to learn some of the most common functions: select(), filter(), mutate(), group_by(), and summarize(). To select columns of a data frame, use select(). The first argument to this function is the data frame, and the subsequent arguments are the columns to keep.

To choose rows, use filter():
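A sketch of how these two verbs work; the lesson’s original metadata table isn’t shown here, so this small stand-in data frame is an assumption:

```r
library(dplyr)

# Hypothetical stand-in for the lesson's metadata table
metadata <- data.frame(
  sample      = c("A1", "A2", "B1"),
  clade       = c("plus", "minus", "plus"),
  genome_size = c(4.6, 4.7, 4.5)
)

select(metadata, sample, clade)    # keep only the sample and clade columns
filter(metadata, clade == "plus")  # keep only rows where clade is "plus"
```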

### Pipes

But what if you wanted to select and filter? There are three ways to do this: use intermediate steps, nested functions, or pipes. With the intermediate steps, you essentially create a temporary data frame and use that as input to the next function. This can clutter up your workspace with lots of objects. You can also nest functions (i.e. one function inside of another). This is handy, but can be difficult to read if too many functions are nested, as the process runs from the inside out. The last option, pipes, are a fairly recent addition to R. Pipes let you take the output of one function and send it directly to the next, which is useful when you need to do many things to the same data set. Pipes in R look like %>% and are made available via the magrittr package, installed as part of dplyr.

In the above we use the pipe to send the data set first through filter(), to keep rows where cit was equal to ‘plus’, and then through select() to keep only the sample, generation, and clade columns. When the data frame is being passed to the filter() and select() functions through a pipe, we don’t need to include it as an argument to these functions anymore.

If we wanted to create a new object with this smaller version of the data we could do so by assigning it a new name:
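For instance, the filtered-and-selected subset could be assigned like this (again using a hypothetical stand-in data frame, since the lesson’s data isn’t reproduced here):

```r
library(dplyr)

metadata <- data.frame(
  sample      = c("A1", "A2", "B1"),
  clade       = c("plus", "minus", "plus"),
  genome_size = c(4.6, 4.7, 4.5)
)

metadata_sub <- metadata %>%
  filter(clade == "plus") %>%   # keep rows where clade is "plus"
  select(sample, genome_size)   # then keep just two columns
metadata_sub
```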

### Challenge

Using pipes, subset the data to include rows where the clade is ‘Cit+’. Retain columns , , and

### Split-apply-combine data analysis and the summarize() function

Many data analysis tasks can be approached using the “split-apply-combine” paradigm: split the data into groups, apply some analysis to each group, and then combine the results. dplyr makes this very easy through the use of the group_by() function, which splits the data into groups. When the data is grouped in this way summarize() can be used to collapse each group into a single-row summary. summarize() does this by applying an aggregating or summary function to each group. For example, if we wanted to group by citrate-using mutant status and find the number of rows of data for each status, we would do:

Here the summary function used was n() to find the count for each group. We can also apply many other functions to individual columns to get other summary statistics. For example, in the R base package we can use built-in functions like mean, median, min, and max. By default, all R functions operating on vectors that contain missing data will return NA. It’s a way to make sure that users know they have missing data, and make a conscious decision on how to deal with it. When dealing with simple statistics like the mean, the easiest way to ignore NA (the missing data) is to use na.rm = TRUE (rm stands for remove).

So to view the mean genome size by mutant status:

You can group by multiple columns too:
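A sketch of such grouped summaries, including grouping by more than one column; the data frame here is an assumption, not the lesson’s original:

```r
library(dplyr)

metadata <- data.frame(
  cit         = c("plus", "plus", "minus", "minus", "unknown"),
  clade       = c("C1", "C2", "C1", NA, "C2"),
  genome_size = c(4.6, 4.7, NA, 4.5, 4.8)
)

# One grouping column, with an NA-aware mean
metadata %>%
  group_by(cit) %>%
  summarize(n = n(), mean_size = mean(genome_size, na.rm = TRUE))

# Two grouping columns
metadata %>%
  group_by(cit, clade) %>%
  summarize(n = n(), .groups = "drop")
```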

Looks like for one of these clones, the clade is missing. We could then discard those rows using filter():

All of a sudden this isn’t running off the screen anymore. That’s because dplyr has changed our data.frame to a tbl_df. This is a data structure that’s very similar to a data frame; for our purposes the only difference is that it won’t automatically show tons of data going off the screen.

You can also summarize multiple variables at the same time:

Handy dplyr cheatsheet

Much of this lesson was copied or adapted from Jeff Hollister’s materials

Source: https://datacarpentry.org/R-genomics/04-dplyr.html

## Sum Across Multiple Rows and Columns Using dplyr Package in R

In this article, we are going to see how to sum multiple rows and columns using the dplyr package in R Programming Language.

The dplyr package provides tools for data manipulation and transformation. It can be installed into the working space using the following command:

```r
install.packages("dplyr")
```

### Calculating row sums

The is.na() method in R is used to check whether a value is NA or not. This is important since the result of most arithmetic operations involving NA is NA. The replace() method in R can be used to replace values in a data frame; here it is applied over all cells of the input data frame, swapping in a 0 wherever an NA is found.

Syntax: replace(data, condition, replacement-value)

The mutate() method is then applied over the output data frame to modify its structure. New columns can be added or existing columns modified. A new column name can be mentioned in the method argument and assigned to a pre-defined R function.

Syntax: mutate(new-col-name = rowSums(.))

The rowSums() method is used to calculate the sum of each row and then append the value at the end of each row under the new column name specified. The argument . is used to apply the function over all the cells of the data frame.

Syntax: rowSums(.)
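Putting the pieces above together, the row-sum pipeline (using the same example data as earlier on this page) looks like:

```r
library(dplyr)

data <- data.frame(x1 = 1:5,
                   x2 = c(NA, 5, 1, 1, NA),
                   x3 = 9:5,
                   x4 = c(4, 1, NA, 2, 8))

data %>%
  replace(is.na(.), 0) %>%   # swap every NA for 0
  mutate(sum = rowSums(.))   # append the row sums as a new column
```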

### Calculating column sums

The NA values, if present, can be removed from the data frame using the replace() method in R. Successively, the data frame is then subjected to a method summarise_all() which is applied to every variable in the data frame. It takes as argument the function sum to calculate the sum over each column of the data frame.

Syntax: summarise_all(sum)
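The corresponding column-sum pipeline, again with the example data from earlier on this page:

```r
library(dplyr)

data <- data.frame(x1 = 1:5,
                   x2 = c(NA, 5, 1, 1, NA),
                   x3 = 9:5,
                   x4 = c(4, 1, NA, 2, 8))

data %>%
  replace(is.na(.), 0) %>%  # remove the NAs first
  summarise_all(sum)        # then sum every column
#   x1 x2 x3 x4
# 1 15  7 35 15
```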

Source: https://www.geeksforgeeks.org/sum-across-multiple-rows-and-columns-using-dplyr-package-in-r/?ref=rp
## R: sum columns/rows in data frames – dplyr

It’s often useful to perform the same operation on multiple columns, but copying and pasting is both tedious and error prone:

(If you’re trying to compute sums for each row, instead see vignette("rowwise").)

This vignette will introduce you to the across() function, which lets you rewrite the previous code more succinctly:

We’ll start by discussing the basic usage of across(), particularly as it applies to summarise(), and show how to use it with multiple functions. We’ll then show a few uses with other verbs. We’ll finish off with a bit of history, showing why we prefer across() to our last approach (the _if, _at and _all functions) and how to translate your old code to the new syntax.

### Basic usage

across() has two primary arguments:

• The first argument, .cols, selects the columns you want to operate on. It uses tidy selection (like select()) so you can pick variables by position, name, and type.

• The second argument, .fns, is a function or list of functions to apply to each column. This can also be a purrr-style formula (or list of formulas) like ~ .x / 2. (This argument is optional, and you can omit it if you just want to get the underlying data; you’ll see that technique used in vignette("rowwise").)

Here are a couple of examples of across() in conjunction with its favourite verb, summarise(). But you can use across() with any dplyr verb, as you’ll see a little later.

Because across() is usually used in combination with summarise() and mutate(), it doesn’t select grouping variables in order to avoid accidentally modifying them:
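The vignette’s own examples use the starwars dataset; here is a minimal sketch with mtcars instead. Note that the grouping variable cyl is numeric but is not touched by across():

```r
library(dplyr)

# Apply one function to several columns at once
mtcars %>%
  group_by(cyl) %>%
  summarise(across(c(mpg, wt), mean))

# Tidy selection also works by type; cyl is excluded as a grouping variable
mtcars %>%
  group_by(cyl) %>%
  summarise(across(where(is.numeric), mean))
```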

### Multiple functions

You can transform each variable with more than one function by supplying a named list of functions or lambda functions in the second argument:

Control how the names are created with the .names argument, which takes a glue spec:
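A sketch of both ideas, a named list of functions and a custom .names glue spec, again using mtcars in place of the vignette’s starwars data:

```r
library(dplyr)

min_max <- list(
  min = ~ min(.x, na.rm = TRUE),
  max = ~ max(.x, na.rm = TRUE)
)

# Default names follow "{.col}_{.fn}", e.g. mpg_min, mpg_max
mtcars %>% summarise(across(c(mpg, wt), min_max))

# Custom names via a glue spec: min.mpg, max.mpg, ...
mtcars %>% summarise(across(c(mpg, wt), min_max, .names = "{.fn}.{.col}"))
```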

If you’d prefer all summaries with the same function to be grouped together, you’ll have to expand the calls yourself:

(One day this might become an argument to across() but we’re not yet sure how it would work.)

We cannot however use where(is.numeric) in that last case because the second across() would pick up the variables that were newly created (“min_height”, “min_mass” and “min_birth_year”).

We can work around this by combining both calls to across() into a single expression that returns a tibble:

Alternatively we could reorganize results with relocate():

### Current column

If you need to, you can access the name of the “current” column inside across() by calling cur_column(). This can be useful if you want to perform some sort of context-dependent transformation that’s already encoded in a vector:
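A sketch of the pattern: a per-column multiplier is looked up by the current column’s name:

```r
library(dplyr)

df   <- tibble(x = 1:3, y = 3:5, z = 5:7)
mult <- list(x = 1, y = 10, z = 100)

# cur_column() returns the name of the column being transformed
df %>% mutate(across(everything(), ~ .x * mult[[cur_column()]]))
```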

### Gotchas

Be careful when combining numeric summaries with where(is.numeric):

Here n becomes NA because n is numeric, so across() computes its standard deviation, and the standard deviation of 3 (a constant) is NA. You probably want to compute n() last to avoid this problem:

Alternatively, you could explicitly exclude n from the columns to operate on:

Another approach is to combine both the call to n() and across() in a single expression that returns a tibble:
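A small sketch of the gotcha and the reordering fix:

```r
library(dplyr)

df <- data.frame(x = c(1, 2, 3), y = c(1, 4, 9))

# Gotcha: across() also picks up the freshly created numeric column n,
# and the standard deviation of the constant 3 is NA
df %>% summarise(n = n(), across(where(is.numeric), sd))

# Fix: compute n() last
df %>% summarise(across(where(is.numeric), sd), n = n())
```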

### Other verbs

So far we’ve focused on the use of across() with summarise(), but it works with any other dplyr verb that uses data masking:

• Rescale all numeric variables to range 0-1:

For some verbs, like group_by(), count() and distinct(), you can omit the summary functions:

• Find all distinct rows:

• Count all combinations of variables with a given pattern:

across() doesn’t work with select() or rename() because they already use tidy select syntax; if you want to transform column names with a function, you can use rename_with().

### filter()

We cannot directly use across() in filter() because we need an extra step to combine the results. To that end, filter() has two special purpose companion functions:

• if_any() keeps the rows where the predicate is true for at least one selected column:
• if_all() keeps the rows where the predicate is true for all selected columns:
• Find all rows where no variable has missing values:
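A sketch with a small data frame containing NAs:

```r
library(dplyr)

df <- data.frame(x = c(1, NA, 3), y = c(1, 2, NA))

# Rows where at least one column is NA (rows 2 and 3)
df %>% filter(if_any(everything(), is.na))

# Rows where no column is NA (row 1 only)
df %>% filter(if_all(everything(), ~ !is.na(.x)))
```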

### _if, _at, _all

Prior versions of dplyr allowed you to apply a function to multiple columns in a different way: using functions with _if, _at, and _all suffixes. These functions solved a pressing need and are used by many people, but are now superseded. That means that they’ll stay around, but won’t receive any new features and will only get critical bug fixes.

### Why do we like across()?

Why did we decide to move away from these functions in favour of across()?

1. across() makes it possible to express useful summaries that were previously impossible:

2. across() reduces the number of functions that dplyr needs to provide. This makes dplyr easier for you to use (because there are fewer functions to remember) and easier for us to implement new verbs (since we only need to implement one function, not four).

3. across() unifies _if and _at semantics so that you can select by position, name, and type, and you can now create compound selections that were previously impossible. For example, you can now transform all numeric columns whose name begins with “x”: across(where(is.numeric) & starts_with("x")).

4. across() doesn’t need to use vars(). The _at() functions are the only place in dplyr where you have to manually quote variable names, which makes them a little weird and hence harder to remember.

### Why did it take so long to discover across()?

It’s disappointing that we didn’t discover across() earlier, and instead worked through several false starts (first not realising that it was a common problem, then with the summarise_each() family of functions, and most recently with the _if/_at/_all functions). But across() couldn’t work without three recent discoveries:

• You can have a column of a data frame that is itself a data frame. This is something provided by base R, but it’s not very well documented, and it took a while to see that it was useful, not just a theoretical curiosity.

• We can use data frames to allow summary functions to return multiple columns.

• We can use the absence of an outer name as a convention that you want to unpack a data frame column into individual columns.

### How do you convert existing code?

Fortunately, it’s generally straightforward to translate your existing code to use across():

• Strip the _if(), _at() and _all() suffix off the function.

• Call across(). The first argument will be:

1. For _if(), the old second argument wrapped in where().
2. For _at(), the old second argument, with the call to vars() removed.
3. For _all(), everything().

The subsequent arguments can be copied as is.

For example:

There are a few exceptions to this rule:

• select_*() and rename_*() follow a different pattern. They already have select semantics, so are generally used in a different way that doesn’t have a direct equivalent with across(); use the new rename_with() instead.

• Previously, filter() was paired with the all_vars() and any_vars() helpers. The new helpers if_any() and if_all() can be used inside filter() to keep rows for which the predicate is true for at least one, or all selected columns:

• When used in a mutate(), all transformations performed by an across() are applied at once. This is different to the behaviour of mutate_if(), mutate_at(), and mutate_all(), which apply the transformations one at a time. We expect that you’ll generally find the new behaviour less surprising:

Source: https://dplyr.tidyverse.org/articles/colwise.html

## Sum Function in R – sum()

The sum() function in R is used to calculate the sum of vector elements, or the sum of a particular column of a data frame. The sum of a group can also be calculated using sum() inside the aggregate() function. With the dplyr package, sum() can also be used for row wise and column wise sums. Let’s see an example of each.

• sum of the list of vector elements with NA values
• Sum of a particular column of the dataframe in R
• column wise sum of the dataframe using sum() function
• Sum of the group in R dataframe using aggregate() and dplyr package
• Row wise sum of the dataframe in R using sum() function

#### Syntax for sum function :

sum(x, na.rm = FALSE, …)

• x – numeric vector
• na.rm – whether NA values should be removed; if FALSE and NAs are present, NA will be returned

#### Example of sum function in R

The sum of vectors is depicted below.

```r
# R sum function
sum(1:10)                 # [1] 55
sum(c(2, 5, 6, 7, 1, 2))  # [1] 23
```

#### Example of sum function with NA:

The sum() function doesn’t give the desired output if NAs are present in the vector, so they have to be handled by setting na.rm = TRUE in the sum() function.

```r
# sum() function in R for input vector which has NA
x <- c(1.234, 2.342, -4.562, 5.671, 12.345, -14.567, NA)
sum(x, na.rm = TRUE)
# [1] 2.463
```

#### Example of sum() function in R dataframe:

Let’s create a data frame to demonstrate the sum() function in R:

```r
# create the dataframe
my_basket <- data.frame(
  ITEM_GROUP = c("Fruit","Fruit","Fruit","Fruit","Fruit",
                 "Vegetable","Vegetable","Vegetable","Vegetable",
                 "Dairy","Dairy","Dairy","Dairy","Dairy"),
  ITEM_NAME  = c("Apple","Banana","Orange","Mango","Papaya",
                 "Carrot","Potato","Brinjal","Raddish",
                 "Milk","Curd","Cheese","Milk","Paneer"),
  Price = c(100,80,80,90,65,70,60,70,25,60,40,35,50,120),
  Tax   = c(2,4,5,6,2,3,5,1,3,4,5,6,4,3))
my_basket
```

#### Sum of a column in R data frame using sum() function

sum() takes the column as argument and calculates the sum of that particular column.

```r
# sum() function in R : sum of a column in data frame
sum(my_basket$Price)
```

so the resultant sum of “Price” column will be

output:

 945

#### Column wise sum using sum() function

sum() is applied to the required columns through the mapply() function, which calculates the sum of each of those columns as shown below.

```r
# sum() function in R : sum of multiple columns in data frame
mapply(sum, my_basket[, c(-1, -2)])
```

so the resultant sums of the “Price” and “Tax” columns will be 945 and 53 respectively.

#### Sum of the column by group using sum() function

The aggregate() function along with the sum() function calculates the sum within each group; here the sum of the “Price” column for each “ITEM_GROUP” is calculated.

```r
# Sum of the column by group
aggregate(x = my_basket$Price, by = list(my_basket$ITEM_GROUP), FUN = sum)
```

ITEM_GROUP has three groups: “Dairy”, “Fruit” & “Vegetable”. The sum of Price for each group is calculated as shown above.

#### Sum of the column by group and populate it using sum() function

The group_by() function along with the sum() function calculates the sum of each group; here the sum of the “Price” column for each “ITEM_GROUP” is calculated and populated across the rows of that group.

```r
# sum of the column by group and populate it using dplyr
library(dplyr)
my_basket %>%
  group_by(ITEM_GROUP) %>%
  mutate(sum_by_group = sum(Price))
```

ITEM_GROUP has three groups: “Dairy”, “Fruit” & “Vegetable”. The sum of Price for each group is calculated and populated for every row of that group.

#### Row wise sum using sum() function along with dplyr

The row wise sum is calculated with the help of the rowwise() function of the dplyr package and the sum() function as shown below.

```r
# row wise sum using dplyr
library(dplyr)
my_basket %>%
  rowwise() %>%
  mutate(Total_price = sum(c(Price, Tax)))
```

The row wise sum of “Price” and “Tax” is calculated and populated for each row. For further understanding of the sum() function in R with dplyr, one can refer to the dplyr documentation.

#### Related Topics:  Get the Sum of Each Column in R

## Sum across multiple columns with dplyr

### dplyr >= 1.0.0

In newer versions of dplyr you can use rowwise() along with c_across() to perform row-wise aggregation for functions that do not have specific row-wise variants, but if the row-wise variant exists it should be faster.

Since rowwise() is just a special form of grouping and changes the way verbs work, you’ll likely want to pipe it to ungroup() after doing your row-wise operation.

To select a range by name:

To select by type:

To select by column name:

You can use any number of tidy selection helpers like starts_with(), ends_with(), contains(), matches(), etc.

To select by column index:

rowwise() will work for any summary function. However, in your specific case a row-wise variant exists (rowSums()), so you can do the following (note the use of across() instead), which will be faster:
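The answer’s two approaches can be sketched like this; the data frame and column names are hypothetical:

```r
library(dplyr)

df <- data.frame(x = 1:3, y = 4:6, z = 7:9)

# General approach: rowwise() + c_across(), then ungroup()
df %>%
  rowwise() %>%
  mutate(total = sum(c_across(x:z))) %>%
  ungroup()

# Faster, since sum has a row-wise variant: rowSums() over across()
df %>% mutate(total = rowSums(across(x:z)))
```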

Benchmarking

For this example, the row-wise variant takes about half as much time:

c_across versus across

In the particular case of the sum() function, across() and c_across() give the same output for much of the code above:

The row-wise output of c_across() is a vector (hence the c), while the row-wise output of across() is a 1-row object:

The function you want to apply determines which verb you use. As shown above with sum(), you can use them nearly interchangeably. However, mean() and many other common functions expect a (numeric) vector as the first argument:

Ignoring the row-wise variant that exists for mean (rowMeans()), in this case c_across() should be used:

sum(), rowSums(), etc. can take a numeric data frame as the first argument, which is why they work with across().
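To illustrate the difference for mean(), which needs a vector rather than a 1-row data frame (a sketch with hypothetical column names):

```r
library(dplyr)

df <- data.frame(x = 1:3, y = 4:6)

# c_across() returns a vector, so mean() works row by row
df %>%
  rowwise() %>%
  mutate(m = mean(c_across(x:y))) %>%
  ungroup()

# The built-in row-wise variant is faster still
df %>% mutate(m = rowMeans(across(x:y)))
```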

Source: https://stackoverflow.com/questions/28873057/sum-across-multiple-columns-with-dplyr

## Row-wise operations

dplyr, and R in general, are particularly well suited to performing operations over columns, and performing operations over rows is much harder. In this vignette, you’ll learn dplyr’s approach centred around the row-wise data frame created by rowwise().

There are three common use cases that we discuss in this vignette:

• Row-wise aggregates (e.g. compute the mean of x, y, z).
• Calling a function multiple times with varying arguments.
• Working with list-columns.

These types of problems are often easily solved with a for loop, but it’s nice to have a solution that fits naturally into a pipeline.

Of course, someone has to write loops. It doesn’t have to be you. — Jenny Bryan

### Creating

Row-wise operations require a special type of grouping where each group consists of a single row. You create this with rowwise():

Like group_by(), rowwise() doesn’t really do anything itself; it just changes how the other verbs work. For example, compare the results of mutate() in the following code:

If you use mutate() with a regular data frame, it computes the mean of x, y, and z across all rows. If you apply it to a row-wise data frame, it computes the mean for each row.

You can optionally supply “identifier” variables in your call to rowwise(). These variables are preserved when you call summarise(), so they behave somewhat similarly to the grouping variables passed to group_by():

rowwise() is just a special form of grouping, so if you want to remove it from a data frame, just call ungroup().

### Per row summary statistics

summarise() makes it really easy to summarise values across rows within one column. When combined with rowwise() it also makes it easy to summarise values across columns within one row. To see how, we’ll start by making a little dataset:

Let’s say we want to compute the sum of w, x, y, and z for each row. We start by making a row-wise data frame:

We can then use mutate() to add a new column to each row, or summarise() to return just that one summary:

Of course, if you have a lot of variables, it’s going to be tedious to type in every variable name. Instead, you can use c_across() which uses tidy selection syntax so you can succinctly select many variables:

You could combine this with column-wise operations (see vignette("colwise") for more details) to compute the proportion of the total for each column:
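A sketch of the per-row summary pattern described above; the w/x/y/z data frame mirrors the shape used in the vignette:

```r
library(dplyr)

df <- tibble(id = 1:3, w = 1:3, x = 4:6, y = 7:9, z = 10:12)

rf <- df %>% rowwise(id)   # id is preserved as an identifier variable

# Add a per-row total, or return just the summary
rf %>% mutate(total = sum(c_across(w:z)))
rf %>% summarise(total = sum(c_across(w:z)))
```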

### Row-wise summary functions

The rowwise() approach will work for any summary function. But if you need greater speed, it’s worth looking for a built-in row-wise variant of your summary function. These are more efficient because they operate on the data frame as a whole; they don’t split it into rows, compute the summary, and then join the results back together again.

NB: I use df (not rf) and across() (not c_across()) here because rowMeans() and rowSums() take a multi-row data frame as input.

### List-columns

rowwise() operations are a natural pairing when you have list-columns. They allow you to avoid explicit loops and/or functions from the apply() or purrr::map() families.

### Motivation

Imagine you have this data frame, and you want to count the lengths of each element:

You might try calling length():

But that returns the length of the column, not the length of the individual values. If you’re an R documentation aficionado, you might know there’s already a base R function just for this purpose: lengths():

Or if you’re an experienced R programmer, you might know how to apply a function to each element of a list using sapply(), vapply(), or one of the purrr map() functions:

But wouldn’t it be nice if you could just write length(x) and dplyr would figure out that you wanted to compute the length of the element inside of x? Since you’re here, you might already be guessing at the answer: this is just another application of the row-wise pattern.

### Subsetting

Before we continue on, I wanted to briefly mention the magic that makes this work. This isn’t something you’ll generally need to think about (it’ll just work), but it’s useful to know about when something goes wrong.

There’s an important difference between a grouped data frame where each group happens to have one row, and a row-wise data frame where every group always has one row. Take these two data frames:

If we compute some properties of y, you’ll notice the results look different:

The key difference is that when mutate() slices up the columns to pass to the underlying function the grouped mutate uses [ and the row-wise mutate uses [[. The following code gives a flavour of the differences if you used a for loop:

Note that this magic only applies when you’re referring to existing columns, not when you’re creating new rows. This is potentially confusing, but we’re fairly confident it’s the least worst solution, particularly given the hint in the error message.

### Modelling

rowwise() data frames allow you to solve a variety of modelling problems in what I think is a particularly elegant way. We’ll start by creating a nested data frame:

This is a little different to the usual output: we have visibly changed the structure of the data. Now we have three rows (one for each group), and we have a list-col, data, that stores the data for that group. Also note that the output is rowwise(); this is important because it’s going to make working with that list of data frames much easier.

Once we have one data frame per row, it’s straightforward to make one model per row:

And supplement that with one set of predictions per row:

You could then summarise the model in a variety of ways:

Or easily access the parameters of each model:
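A sketch of this workflow using nest_by() with mtcars; the vignette itself fits a similar per-group linear model:

```r
library(dplyr)

# One row per cylinder count, with the group's data in a list-column
by_cyl <- mtcars %>% nest_by(cyl)

# One lm() per row; list() wraps the model object
models <- by_cyl %>% mutate(model = list(lm(mpg ~ wt, data = data)))

# Pull a summary statistic out of each model
models %>% summarise(rsq = summary(model)$r.squared)
```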

### Repeated function calls

mutate() doesn’t just work with functions that return a length-1 vector (aka summary functions); it can work with any function if the result is a list. This means that rowwise() and mutate() provide an elegant way to call a function many times with varying arguments, storing the outputs alongside the inputs.

### Simulations

I think this is a particularly elegant way to perform simulations, because it lets you store simulated values along with the parameters that generated them. For example, imagine you have the following data frame that describes the properties of 3 samples from the uniform distribution:

You can supply these parameters to runif() by using rowwise() and mutate():

Note the use of list() here: runif() returns multiple values and a mutate() expression has to return something of length 1. list() means that we’ll get a list column where each row is a list containing multiple values. If you forget to use list(), dplyr will give you a hint:
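A sketch of the simulation pattern: each row carries its own n, min, and max, and list() stores the simulated vector alongside them:

```r
library(dplyr)

df <- tibble(
  id  = 1:3,
  n   = c(1, 2, 3),
  min = c(0, 10, 100),
  max = c(1, 20, 200)
)

sims <- df %>%
  rowwise() %>%
  mutate(sim = list(runif(n, min, max))) %>%  # one simulated vector per row
  ungroup()
sims
```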

### Multiple combinations

What if you want to call a function for every combination of inputs? You can use expand.grid() (or tidyr::expand_grid()) to generate the data frame and then repeat the same pattern as above:

### Varying functions

In more complicated problems, you might also want to vary the function being called. This tends to be a bit more of an awkward fit with this approach because the columns in the input tibble will be less regular. But it’s still possible, and it’s a natural place to use do.call():

### Previously

rowwise() was also questioning for quite some time, partly because I didn’t appreciate how many people needed the native ability to compute summaries across multiple variables for each row. As an alternative, we recommended performing row-wise operations with the purrr map() functions. However, this was challenging because you needed to pick a map function based on the number of arguments that were varying and the type of result, which required quite some knowledge of purrr functions.

I was also resistant to rowwise() because I felt like automatically switching between [ to [[ was too magical in the same way that automatically list()-ing results made do() too magical. I’ve now persuaded myself that the row-wise magic is good magic partly because most people find the distinction between [ and [[ mystifying and rowwise() means that you don’t need to think about it.

Since rowwise() clearly is useful it is no longer questioning, and we expect it to be around for the long term.

We’ve questioned the need for do() for quite some time, because it never felt very similar to the other dplyr verbs. It had two main modes of operation:

• Without argument names: you could call functions that input and output data frames using . to refer to the “current” group. For example, the following code gets the first row of each group:

This has been superseded by cur_data() plus the more permissive summarise() which can now create multiple columns and multiple rows.

• With arguments: it worked like mutate() but automatically wrapped every element in a list:

I now believe that behaviour is both too magical and not very useful, and it can be replaced by summarise() and cur_data().

If needed (unlike here), you can wrap the results in a list yourself.

The addition of cur_data()/across() and the increased scope of summarise() means that do() is no longer needed, so it is now superseded.

Source: https://dplyr.tidyverse.org/articles/rowwise.html
