I'm having trouble rearranging the following data frame:

dat1 <- data.frame(
    name = rep(c("firstName", "secondName"), each=4),
    numbers = rep(1:4, 2),
    value = rnorm(8)
)

       name  numbers      value
1  firstName       1  0.3407997
2  firstName       2 -0.7033403
3  firstName       3 -0.3795377
4  firstName       4 -0.7460474
5 secondName       1 -0.8981073
6 secondName       2 -0.3347941
7 secondName       3 -0.5013782
8 secondName       4 -0.1745357

I want to reshape it so that each unique "name" is a row name, with the "value" entries as observations along that row and the "numbers" as column names. Sort of like this:

        name          1          2          3          4
1  firstName  0.3407997 -0.7033403 -0.3795377 -0.7460474
5 secondName -0.8981073 -0.3347941 -0.5013782 -0.1745357

I've looked at melt and cast and a few other things, but none seem to do the job.

2 upvote
@Frank: this is a much better title. long-form and wide-form are the standard terms used. The other answer cannot be found by searching on those terms. – smci
2 upvote
@smci Having a better title is not a good reason not to link the questions, is it? If the questions are essentially the same, it's better for future visitors that they be linked so that all the answers can be found easily. You could mark the duplicate in the other direction instead, I suppose... Also, I do not know why you have made new tags here. – Frank
@smci I'm going to go ahead and get rid of these tags. That's the consensus (of two) in the R chat room, but you can take it up there or on meta if you disagree. – Frank
^^ @Frank - this answer cannot be found by searching on the terms users are likely to use. It's not a question of aesthetics. – smci
1 upvote
@smci By "this answer" you mean... the one I linked to? Yeah, I agree that it is unlikely but someone might land on one or the other in their search, and we want that person to find all the relevant answers (here and there). It's the exact same question so the answers in both places are relevant -- that's why it should be marked as a dupe. The titles don't matter; and which one is a dupe of the other doesn't matter either (as far as I can think of). If you're talking about the tags, maybe you should take it to meta...? – Frank

8 Answers

You can do this with the reshape() function, or with the melt() / cast() functions in the reshape package. For the second option, example code is

cast(dat1, name ~ numbers)

Or using reshape2

dcast(dat1, name ~ numbers)
thank you! I can't believe I didn't see that - I just looked at 'cast' before posting, but couldn't get it to work how I wanted. – Steve
7 upvote
+1 And use reshape2 for a performance gain. – Andrie
It might be worth noting that just using cast or dcast will not work nicely if you don't have a clear "value" column. Try dat <- data.frame(id=c(1,1,2,2),blah=c(8,4,7,6),index=c(1,2,1,2)); dcast(dat, id ~ index); cast(dat, id ~ index) and you will not get what you expect. You need to explicitly note the value/value.var - cast(dat, id ~ index, value="blah") and dcast(dat, id ~ index, value.var="blah") for instance. – thelatemail

Using your example dataframe, we could:

xtabs(value ~ name + numbers, data = dat1)
this one is good, but the result is of class "table", which may not be as easy to handle as a data.frame or data.table, both of which have plenty of supporting packages – cloudscomputes
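If a plain data.frame is needed rather than the table object xtabs() returns, one possible conversion (a sketch using only base R) is:

```r
# xtabs() returns a contingency table; as.data.frame.matrix() turns it into a
# plain data.frame, keeping the "name" labels as rownames
tab <- xtabs(value ~ name + numbers, data = dat1)
df  <- as.data.frame.matrix(tab)
df$name <- rownames(df)   # optional: promote rownames to a regular column
```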
138 upvote · accepted

Using reshape function:

reshape(dat1, idvar = "name", timevar = "numbers", direction = "wide")
9 upvote
+1 and you don't need to rely on external packages, since reshape comes with stats. Not to mention that it's faster! =) – aL3xa
82 upvote
Good luck figuring out the arguments you need though – hadley
2 upvote
@hadley, yepp... that's true! =) – aL3xa
@indra_patil - I would likely use the reshape2 package as indicated in one of the other answers. You could create a new question that's specific to your use case and post it if you can't figure it out. – Chase
it seems it will create duplicated "name" columns, which may or may not be what the author of this thread wants – cloudscomputes
reshape is an outstanding example of a horrible function API. It is very close to useless. – NoBackingDown
The reshape comments and similar argument names aren't all that helpful. However, I have found that for long to wide, you need to provide data = your data.frame, idvar = the variable that identifies your groups, v.names = the variables that will become multiple columns in wide format, timevar = the variable containing the values that will be appended to v.names in wide format, direction = wide, and sep = "_". Clear enough? ;) – Brian D
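Putting those arguments together for the example data gives a call like this (a sketch; v.names and sep are optional here, since there is only one value column):

```r
# long -> wide with base R's reshape(), spelling out each argument
reshape(dat1,
        idvar     = "name",     # variable identifying the groups (one row each)
        v.names   = "value",    # column(s) that become multiple wide columns
        timevar   = "numbers",  # values appended to v.names for column names
        direction = "wide",
        sep       = "_")        # yields columns value_1 ... value_4
```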

The new (in 2014) tidyr package also does this simply, with gather()/spread() as its equivalents of melt/cast.

spread(dat1, key = numbers, value = value)

From its GitHub page:

tidyr is a reframing of reshape2 designed to accompany the tidy data framework, and to work hand-in-hand with magrittr and dplyr to build a solid pipeline for data analysis.

Just as reshape2 did less than reshape, tidyr does less than reshape2. It's designed specifically for tidying data, not the general reshaping that reshape2 does, or the general aggregation that reshape did. In particular, built-in methods only work for data frames, and tidyr provides no margins or aggregation.
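Note that in tidyr 1.0 (2019) and later, spread()/gather() are superseded by pivot_wider()/pivot_longer(); the equivalent call is roughly:

```r
library(tidyr)
# names_from supplies the new column names, values_from fills the cells
pivot_wider(dat1, names_from = numbers, values_from = value)
```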

Just wanted to add a link to the R Cookbook page that discusses the use of these functions from tidyr and reshape2. It provides good examples and explanations. – Jake

Two other options:

Base package:

df <- unstack(dat1, form = value ~ numbers)
rownames(df) <- unique(dat1$name)

sqldf package:

sqldf('SELECT name,
      MAX(CASE WHEN numbers = 1 THEN value ELSE NULL END) x1, 
      MAX(CASE WHEN numbers = 2 THEN value ELSE NULL END) x2,
      MAX(CASE WHEN numbers = 3 THEN value ELSE NULL END) x3,
      MAX(CASE WHEN numbers = 4 THEN value ELSE NULL END) x4
      FROM dat1
      GROUP BY name')

Another option if performance is a concern is to use data.table's extension of reshape2's melt & dcast functions

(Reference: Efficient reshaping using data.tables)


library(data.table)
setDT(dat1)   # := (used below) and data.table's dcast need a data.table

dcast(dat1, name ~ numbers, value.var = "value")

#          name          1          2         3         4
# 1:  firstName  0.1836433 -0.8356286 1.5952808 0.3295078
# 2: secondName -0.8204684  0.4874291 0.7383247 0.5757814

And, as of data.table v1.9.6 we can cast on multiple columns

## add an extra column
dat1[, value2 := value * 2]

## cast multiple value columns
dcast(dat1, name ~ numbers, value.var = c("value", "value2"))

#          name    value_1    value_2   value_3   value_4   value2_1   value2_2 value2_3  value2_4
# 1:  firstName  0.1836433 -0.8356286 1.5952808 0.3295078  0.3672866 -1.6712572 3.190562 0.6590155
# 2: secondName -0.8204684  0.4874291 0.7383247 0.5757814 -1.6409368  0.9748581 1.476649 1.1515627
1 upvote
data.table approach is the best ! very efficient ... you will see the difference when name is a combination of 30-40 columns !! – joel.wilson
2 upvote
I personally think this is the best answer – cloudscomputes

Using base R aggregate function:

aggregate(value ~ name, dat1, I)

# name           value.1  value.2  value.3  value.4
#1 firstName      0.4145  -0.4747   0.0659   -0.5024
#2 secondName    -0.8259   0.1669  -0.8962    0.1681
why use the paste function instead of the I (identity function)? – Onyambu

There's a very powerful new package from the genius data scientists at Win-Vector (the folks who made vtreat, seplyr and replyr) called cdata. It implements the "coordinated data" principles described in this document and also in this blog post. The idea is that regardless of how you organize your data, it should be possible to identify individual data points using a system of "data coordinates". Here's an excerpt from a recent blog post by John Mount:

The whole system is based on two primitives or operators cdata::moveValuesToRowsD() and cdata::moveValuesToColumnsD(). These operators have pivot, un-pivot, one-hot encode, transpose, moving multiple rows and columns, and many other transforms as simple special cases.

It is easy to write many different operations in terms of the cdata primitives. These operators can work in-memory or at big data scale (with databases and Apache Spark; for big data, use the cdata::moveValuesToRowsN() and cdata::moveValuesToColumnsN() variants). The transforms are controlled by a control table that is itself a diagram of (or picture of) the transform.

We will first build the control table (see blog post for details) and then perform the move of data from rows to columns.

# first build the control table
pivotControlTable <- buildPivotControlTableD(table = dat1, # reference to dataset
                        columnToTakeKeysFrom = 'numbers', # this will become column headers
                        columnToTakeValuesFrom = 'value', # this contains data
                        sep="_")                          # optional for making column names

# perform the move of data to columns
dat_wide <- moveValuesToColumnsD(tallTable = dat1,    # reference to dataset
                    keyColumns = c('name'),           # this(these) column(s) stay untouched
                    controlTable = pivotControlTable) # control table built above

#>         name  numbers_1  numbers_2  numbers_3  numbers_4
#> 1  firstName  0.3407997 -0.7033403 -0.3795377 -0.7460474
#> 2 secondName -0.8981073 -0.3347941 -0.5013782 -0.1745357
