I have annual financial data for several stocks that I need to blow out to a monthly time series, so I can compare stocks with differing year-ends and potentially compute some monthly metrics against monthly prices I have elsewhere.

My annual data looks like this (dput below):

  date      |    value   |  sec_id  |  metric
-----------------------------------------------
2012-06-29  |   2.44376  |   1676   |  ROE
2013-06-28  |   1.73054  |   1676   |  ROE
2014-06-30  |   1.68171  |   1676   |  ROE
2012-05-31  |   2.07297  |   1572   |  ROE
2013-05-30  |   1.77073  |   1572   |  ROE 
2012-06-29  |   5.44376  |   1676   |  EPS_GROWTH
2013-06-28  |   -0.3054  |   1676   |  EPS_GROWTH
2014-06-30  |   3.68171  |   1676   |  EPS_GROWTH
2012-05-31  |   -1.7297  |   1572   |  EPS_GROWTH
2013-05-30  |   6.77073  |   1572   |  EPS_GROWTH

What I want is to roll forward all the data points to appear on a monthly basis:

  date      |    value   |  sec_id  |  metric
-------------------------------------------------
2012-06-29  |   2.44376  |   1676   |  ROE
2012-07-31  |   2.44376  |   1676   |  ROE
2012-08-31  |   2.44376  |   1676   |  ROE
  ...             ...
2013-05-31  |   2.44376  |   1676   |  ROE
2013-06-28  |   1.73054  |   1676   |  ROE
  ...             ...

and so on for each security/metric combination. Complicating matters, the year-ends are not consistent: one of the companies switched from a May year-end to a June year-end in 2000, for example.

Does anyone know a slick, non-for-loop way of doing this in R?

Note that this is an expanded version of a similar (simpler) question I'd asked before. The best answer there used the purrr package, but the best ideas I can come up with here involve nested for-loops.

Here's a toy version of my df data frame:

library(tidyverse) # because it's a tibble with lubridate dates
df = structure(list(sec_id = c(1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L, 1676L), 
date = structure(c(6908, 7273, 7638, 8002, 8369, 8734, 9099, 9464, 9829, 10193, 10560, 10925, 11291, 11656, 12020, 12384, 12752, 13117, 13482, 13847, 14211, 14578, 14943, 15308, 15674, 16038, 16402, 16769, 17135, 6784, 7151, 7516, 7881, 8247, 8611, 8975, 9342, 9708, 10073, 10438, 10802, 11200, 11565, 11929, 12293, 12661, 13026, 13391, 13756, 14120, 14487, 14852, 15217, 15583, 15947, 16311, 16678, 17044, 6175, 6542, 6908, 7273, 7638, 8002, 8369, 8734, 9099,  9464, 9829, 10193, 10560, 10925, 11291, 11656, 12020, 12384, 12752, 13117, 13482, 13847, 14211, 14578, 14943, 15308, 15674, 16038, 16402, 16769, 17135, 6420, 6784, 7151, 7516, 7881, 8247, 8611, 8975, 9342, 9708, 10073, 10438, 10802, 11200, 11565, 11929, 12293, 12661, 13026, 13391, 13756, 14120, 14487, 14852, 15217, 15583, 15947, 16311, 16678, 17044), class = "Date"),    
value = c(0.291022, 0.197242, -0.470581, -0.406716, -3.294938, 1.22421, 3.140976, 0.250468, 0.091548, -0.100863, 0.058375, 0.24784, 0.178765, 0.099276, 0.25472, -0.033291, 0.124165, 0.050947, 0.243008, 0.1205, -0.239625, -0.231221, 0.365649, 0.163779, 0.024976, 0.08388, 0.154777, 0.016473, -0.272928, 0.237446, 0.391304, 0.064583, 0.286497, -0.571342, 1.05039, -0.001038, -0.018711, -0.162076, -0.599241, -4.071504, -0.37761, 1.694085, 0.045113, -0.064748, 0.098901, 0.823333, 0.793419, 0.759225, 0.329818, 0.199564, -0.616418, 1.164773, 0.877078, -0.325099, -0.294199, 0.272016, -0.706077, -2.57027, 0.129316, 0.131234, 0.143554, 0.131058, 0.099229, 0.057275, -0.121562, 0.051187, 0.15589, 0.188605, 0.186155, 0.154765, 0.153816, 0.166873, 0.181998, 0.183937, 0.211225, 0.186216, 0.177731, 0.161529, 0.187527, 0.199608, 0.136866, 0.10001, 0.13546, 0.148756, 0.143177, 0.144643, 0.153461, 0.140774, 0.099231, 0.117254, 0.141094, 0.166662, 0.154047, 0.172634, 0.062807, 0.145724, 0.13481, 0.11031, 0.087178, 0.033243, -0.122207, -0.22856, 0.171272, 0.187948, 0.186266, 0.154494, 0.238273, 0.352929, 0.494341, 0.50497, 0.459309, 0.178825, 0.270483, 0.43428, 0.254988, 0.149881, 0.187119, 0.048877, -0.111647), 
metric = c("EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "EPS_GROWTH", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE", "ROE")), 
class = c("tbl_df", "tbl", "data.frame"), 
row.names = c(NA, -119L), .Names = c("sec_id", "date", "value", "metric"))

EDIT: My solution so far (takes about ten minutes given my ~100k rows)

library(tidyverse)
library(lubridate)   # for ceiling_date()
if (exists("factors.monthly")) rm(factors.monthly)
for (s in unique(df$sec_id)) {
    for (m in unique(df$metric)) {
        # cat(s, " : ", m, "\n")
        x = df %>%
            filter(sec_id == s, metric == m) %>%
            mutate(metric = as.character(metric)) %>%
            select(sec_id, date, value, metric)
        if (nrow(x) > 0) {
            y = x %>%
                mutate(date = ceiling_date(date, 'month'),
                       date = map2(date, lead(date - 1, default = last(date)), seq, by = 'month')) %>%
                unnest() %>%
                mutate(date = date - 1)

            if (exists("factors.monthly")) {
                factors.monthly = rbind(factors.monthly, y)
            } else {
                factors.monthly = y
            }
        }
    }
}
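
Most of the runtime here is likely the rbind() inside the nested loops, which re-copies everything accumulated in factors.monthly on every iteration. A minimal restructuring of the same loop (my own sketch, not part of the original post): collect each piece in a list and bind once at the end.

# Sketch only: same per-group expansion as the loop above, but pieces are
# collected in a list and bound once, instead of growing factors.monthly
# with rbind() on every pass.
library(tidyverse)
library(lubridate)

pieces = list()
for (s in unique(df$sec_id)) {
    for (m in unique(df$metric)) {
        x = df %>% filter(sec_id == s, metric == m)
        if (nrow(x) > 0) {
            pieces[[paste(s, m, sep = "_")]] = x %>%
                mutate(date = ceiling_date(date, 'month'),
                       date = map2(date, lead(date - 1, default = last(date)), seq, by = 'month')) %>%
                unnest() %>%
                mutate(date = date - 1)
        }
    }
}
factors.monthly = bind_rows(pieces)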

1 Answer

I think this is what you want. It should be plenty fast.

df %>%
    group_by(sec_id, metric) %>%    # expand each security/metric series separately
    mutate(date = ceiling_date(date, 'month'),
           # monthly sequence from each month start up to the day before the next one
           date = map2(date, lead(date - 1, default = last(date)), seq, by = 'month')) %>%
    unnest() %>%                    # one row per generated month
    mutate(date = date - 1) %>%     # step back one day to land on month-ends
    arrange(sec_id, metric, date)
# Source: local data frame [1,386 x 4]
# Groups: sec_id, metric [4]
# 
#    sec_id    value     metric       date
#     <int>    <dbl>      <chr>     <date>
# 1    1572 0.291022 EPS_GROWTH 1988-11-30
# 2    1572 0.291022 EPS_GROWTH 1988-12-31
# 3    1572 0.291022 EPS_GROWTH 1989-01-31
# 4    1572 0.291022 EPS_GROWTH 1989-02-28
# 5    1572 0.291022 EPS_GROWTH 1989-03-31
# 6    1572 0.291022 EPS_GROWTH 1989-04-30
# 7    1572 0.291022 EPS_GROWTH 1989-05-31
# 8    1572 0.291022 EPS_GROWTH 1989-06-30
# 9    1572 0.291022 EPS_GROWTH 1989-07-31
# 10   1572 0.291022 EPS_GROWTH 1989-08-31
# # ... with 1,376 more rows
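
The key step is the ceiling_date() / map2() / seq() combination: each observation date is rounded up to the next month start, expanded into a monthly sequence that stops just before the following observation's month start, and then shifted back one day so every generated date lands on a month-end. A toy illustration of one group, using made-up dates (my own, not part of the answer):

# Illustration only: expand two annual observation dates into monthly runs.
library(dplyr)     # lead(), last()
library(purrr)     # map2()
library(lubridate) # ceiling_date()

d      = as.Date(c("2012-06-29", "2013-06-28"))    # two annual report dates
starts = ceiling_date(d, 'month')                  # 2012-07-01, 2013-07-01
stops  = lead(starts - 1, default = last(starts))  # day before the next start
runs   = map2(starts, stops, seq, by = 'month')    # monthly runs of month starts
lapply(runs, function(r) r - 1)                    # shift back to month-ends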

I'd recommend reading some basic dplyr documentation; see especially the "Grouped Operations" section of the package's introduction vignette.
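
For completeness, another idiom that often comes up for this kind of last-observation-carried-forward problem (a sketch of my own, not from the answer above): build a full monthly grid per sec_id/metric with tidyr's complete() and carry the values forward with fill(). Note that it indexes by month starts rather than the month-end report dates used above.

# Sketch: monthly grid per security/metric, last observation carried forward.
library(tidyverse)
library(lubridate)

df %>%
    mutate(month = floor_date(date, 'month')) %>%
    group_by(sec_id, metric) %>%
    complete(month = seq(min(month), max(month), by = 'month')) %>%
    fill(date, value, .direction = 'down') %>%
    ungroup()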

