Epidemiology: How contagious is Novel Coronavirus (2019-nCoV)?


A new invisible enemy, with a genome of only about 30 kilobases, has emerged and is on a killing spree around the world: 2019-nCoV, the Novel Coronavirus!

It has already killed more people than the SARS epidemic, and its outbreak has been declared a Public Health Emergency of International Concern (PHEIC) by the World Health Organization (WHO).

If you want to learn how epidemiologists estimate how contagious a new virus is, and how to do it in R, read on!

There are many epidemiological models around; we will use one of the simplest here, the so-called SIR model. We will feed this model with the latest data from the current outbreak of 2019-nCoV (source: Wikipedia: Case statistics). You can use the following R code as a starting point for your own experiments and estimations.

Before we start to calculate a forecast let us begin with what is confirmed so far:

Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515, 5974, 7711, 9692, 11791, 14380, 17205, 20440) # cumulative confirmed cases
Day <- 1:(length(Infected))
N <- 1400000000 # population of mainland china

old <- par(mfrow = c(1, 2)) # two plots side by side
plot(Day, Infected, type = "b")   # linear scale
plot(Day, Infected, log = "y")    # semi-log plot
abline(lm(log10(Infected) ~ Day)) # straight-line fit on the log scale
title("Confirmed Cases 2019-nCoV China", outer = TRUE, line = -2)

On the left, we see the confirmed cases in mainland China and on the right the same but with a log scale on the y-axis (a so-called semi-log plot or more precisely log-linear plot here), which indicates that the epidemic is in an exponential phase, although at a slightly smaller rate than at the beginning. By the way: many people were not alarmed at all at the beginning. Why? Because an exponential function looks linear in the beginning. It was the same with HIV/AIDS when it first started.
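As a quick check of the growth rate, here is a small sketch (using the Infected and Day vectors from above) that estimates the doubling time from the same log-linear regression; the exact value will of course shift as new data points come in:

growth <- lm(log2(Infected) ~ Day)        # slope = doublings per day
doubling_time <- 1 / coef(growth)["Day"]  # days until the case count doubles
doubling_time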

Now we come to the prediction part with the SIR model, whose basic idea is quite simple. There are three groups of people: those that are healthy but susceptible to the disease (S), the infected (I) and the people who have recovered (R):

(Diagram of the S → I → R compartments. Source: Wikimedia)

To model the dynamics of the outbreak we need three differential equations, one for the change in each group, where \beta is the parameter that controls the transition between S and I, and \gamma is the parameter that controls the transition between I and R:

    \[\frac{dS}{dt} = - \frac{\beta I S}{N}\]

    \[\frac{dI}{dt} = \frac{\beta I S}{N}- \gamma I\]

    \[\frac{dR}{dt} = \gamma I\]

This can easily be put into R code:

SIR <- function(time, state, parameters) {
  par <- as.list(c(state, parameters))
  with(par, {
    dS <- -beta/N * I * S            # susceptible -> infected
    dI <- beta/N * I * S - gamma * I # net change of the infected
    dR <- gamma * I                  # infected -> recovered
    list(c(dS, dI, dR))
  })
}

To fit the model to the data we need two things: a solver for differential equations and an optimizer. To solve differential equations, the function ode from the deSolve package (on CRAN) is an excellent choice; to optimize, we will use the optim function from base R. Concretely, we will minimize the sum of the squared differences between the number of infected I at time t and the corresponding number of cases predicted by our model \hat{I}(t):

    \[RSS(\beta, \gamma) = \sum_{t} \left( I(t)-\hat{I}(t) \right)^2\]

Putting it all together:

library(deSolve)
init <- c(S = N - Infected[1], I = Infected[1], R = 0) # initial state on Day 1
RSS <- function(parameters) {
  names(parameters) <- c("beta", "gamma")
  out <- ode(y = init, times = Day, func = SIR, parms = parameters)
  fit <- out[ , 3] # the third column of the ode output is the infected compartment I
  sum((Infected - fit)^2)
}

Opt <- optim(c(0.5, 0.5), RSS, method = "L-BFGS-B", lower = c(0, 0), upper = c(1, 1)) # optimize with some sensible conditions
Opt$message
## [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"

Opt_par <- setNames(Opt$par, c("beta", "gamma"))
Opt_par
##      beta     gamma 
## 0.6746089 0.3253912

t <- 1:70 # time in days
fit <- data.frame(ode(y = init, times = t, func = SIR, parms = Opt_par))
col <- 1:3 # colour

matplot(fit$time, fit[ , 2:4], type = "l", xlab = "Day", ylab = "Number of subjects", lwd = 2, lty = 1, col = col)
matplot(fit$time, fit[ , 2:4], type = "l", xlab = "Day", ylab = "Number of subjects", lwd = 2, lty = 1, col = col, log = "y")
## Warning in xy.coords(x, y, xlabel, ylabel, log = log): 1 y value <= 0
## omitted from logarithmic plot

points(Day, Infected)
legend("bottomright", c("Susceptibles", "Infecteds", "Recovereds"), lty = 1, lwd = 2, col = col, inset = 0.05)
title("SIR model 2019-nCoV China", outer = TRUE, line = -2)

We see in the right log-linear plot that the model seems to fit the values quite well. We can now extract some interesting statistics. One important number is the so-called basic reproduction number (also basic reproduction ratio) R_0 (pronounced “R naught”), which gives the average number of healthy people that get infected by one sick person:

    \[R_0 = \frac{\beta}{\gamma}\]

par(old)

R0 <- setNames(Opt_par["beta"] / Opt_par["gamma"], "R0")
R0
##       R0 
## 2.073224

fit[fit$I == max(fit$I), "I", drop = FALSE] # height of pandemic
##            I
## 50 232001865

max(fit$I) * 0.02 # max deaths with supposed 2% fatality rate
## [1] 4640037

So, R_0 is slightly above 2, which is the number many researchers and the WHO give, and which is in the same range as SARS, influenza or Ebola (although transmission of Ebola is via bodily fluids and not airborne droplets). Additionally, according to this model, the height of a possible pandemic would be reached by the beginning of March (50 days after it started) with over 200 million Chinese infected and over 4 million dead!
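As a quick plausibility check, we can translate the fitted peak day into a calendar date; here I assume that Day 1 corresponds to 2020-01-16, the first data point used above:

peak_day <- fit[which.max(fit$I), "time"] # day of the fitted maximum of I
as.Date("2020-01-16") + peak_day - 1      # assumption: Day 1 = 2020-01-16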

Do not panic! All of this is preliminary and hopefully (probably!) false. When you play along with the above model you will see that the fitted parameters are far from stable. On the one hand, the purpose of this post was just to give an illustration of how such analyses are done in general with a very simple (probably too simple!) model; on the other hand, we are in good company here; the renowned scientific journal Nature writes:

Researchers are struggling to accurately model the outbreak and predict how it might unfold.

On the other hand, I wouldn’t go so far as to say that the numbers are impossibly high. H1N1, also known as swine flu, infected up to 1.5 billion people during 2009/2010 and nearly 600,000 died. And this wasn’t the first pandemic of this proportion in history (think of the Spanish flu). Yet, this is one of the few times where I hope that my model is wrong and that we will all stay healthy!
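If you want to see the instability of the fitted parameters for yourself, here is a minimal sketch (all objects as defined above) that simply refits the model from several different starting values; if the columns differ noticeably, the parameters are poorly identified:

starts <- list(c(0.5, 0.5), c(0.9, 0.1), c(0.1, 0.9)) # different starting values
sapply(starts, function(s) {
  optim(s, RSS, method = "L-BFGS-B", lower = c(0, 0), upper = c(1, 1))$par
}) # each column holds the fitted (beta, gamma) for one start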


UPDATE March 16, 2020
I posted an updated version for the situation in Germany:
COVID-19: The Case of Germany

78 thoughts on “Epidemiology: How contagious is Novel Coronavirus (2019-nCoV)?”

  1. Problem is the data. No idea how many mild cases never seek treatment. No idea how many cases die waiting for a bed. I did see one Chinese paper a week ago where they let slip that fatalities were in the thousands. Didn’t bookmark it.

        1. What would be needed are the currently infected persons (cumulative infected minus the removed, i.e. recovered or dead), but the numbers of recovered persons weren't available at the time (to the best of my knowledge), and the difference wasn't that big anyway because we were still in the relatively early stages (i.e. only a few recoveries).
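          For illustration, the adjustment itself would be trivial once the numbers are available; a minimal sketch with hypothetical recovery and death counts:

          Confirmed <- c(45, 62, 121, 198) # cumulative confirmed (first values from the post)
          Recovered <- c(0, 2, 5, 10)      # hypothetical cumulative recoveries
          Deaths    <- c(1, 1, 3, 4)       # hypothetical cumulative deaths
          Active    <- Confirmed - Recovered - Deaths # currently infected I(t)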

  2. > Because an exponential function looks linear in the beginning.
    Nonsense, an exponential function looks exponential at all scales. Meaning if you look at the function in real time, or if you cut any part of it, it looks exponential, not linear.

    1. Harsh words, my friend 😉
      Mathematically you are of course right but not in practice (and that is what we are dealing with here!)

      See e.g. the following quote from Dr. John D. Cook, a well known applied mathematician:
      “How do you know whether you’re on an exponential curve? This is not as easy as it sounds. Because of random noise, it may be hard to tell from a small amount of data whether growth is linear or exponential”.
      Source: https://www.johndcook.com/blog/2008/01/17/coping-with-exponential-growth/

      Another thing is that we, as humans, are not really used to exponential thinking and therefore underestimate such developments; exponential growth indeed looks linear to us in the beginning (although it is not mathematically).

      1. Well, let me apologize in advance, because I agree with Dr Cook but I'm going to be annoyingly nitpicky: if the number of new cases per day increases every day, you are most likely on an exponential trend; if the delta of new cases is randomly greater or smaller than the previous day's, then you may be on a linear trend.
        So the reason is not "because the exponential function looks linear at the beginning", but because at the start of the epidemic there may be 0 new contaminations one day, or, more likely, because we haven't identified all the contaminations yet. In other words, it's a problem of lack of data, which makes it very hard to identify the trend.

        1. Yes, lack of data, data quality problems, and psychology because we, as humans, “think” linearly and not exponentially (also because the absolute change is so small at the beginning!)… which is why I think “looks linear at the beginning” might not be the best of characterizations (and is mathematically wrong!) but delivers the point concisely. Yet, in essence, we agree!

  3. Very nice analysis of a worrisome situation. I learned a great deal from your presentation and explanation. I'm accustomed to solving systems of differential equations the old-fashioned way, with FORTRAN ;-), so seeing how you set this up and solved it in R, which I'm still learning, was very useful.

    Now let’s all hope that your numbers are indeed incorrect, and are over-estimates.

  4. Hello,
    Thanks for posting.
    With the EpiDynamics package we have:

    > parameters N initials sir max(sir$results$I)
    [1] 0.165818
    > 0.165818*N
    [1] 232145200
    > 0.002*232145200
    [1] 464290.4
    

    All the best,
    Vlad

  5. Sorry,

    > parameters N initials sir max(sir$results$I)
    [1] 0.165818
    > 0.165818*N
    [1] 232145200
    > 0.002*232145200
    [1] 464290. #about the same result
    
      1. Thanks for the great article and quick reply.

        Apologies, as I'm not really a coder but have just started studying stats at uni.

        I assumed you were inputting beta and gamma manually…

        Forgive me for being dumb, but I'm struggling to understand how you can calculate gamma from this?

        Would you mind explaining for someone who is not familiar?

        1. Thank you for your great feedback, I really appreciate it.

          No, you give the optimizer some sensible starting values for beta and gamma and it then “intelligently” (some maths magic….) tries to find better and better values for both parameters which fit the given data best.

          Is it clearer now?

          1. Yes, that's really helpful, thank you.

            My only question now would be what the beta and gamma are actually quantities of?

            Also, I really want to emphasise that your taking the time to answer these questions is very appreciated!

  6. Cross-posted from Stack Exchange
    This is only marginally related to the detailed coding discussion, but seems highly relevant to the original question concerning modeling of the current 2019-nCoV epidemic. Please see arXiv:2002.00418v1 (paper at https://arxiv.org/pdf/2002.00418v1.pdf ) for a ~5-component model of delayed differential equations, with parameter estimation and predictions using dde23 in MATLAB. These are compared to daily published reports of confirmed cases, number cured, etc. To me, it is quite worthy of discussion, refinement, and updating. It concludes that there is a bifurcation in the solution space dependent upon the efficacy of isolation, thus explaining the strong public health measures recently taken, which have a fair chance of success so far.

  7. Thanks a lot for the amazing model. I recreated it, but there is still something that I do not entirely understand. The model has three states S, I, R. If I sum these states up, they come to a total of N (total population) at each time point. But there are also people who (unfortunately) die. Should there not also be a D state in the model, like dD/dt = (1-gamma)*I, so that the following always holds: N = S(t) + I(t) + R(t) + D(t)?

  8. Hi,
    how can I create a dynamic line chart and a dynamic scatterplot (like Gapminder) for coronavirus data?
    Please help me; I really need these plots.
    My Gmail for answers: m.rostami4543@gmail.com

  9. Hi, thanks a lot for this post. I have been looking for a while for how to fit ODEs to real data.

    I was wondering:
    – 1° Why do we need 1/N in the equations? As N is constant, I would assume we don't need it and it could be implicit in the parameters.
    – 2° Why do we need 1/N in the equations with beta but not with gamma?
    – 3° So, are beta and gamma really comparable (same units), given that 1/N appears only in the equations with beta?

    I tried
    – removing 1/N in dS/dt and dI/dt, and I obtained estimated values of beta and gamma of 0;
    – including 1/N in the equation of dR/dt, i.e. dR/dt = (gamma/N)*I, and it didn't change the estimated parameters. Weird.

    Sorry in advance if these are dumb questions.

  10. I found this very interesting and have been working on a presentation for undergraduates with up-to-date data and maybe some more graphics.

    Some comments:
    1. The package nCov2019 (remotes::install_github("GuangchuangYu/nCov2019")) was supplying up-to-date data very close to the Wikipedia table, but the time series data “x$ChinaDayList” in that package seem to have gone empty in the last couple of days. I have also tried the “rvest” package to scrape the Wikipedia table. This is difficult; the format of that table changes quite often.

    2. In the current (2/24) Wikipedia table, Confirmed_cumulative = Active_Confirmed+Deaths_and_Recovered_cumulative except for a few record keeping differences. The Infectious (I) numbers in the SIR model should be the Active_Confirmed column, not the Confirmed_cumulative (=I+R) numbers used in your example. Interesting to note that the Active numbers have been decreasing the last few days.

    3. For fitting beta and gamma, it is useful to note that Confirmed_cumulative is still less than 100000 (much less than 1.4 billion), so essentially S=N. For now we should have

    dI/dt = (beta-gamma)I,
    with solution
    I=I_0 e^{(beta-gamma)t}.

    So the data can only determine the difference beta-gamma. This can be done simply using regression, e.g. lm(log(Infected) ~ Day, timeline). One approach to getting beta, used in https://kingaa.github.io/clim-dis/parest/parest.html, is to guess that since 1/gamma is the typical infectious period, 1/gamma is about 14 days (see the sketch after this comment).

    That the optimization converges at all for both parameters is probably a good sign the SIR model is not very accurate (maybe the quarantines are working? or the data is just bad?).
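    A minimal sketch of that regression approach, using the Infected and Day vectors from the post (the 14-day infectious period is just the guess mentioned above):

    r     <- unname(coef(lm(log(Infected) ~ Day))["Day"]) # growth rate = beta - gamma
    gamma <- 1/14                                         # guessed infectious period of 14 days
    beta  <- r + gamma
    c(beta = beta, gamma = gamma, R0 = beta/gamma)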

    1. Thank you, Charles. Great that you want to take this as the basis for your teaching material!

      As always, data availability and data quality are serious issues. Especially in this fast-moving situation, there were several changes concerning what was reported when and in which format. You are of course right that you have to take the actively infected cases only. The problem was that, especially at the beginning, several values were missing in the “infected” column, and the difference between the two is negligible for the time period examined.

      As mentioned in the post the parameters being optimized are not stable… yet the purpose of this post was primarily educational.

    2. In fact, at an early stage when S ≈ N,
      dS/dR = -R0,
      which gives d(I+R)/dR = R0.
      Thus we may get an estimate of R0 from early-stage data, if we plot (I+R) against R and find the slope of the best-fit straight line.
      Is that true?
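      One can check this numerically against the fitted trajectory from the post (early phase, where S ≈ N):

      early <- fit[fit$time <= 10, ]       # early phase of the fitted model
      early$cum <- early$I + early$R       # cumulative cases = I + R
      coef(lm(cum ~ R, data = early))["R"] # slope should be close to R0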

    1. These high values of gamma and beta would relate to a fast transition from susceptible to infected to recovered: roughly about 5 days (the inverse of the gamma parameter).

      I have been wondering for a longer time about such numbers. It is being said by many that the incubation time is long and can be up to 2 weeks. But if that is the case, then how can the growth be so fast if the reproduction rate is only around 2?

      We must have either a high reproduction rate or a fast incubation period. But instead many reports say that both are not the case.

      Another possibility (to explain the Diamond Princess numbers) is that what we see is not a doubling for each individual (over many generations), but something more like superspreaders. Then the pattern in the sinusoidal curve is not due to an SIR-model type of transmission among multiple generations, but to far fewer generations, with the sinusoidal pattern due to the variability of the onset of the symptoms and the detection of Covid-19.

  11. Hey, first of all, nice post, really appreciated.
    So, I'm trying to use this code with updated data but the model gets weird: the curve of infected is quite far to the right of the initial infected data you used. I don't know why. Is it a parametrization problem? I just updated the number of infected.

      1. # new data
        Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515, 5974, 7711, 9692, 11791, 14380, 17205, 20440, 24324, 28018, 31161, 34546, 37198, 40171, 42638, 44653, 59804, 63851, 66492, 68500, 70548, 72436, 74185, 74576, 75465, 76288, 76936, 77150, 77658, 78064, 78497, 78824, 79251, 79824, 80026, 80151, 80270, 80409, 80552, 80651, 80695, 80735)
        
        
        Day <- 1:(length(Infected))
        N <- 1400000000 # population of mainland china
        
        old <- par(mfrow = c(1, 2))
        plot(Day, Infected, type ="b")
        plot(Day, Infected, log = "y")
        abline(lm(log10(Infected) ~ Day))
        title("Confirmed Cases 2019-nCoV China", outer = TRUE, line = -2)
        
        
        SIR <- function(time, state, parameters) {
          par <- as.list(c(state, parameters))
          with(par, {
            dS <- -beta/N * I * S
            dI <- beta/N * I * S - gamma * I
            dR <- gamma * I
            list(c(dS, dI, dR))
          })
        }
        
        
        library(deSolve)
        init <- c(S = N-Infected[1], I = Infected[1], R = 0)
        RSS <- function(parameters) {
          names(parameters) <- c("beta", "gamma")
          out <- ode(y = init, times = Day, func = SIR, parms = parameters)
          fit <- out[ , 3]
          sum((Infected - fit)^2)
        }
        
        Opt <- optim(c(0.5, 0.5), RSS, method = "L-BFGS-B", lower = c(0, 0), upper = c(1, 1)) # optimize with some sensible conditions
        Opt$message
        ## [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
        
        Opt_par <- setNames(Opt$par, c("beta", "gamma"))
        Opt_par
        ##      beta     gamma 
        ## 0.6746089 0.3253912
        
        t <- 1:100 # time in days
        fit <- data.frame(ode(y = init, times = t, func = SIR, parms = Opt_par))
        col <- 1:3 # colour
        
        matplot(fit$time, fit[ , 2:4], type = "l", xlab = "Day", ylab = "Number of subjects", lwd = 2, lty = 1, col = col)
        matplot(fit$time, fit[ , 2:4], type = "l", xlab = "Day", ylab = "Number of subjects", lwd = 2, lty = 1, col = col, log = "y")
        ## Warning in xy.coords(x, y, xlabel, ylabel, log = log): 1 y value <= 0
        ## omitted from logarithmic plot
        
        points(Day, Infected)
        legend("bottomright", c("Susceptibles", "Infecteds", "Recovereds"), lty = 1, lwd = 2, col = col, inset = 0.05)
        title("SIR model 2019-nCoV China", outer = TRUE, line = -2)
        
        1. Just by inspection, I guess that the problem might be that you have to take the actively infected persons and not the cumulative infected. I am guilty of having taken those values myself (see one of the comments above), because especially at the beginning several values were missing in the “infected” column and the difference between the two was negligible for the time period examined… but not much later, where they diverge considerably!

  12. I know this is a really stupid thing to ask, but the vector at the start of your code:
    Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515, 5974, 7711, 9692, 11791, 14380, 17205, 20440)
    I assume this is NEW cases per day, not cumulative?

  13. Have you thought of trying to adapt this so you can run it on every country for which the Johns Hopkins database has info?

      1. Ah, that's interesting. I'll keep an eye out to see if you do.

        Would then be interested to run a regression on it to see relationship with average temperature in these countries!

  14. Thank you very much for this amazing work, this is very interesting.
    I have a little question.
    As in the SIR model 1/gamma corresponds to the duration of the infection (for one person), I would be interested in fixing gamma in order to study the evolution of the outbreak according to R0.

    Do you think I can use R code similar to yours (which estimates beta and gamma) to calibrate beta (and maybe I(0)) while fixing gamma?

    Thank you very much for your help!

    1. Thank you very much for your feedback!

      I don't know if I understand correctly what you are trying to do, but it seems that a direct simulation approach would be more fitting.

      See e.g. here: SIR models in R

      If this doesn't solve your problem, please give a little bit more context.
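      As a starting point, here is a minimal sketch that fixes gamma (at 1/14, i.e. an assumed infectious period of 14 days) and fits beta alone, reusing the objects from the post:

      gamma_fixed <- 1/14 # assumption: infectious period of 14 days
      RSS_beta <- function(beta) {
        out <- ode(y = init, times = Day, func = SIR,
                   parms = c(beta = beta, gamma = gamma_fixed))
        sum((Infected - out[ , 3])^2)
      }
      optimize(RSS_beta, interval = c(0, 1))$minimum # fitted beta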

      1. Thank you very much for your answer! Indeed, I got beta by simulation, so this solves my problem 🙂
        I still have one little question: I am trying to get a 95% confidence interval for the number of infected (I) predicted by the model at each time. Assuming that the error is normally distributed, a 95% CI could be derived using the standard error of the estimate of I(t)… but I don't know how to get this…

        Would you have any idea to derive this 95% CI?

        1. To be honest with you I don’t think that it makes any sense to try to give sensible error ranges.

          There are too many sources of potential errors, like the number of true infections (because of the number of persons tested, availability of tests and their potential false results), the nonavailability of the recovered persons, potential instabilities in the fitting procedure, the relative oversimplicity of the model itself vs. the changing dynamics of the pandemic and so on…

          So the CIs would necessarily have to be so huge that they would practically be useless.

  15. I know this is a really stupid thing to ask, but is it possible to fit Italy's data with your SIR model? Would the assumptions be the same? (N = 60000000)

    Thank you very much for this amazing work, this is very interesting.

  16. I applied your fantastic model to Italy, but in the end I get this error:

    Data_Source: https://public.flourish.studio/visualisation/1443766/?utm_source=embed&utm_campaign=visualisation/1443766

    (The code, copied from the post and run with the Italian data, was garbled by the comment form; the relevant output is:)

    Opt$message
    [1] "ERROR: ABNORMAL_TERMINATION_IN_LNSRCH"

    Why?

      1. Exactly, I also had to ‘play’ with the upper and lower values. In this regard, I would ask:
        can you explain the optim() function in more detail, and above all:

        - What do the initial, lower and upper values represent?
        - How does the L-BFGS-B method work?

        Thanks so much!

    1. LNSRCH stands for line search.

      It means that the line search terminated without finding a minimum. The optim function repeatedly performs a line search for a better optimum, moving in the direction of the gradient of the cost function (in this case the residual sum of squares).

      To be honest, I do not know exactly when or why this problem occurs. Maybe you are too close to the boundary, maybe the gradient is not determined accurately enough, or maybe the line search is using step sizes that are too large.

      This can be changed via a control parameter. You can read about it in the help files.

      Try this:

      Opt <- optim(c(0.5, 0.5), RSS, method = "L-BFGS-B", lower = c(0, 0), upper = c(1, 1), control = list(parscale = c(10^-4,10^-4)))
      

      By setting parscale to smaller values you make the algorithm take finer steps and evaluate gradients at smaller distances.

      In addition, make sure you get the `init` parameter correctly set (I imagine you changed the data, but not the starting values for the ode function). These starting values need to be the values of S, I, and R at time zero (possibly you still have the Chinese values).

  17. Thanks for putting all of this together!
    My concern is that the data you are using for your SIR model are the cumulative numbers of confirmed cases. The cumulative number of confirmed cases is not equal to the current number of active cases at any point in time; the active cases would be the cumulative number minus the deaths and minus the recovered. In the SIR model I = C – R, where C is the cumulative number and R is the recovered, with the standard macabre caveat that a death is counted as recovered. Your code suggests that you are almost doing this, as your optimizer fits the cumulative infected with the recovered R (line 10 of your code uses "fit <- out[ , 3]", which represents the predicted R). The recovered R in an SIR model will lag behind the cumulative infected. Alternatively, you could adjust your optimizer by fitting the data of the cumulative number of cases at each time point with the model's predicted value of I + R at the same point in time.
    A further refinement would be to enclose your system in a second optimizer that is used to pick the best starting value for I. I don't think these changes will qualitatively change your conclusions. I will play around and see what happens.
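    For concreteness, the adjusted optimizer could look roughly like this (objects as defined in the post; just a sketch):

    RSS_cum <- function(parameters) {
      names(parameters) <- c("beta", "gamma")
      out <- ode(y = init, times = Day, func = SIR, parms = parameters)
      fit <- out[ , 3] + out[ , 4] # the model's I + R = predicted cumulative cases
      sum((Infected - fit)^2)
    }
    Opt2 <- optim(c(0.5, 0.5), RSS_cum, method = "L-BFGS-B",
                  lower = c(0, 0), upper = c(1, 1))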

    1. No, not in the simple SIR model (which is technically called the SIR model without vital dynamics). Having said that, you can think of R not only as Recovered but as Removed, which would include people having died from the disease.
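      For those who want the deaths explicit, a minimal sketch of an SIRD variant (the mortality rate mu is an additional assumed parameter):

      SIRD <- function(time, state, parameters) {
        par <- as.list(c(state, parameters))
        with(par, {
          dS <- -beta/N * I * S
          dI <- beta/N * I * S - gamma * I - mu * I # gamma: recovery, mu: death
          dR <- gamma * I
          dD <- mu * I
          list(c(dS, dI, dR, dD))
        })
      }
      init_sird <- c(S = N - Infected[1], I = Infected[1], R = 0, D = 0)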

  18. Hello Prof. Holger,

    Thanks a lot for this clear and inspiring entry.
    I have already applied your code to monitor and forecast the evolution of the pandemic in Madrid (Spain).

    In the process, I modified some of the charts to the ggplot way of doing things.
    This is your code with the changes (please note that it needs the packages data.table, patchwork and ggplot2).

    #--------------
    
    #-------- Library loading 
    suppressPackageStartupMessages({
      library(data.table)
      library(patchwork)
      library(deSolve)
      library(ggplot2)
    })
    
    Infected <- c(45, 62, 121, 198, 291, 440, 571, 830, 1287, 1975, 2744, 4515, 5974, 7711, 9692, 11791, 14380, 17205, 20440)
    Day <- 1:(length(Infected))
    N <- 1400000000 # population of mainland china
    
    # old <- par(mfrow = c(1, 2))
    # plot(Day, Infected, type ="b")
    # plot(Day, Infected, log = "y")
    # abline(lm(log10(Infected) ~ Day))
    # title("Confirmed Cases 2019-nCoV Madrid", outer = TRUE, line = -2)
    
    DayInfec_df <- data.frame(
      Day = Day,
      Infected = Infected,
      logInfec = log10(Infected)
    )
    
    gr_a <- ggplot(DayInfec_df, aes( x = Day, y = Infected)) +
      geom_line( colour = 'grey', alpha = 0.25) +
      geom_point(size = 2) +
      ggtitle("Cases - China") +
      ylab("# Infected") + xlab("Day") +
      theme_bw()
    
    gr_b <- ggplot(DayInfec_df, aes( x = Day, y = logInfec)) +
      geom_point(size = 2) +
      geom_smooth(method = lm, se = FALSE) +
      ggtitle("Cases (log) - China") +
      ylab("# Infected") + xlab("Day") + 
      theme_bw()
    gr_a + gr_b  + 
      plot_annotation(title = 'Real Cases - Cov-19 - China',
                      theme = theme(plot.title = element_text(size = 18)))
    
    
    SIR <- function(time, state, parameters) {
      par <- as.list(c(state, parameters))
      with(par, {
        dS <- -beta/N * I * S
        dI <- beta/N * I * S - gamma * I
        dR <- gamma * I
        list(c(dS, dI, dR))
      })
    }
    
    init <- c(S = N - Infected[1], I = Infected[1], R = 0)
    RSS <- function(parameters) {
      names(parameters) <- c("beta", "gamma")
      out <- ode(y = init, times = Day, func = SIR, parms = parameters)
      fit <- out[ , 3]
      sum((Infected - fit)^2)
    }
    
    Opt <- optim(c(0.5, 0.5), RSS, method = "L-BFGS-B", lower = c(0, 0), upper = c(1, 1)) # optimize with some sensible conditions
    Opt$message
    ## [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
    # [1] "ERROR: ABNORMAL_TERMINATION_IN_LNSRCH"..... :-(
    
    Opt_par <- setNames(Opt$par, c("beta", "gamma"))
    Opt_par
    # beta     gamma 
    # 1.0000000 0.7127451 
    
    t <- 1:70 # time in days
    fit <- data.frame(ode(y = init, times = t, func = SIR, parms = Opt_par))
    col <- 1:3 # colour
    
    # matplot(fit$time, fit[ , 2:4], type = "l", xlab = "Day", ylab = "Number of subjects", lwd = 2, lty = 1, col = col)
    # matplot(fit$time, fit[ , 2:4], type = "l", xlab = "Day", ylab = "Number of subjects", lwd = 2, lty = 1, col = col, log = "y")
    # ## Warning in xy.coords(x, y, xlabel, ylabel, log = log): 1 y value <= 0
    # ## omitted from logarithmic plot
    # 
    # points(Day, Infected)
    # legend("bottomright", c("Susceptibles", "Infecteds", "Recovereds"), lty = 1, lwd = 2, col = col, inset = 0.05)
    # par(old)
    
    max_infe <- fit[fit$I == max(fit$I), "I", drop = FALSE]
    val_max  <- data.frame(Day = as.numeric(rownames(max_infe)) , 
                           Value = max_infe$I )
    val_lab  <- paste('Infected_Peak:\n', as.character(round(val_max$Value, 0)))
    dea_lab  <- paste('Deaths (2%):\n', as.character(round(val_max$Value * 0.02,0)) )
    
    fit_dt   <- as.data.table(fit)
    fit_tidy <- melt(fit_dt, id.vars = c('time') , measure.vars = c('S', 'I','R') )
    fit_gd   <- merge(fit_dt, DayInfec_df[, c(1:2)], 
                      by.x = c('time'), by.y = c('Day'), 
                      all.x = TRUE)
    fit_all <- melt(as.data.table(fit_gd), id.vars = c('time') , 
                    measure.vars = c('S', 'I','R', 'Infected') )
    fit_all[ , Type := ifelse( variable != 'Infected', 'Simul', 'Real')]
    
    gr_left <- ggplot(fit_tidy, aes( x = time, y = value, 
                                     group = variable, 
                                     color = variable) ) +
      geom_line( aes(colour = variable)) +
      ylab("# People") + xlab("Day") + 
      guides(color = FALSE) +
      theme_bw()
    gr_left
    
    gr_right <- ggplot(fit_all, aes( x = time, y = value,
                                     group = variable, 
                                     color = variable, 
                                     linetype = Type) ) +
      geom_line( aes(colour = variable)) +
      # geom_point(aes(x = val_max$Day, y = val_max$Value)) +
      scale_y_log10() +
      ylab("# People (log)") + xlab("Day") + 
      guides(color = guide_legend(title = "")) +
      scale_color_manual(labels = c('Susceptibles', 'Infecteds', 'Recovered','Real_Cases'),
                         values = c('red', 'green', 'blue', 'black') ) +
      theme_bw() +
      theme(legend.position = "right", legend.text=element_text(size = 6)) +
      annotate( 'text', x = val_max$Day, y = val_max$Value, 
                size = 2.5, label = val_lab) +
      annotate( 'text', x = val_max$Day, y = 0.05*val_max$Value,
                size = 2.5, label = dea_lab)
    
    gr_right
    
    gr_left + gr_right  + 
      plot_annotation(title = 'China - Projection Evolution Cov-19 (SIR Model)',
                      theme = theme(plot.title = element_text(size = 18)))
    
    #--------------
    

    Hope that it helps .

    Kind Regards,
    Carlos Ortega

  19. Dear Prof. Holger and Prof. Ortega,

    Thank you both for sharing your code and for the excellent tutorial using this model. I applied the model to data for the US from 15 Feb 2020 through today (5 Apr 2020). Comparing the results from 3 days ago to the data through today, I was encouraged to see a slight departure from the expected exponential growth in the number of infected cases, hopefully suggesting a slowing. However, the model seems to predict the peak coming around April 30, with over 24 million infections at the peak and nearly 488,000 deaths, which seems to be a good bit more than the worst CDC projections. Could this model be overpredicting these outcomes?

    Many thanks!
