Formulas are R's way of writing the statistical equations that you pass to modelling functions for estimation. We use formulas to specify models: which terms the model includes and how they are transformed. This section introduces the main options for specifying a model with a formula; pay particular attention to how the intercept and interactions between variables are specified.
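For a quick illustration of those two points (a minimal sketch; it assumes modelr and the tidyverse are already loaded, and the small df here is invented), you can inspect how a formula handles the intercept and interactions:

df <- tribble(
  ~y, ~x1, ~x2,
   1,   1,   8,
   2,   2,   9,
   3,   3,  10
)

# y ~ x1 - 1 drops the intercept column
model_matrix(df, y ~ x1 - 1)

# y ~ x1 * x2 expands to x1 + x2 + x1:x2 (main effects plus the interaction)
model_matrix(df, y ~ x1 * x2)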
Transformations
You can also perform transformations inside the model formula. For example, log(y) ~ sqrt(x1) + x2 is transformed to log(y) = a_1 + a_2 * sqrt(x1) + a_3 * x2. If your transformation involves +, *, ^, or -, you'll need to wrap it in I() so R doesn't treat it as part of the model specification. For example, y ~ x + I(x ^ 2) is translated to y = a_1 + a_2 * x + a_3 * x^2. If you forget the I() and specify y ~ x ^ 2 + x, R will compute y ~ x * x + x. x * x means the interaction of x with itself, which is the same as x. R automatically drops redundant variables, so x + x becomes x, meaning that y ~ x ^ 2 + x specifies the function y = a_1 + a_2 * x. That's probably not what you intended!

Again, if you get confused about what your model is doing, you can always use model_matrix() to see exactly what equation lm() is fitting:
df <- tribble(
  ~y, ~x,
   1,  1,
   2,  2,
   3,  3
)
model_matrix(df, y ~ x^2 + x)
#> # A tibble: 3 x 2
#>   `(Intercept)`     x
#>           <dbl> <dbl>
#> 1             1     1
#> 2             1     2
#> 3             1     3

model_matrix(df, y ~ I(x^2) + x)
#> # A tibble: 3 x 3
#>   `(Intercept)` `I(x^2)`     x
#>           <dbl>    <dbl> <dbl>
#> 1             1        1     1
#> 2             1        4     2
#> 3             1        9     3
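To see the consequence in an actual fit, not just the model matrix, here is a small sketch (the simulated data sim_quad and its true coefficients are invented for illustration):

sim_quad <- tibble(
  x = runif(100, -2, 2),
  y = 1 + 2 * x + 3 * x^2 + rnorm(100, sd = 0.5)
)

# Without I(), ^ is formula syntax, so the quadratic term silently
# collapses and we fit a straight line:
coef(lm(y ~ x^2 + x, data = sim_quad))

# With I(), the quadratic term survives and the coefficients should
# land near the true values 1, 2, and 3:
coef(lm(y ~ x + I(x^2), data = sim_quad))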
Transformations are useful because you can use them to approximate non-linear functions. If you've taken a calculus class, you may have heard of Taylor's theorem, which says you can approximate any smooth function with an infinite sum of polynomials. That means you can use a polynomial function to get arbitrarily close to a smooth function by fitting an equation like y = a_1 + a_2 * x + a_3 * x^2 + a_4 * x^3. Typing that sequence by hand is tedious, so R provides a helper function, poly():
model_matrix(df, y ~ poly(x, 2))
#> # A tibble: 3 x 3
#>   `(Intercept)` `poly(x, 2)1` `poly(x, 2)2`
#>           <dbl>         <dbl>         <dbl>
#> 1             1     -7.07e- 1         0.408
#> 2             1     -7.85e-17        -0.816
#> 3             1      7.07e- 1         0.408
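If that output looks surprising: by default poly() generates orthogonal polynomials, which are numerically stable but hard to read. Passing raw = TRUE (a standard argument of stats::poly()) returns the plain powers instead:

# For this df the two columns are simply x = (1, 2, 3) and
# x^2 = (1, 4, 9)
model_matrix(df, y ~ poly(x, 2, raw = TRUE))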
However, there's one major problem with using poly(): outside the range of the data, polynomials rapidly shoot off to positive or negative infinity. One safer alternative is to use the natural spline, splines::ns().
library(splines)
model_matrix(df, y ~ ns(x, 2))
#> # A tibble: 3 x 3
#>   `(Intercept)` `ns(x, 2)1` `ns(x, 2)2`
#>           <dbl>       <dbl>       <dbl>
#> 1             1       0           0
#> 2             1       0.566      -0.211
#> 3             1       0.344       0.771
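If you want to see where the spline pieces join, the basis that ns() returns records its knots as attributes; a short sketch:

basis <- ns(df$x, 2)

# One interior knot at the median of x; boundary knots at the range of x
attr(basis, "knots")
attr(basis, "Boundary.knots")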
Let's see what that looks like when we try to approximate a non-linear function:
sim5 <- tibble(
  x = seq(0, 3.5 * pi, length = 50),
  y = 4 * sin(x) + rnorm(length(x))
)

ggplot(sim5, aes(x, y)) +
  geom_point()

I'm going to fit five models to this data.
mod1 <- lm(y ~ ns(x, 1), data = sim5)
mod2 <- lm(y ~ ns(x, 2), data = sim5)
mod3 <- lm(y ~ ns(x, 3), data = sim5)
mod4 <- lm(y ~ ns(x, 4), data = sim5)
mod5 <- lm(y ~ ns(x, 5), data = sim5)

grid <- sim5 %>%
  data_grid(x = seq_range(x, n = 50, expand = 0.1)) %>%
  gather_predictions(mod1, mod2, mod3, mod4, mod5, .pred = "y")

ggplot(sim5, aes(x, y)) +
  geom_point() +
  geom_line(data = grid, colour = "red") +
  facet_wrap(~ model)

Notice that the extrapolation outside the range of the data is clearly bad. This is the downside of approximating a function with a polynomial or spline basis. But this is a very real problem with every model: the model can never tell you whether the behaviour holds once you start extrapolating outside the range of the data that you have seen. You must rely on theory and science.
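To make that concrete, you can ask the most flexible model for predictions on a wider grid; a quick sketch (the extended range below is arbitrary):

grid_wide <- tibble(x = seq(-pi, 4.5 * pi, length = 100)) %>%
  add_predictions(mod5)

# Inside the observed range the predictions track the sine wave, but
# beyond the boundary knots a natural spline continues as a straight
# line, which diverges from sin(x) almost immediately
ggplot(sim5, aes(x, y)) +
  geom_point() +
  geom_line(data = grid_wide, aes(y = pred), colour = "red")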