Lesson 4 - Basic random effects in RTMB
16 December 2024
We avoid getting bogged down in the technical details of how TMB/RTMB addresses the challenges of combining automatic differentiation (AD) with the Laplace approximation. For more technical material, I refer you to Kristensen et al. (2016), an open-access publication: https://www.jstatsoft.org/article/view/v070i05
Assuming REs come from a common distribution allows information to be shared. E.g., if we know the mean length across a number of ponds, we have information about the likely mean length in a poorly sampled pond
Sometimes we can analyze data that could not be analyzed with a fixed-effects model
Inferences can be more general (about the distribution from which the random effects arose)
Random effects may be inappropriate if dissimilar things are combined into a single distribution
Or if there are too few instances of the random effect to estimate its distributional parameters
The joint likelihood (sometimes called the penalized likelihood)
The marginal (true) likelihood
Why we want to maximize marginal likelihood
\[ L(\underline{\theta}, \underline{\gamma} \mid \underline{X})=L\left(\underline{\theta} \mid \underline{\gamma}, \underline{X}\right) p\left(\underline{\gamma} \mid \underline{\theta}\right) \]
The joint likelihood is found by taking the product of the likelihood conditioned on both the REs (\(\underline{\gamma}\)) and the data (\(\underline{X}\)) and the pdf of the random effects conditioned on the parameters
Maximizing the joint likelihood is sometimes called penalized likelihood. Basically treats random effects like parameters.
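To make the two pieces concrete, here is a minimal base-R sketch of a joint negative log-likelihood for a toy random-intercept model (the model and all names are my assumptions, not the lesson's pond example): the first term is the data likelihood given the random effects, and the second is the "penalty" from the random-effects distribution.

```r
# Toy model: y_ij = mu + gamma_i + e_ij, gamma_i ~ N(0, sd_g), e_ij ~ N(0, sd_e)
set.seed(1)
n_grp <- 5; n_obs <- 4
gamma_true <- rnorm(n_grp, 0, 2)
y <- 10 + gamma_true + matrix(rnorm(n_grp * n_obs), n_grp, n_obs)  # rows = groups

# Joint (penalized) negative log-likelihood:
# data term given the REs, plus the RE-distribution term
jnll <- function(mu, log_sd_e, log_sd_g, gamma) {
  nll_data <- -sum(dnorm(y, mean = mu + gamma, sd = exp(log_sd_e), log = TRUE))
  nll_re   <- -sum(dnorm(gamma, mean = 0, sd = exp(log_sd_g), log = TRUE))
  nll_data + nll_re
}
jnll(10, 0, log(2), gamma_true)
```

Maximizing this over mu, the sds, *and* gamma is the penalized-likelihood approach; the marginal likelihood instead integrates gamma out.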
\[ L(\underline{\theta} \mid \underline{X})=\int_{\underline{\gamma}} L(\underline{\theta}, \underline{\gamma} \mid \underline{X}) d \underline{\gamma}=\int_{\underline{\gamma}} L\left(\underline{\theta} \mid \underline{\gamma}, \underline{X}\right) p\left(\underline{\gamma} \mid \underline{\theta}\right) d \underline{\gamma} \]
Computationally intensive (integrate over all possible values for the random effects)
For complex models this has only become feasible in the last ~15 years, due to software advances
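To see why this is computationally intensive, here is a base-R sketch (toy data and names assumed, not from the lesson): even a single random effect turns the likelihood into a one-dimensional integral, and a model with n random effects requires an n-dimensional integral.

```r
# One group: y_j ~ N(mu + g, sd_e), with a single random effect g ~ N(0, sd_g)
set.seed(2)
y <- rnorm(6, mean = 10 + 1.5, sd = 1)
mu <- 10; sd_e <- 1; sd_g <- 2

# Integrand: joint likelihood as a function of g (vectorized for integrate())
joint <- function(g) {
  sapply(g, function(gi)
    prod(dnorm(y, mu + gi, sd_e)) * dnorm(gi, 0, sd_g))
}

# Marginal likelihood at these parameter values: integrate g out
marg <- integrate(joint, -Inf, Inf)$value
marg
```

With, say, 100 ponds each contributing a random effect, brute-force integration like this becomes impractical, which is where the Laplace approximation comes in.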
Fortunately, in RTMB we only have to specify the negative log of the joint likelihood; RTMB applies the Laplace approximation to produce the marginal likelihood
\[ L^{*}(\underline{\theta})=(2 \pi)^{n / 2} \operatorname{det}(H(\underline{\theta}))^{-\frac{1}{2}} \exp \left(-g(\underline{\theta}, \underline{\widehat{\gamma}})\right) \]
where \(g\) is the negative log joint likelihood, \(\underline{\widehat{\gamma}}\) minimizes \(g\) for the given \(\underline{\theta}\), \(H\) is the Hessian of \(g\) with respect to \(\underline{\gamma}\) evaluated at \(\underline{\widehat{\gamma}}\), and \(n\) is the number of random effects.
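A base-R check of this formula on a toy one-random-effect model (data and names are my assumptions, not from the lesson): minimize the negative log joint likelihood over the random effect, take the Hessian there, and plug into the formula. Because this toy model is Gaussian, the Laplace approximation should agree closely with brute-force integration.

```r
set.seed(3)
y <- rnorm(6, mean = 1.5, sd = 1)
sd_g <- 2

# g(gamma): negative log joint likelihood for a single random effect gamma
g_fun <- function(gam)
  -sum(dnorm(y, gam, 1, log = TRUE)) - dnorm(gam, 0, sd_g, log = TRUE)

opt <- optimize(g_fun, c(-10, 10))     # inner optimization: gamma-hat
H   <- optimHess(opt$minimum, g_fun)   # Hessian of g at gamma-hat

n   <- 1                               # number of random effects
lap <- (2 * pi)^(n / 2) * det(H)^(-1/2) * exp(-opt$objective)

# Compare to the marginal likelihood by numerical integration
exact <- integrate(function(g) sapply(g, function(gi) exp(-g_fun(gi))),
                   -Inf, Inf)$value
c(laplace = lap, exact = exact)  # agree closely: the model is Gaussian
```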
Unrealistic example to keep data management dead simple. One observation of length at each age (2-12) for each pond (arranged in a matrix).
Fit a vonB model for length at age, but now assume that asymptotic length (Linf), rather than being a single number, varies among ponds, with the log of Linf for each pond coming from a common normal distribution.
Assume observed length at age is normally distributed, with the mean generated from the vonB function for that pond (and age), and a common SD shared over ages and ponds.
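A sketch of how this model's joint negative log-likelihood might be written. It is plain R so it runs without RTMB (`with()` stands in for RTMB's `getAll()`); the simulated data and all parameter names (`logLinf`, `logK`, etc.) are my assumptions, and the commented lines indicate roughly how the RTMB fit would be invoked.

```r
set.seed(4)
ages   <- 2:12                     # one observation per pond at each age
n_pond <- 10
logLinf_mu <- log(60); logLinf_sd <- 0.1
logLinf_i  <- rnorm(n_pond, logLinf_mu, logLinf_sd)  # pond-specific log(Linf)
K <- 0.3; t0 <- -0.5; sd_obs <- 2

# Simulate the ponds x ages matrix of observed lengths from the vonB curve
mu_len <- exp(logLinf_i) %o% (1 - exp(-K * (ages - t0)))
Lobs   <- mu_len + rnorm(length(mu_len), 0, sd_obs)

# Joint NLL: vonB mean with pond-specific logLinf treated as random effects
f <- function(pars) {
  with(pars, {                     # in RTMB this would be getAll(pars)
    Linf <- exp(logLinf)           # one per pond (the random effects)
    pred <- Linf %o% (1 - exp(-exp(logK) * (ages - t0)))
    nll  <- -sum(dnorm(Lobs, pred, exp(logSd), log = TRUE))  # data term
    nll  <- nll -
      sum(dnorm(logLinf, logLinf_mu, exp(logSdLinf), log = TRUE))  # RE term
    nll
  })
}

pars <- list(logLinf = logLinf_i, logLinf_mu = logLinf_mu,
             logSdLinf = log(logLinf_sd), logK = log(K), t0 = t0,
             logSd = log(sd_obs))
f(pars)

# With RTMB installed, the marginal (Laplace) fit would look roughly like:
# obj <- RTMB::MakeADFun(f, pars, random = "logLinf")
# opt <- nlminb(obj$par, obj$fn, obj$gr)
```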
Before we proceed, what are the parameters (excluding the pond specific Linfs that are now random effects)?
Make logK rather than logLinf a random effect (so now there is just one logLinf, which is not random)
Estimate logLinf for each pond as a fixed effect rather than a random effect
AIC is calculated as 2k + 2*NLL, where k is the number of estimated (fixed-effect) parameters and NLL is the negative log of the true (marginal) likelihood. See if you can figure out how to calculate this.
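One hedged way this could be computed, assuming a fit where `nlminb()` returned the minimized marginal NLL in `opt$objective` and the fixed-effect estimates in `opt$par` (the numbers below are placeholders, not real results):

```r
# Placeholder values standing in for a fitted model:
nll <- 123.4   # marginal negative log-likelihood at the optimum (opt$objective)
k   <- 5       # number of estimated fixed-effect parameters (length(opt$par))

aic <- 2 * k + 2 * nll
aic  # 256.8
```

Note that the random effects themselves are not counted in k; they are integrated out of the marginal likelihood.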