Mar 8

Actuarial Exam STAM: Short-Term Actuarial Models

Mindli Team

AI-Generated Content

Mastering the content of Exam STAM is essential for any actuary working in property, casualty, or health insurance. This exam moves beyond probability theory to focus on the practical statistical models used to price insurance products and set reserves for short-term risks. Success requires you to fluently combine probability distributions, statistical estimation, and business logic to solve real-world insurance problems.

Building Blocks: Severity and Frequency Models

At the heart of short-term modeling is the decomposition of aggregate loss into two components: how often claims occur and how large they are when they do. You model these separately before combining them.

A severity model describes the size of an individual loss. You will work with common parametric distributions like the exponential, gamma, Weibull, and Pareto. The choice depends on the data's shape. For example, the Pareto distribution, with its heavy tail, is often used for modeling severe, infrequent losses like catastrophic events. A key skill is modifying these distributions for common insurance provisions. If a policy has a deductible of $d$, you don't simply use the original loss distribution $X$; you work with the payment per payment variable, $Y^P = (X - d) \mid X > d$, which shifts and truncates the distribution.
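
To see that modification numerically, here is a minimal Python sketch that simulates the payment-per-payment variable; the exponential severity with mean 1,000 and the 500 deductible are illustrative assumptions, not values from the source:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

theta = 1000.0   # exponential mean (hypothetical)
d = 500.0        # ordinary deductible (hypothetical)

# Simulate ground-up losses X, keep only losses above the deductible,
# and subtract d to form the payment-per-payment variable Y^P = (X - d) | X > d.
x = rng.exponential(scale=theta, size=1_000_000)
y_p = x[x > d] - d

# Memorylessness of the exponential implies E[Y^P] = theta exactly,
# so the simulated mean should land near 1000.
print(f"simulated E[Y^P] = {y_p.mean():.1f}  (theory: {theta:.1f})")
```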

A frequency model describes the number of losses (claims) occurring in a fixed period. The fundamental distribution here is the Poisson distribution, defined by a single rate parameter $\lambda$. You must also understand its two popular extensions: the negative binomial, which introduces more variability (overdispersion), and the binomial, used when there is a maximum possible number of claims. For the STAM exam, you must be adept at determining the correct frequency model based on the mean-variance relationship of given data.
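
A quick way to practice that mean-variance check is with a short Python sketch; the claim-count data and the tolerance thresholds below are made up for illustration:

```python
import numpy as np

# Hypothetical annual claim counts for a small block of policies.
counts = np.array([0, 1, 0, 2, 0, 0, 3, 1, 0, 4, 0, 1, 2, 0, 5])

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.2f}, sample variance = {var:.2f}")

# Rule of thumb from the mean-variance relationship:
#   variance close to mean -> Poisson
#   variance > mean        -> negative binomial (overdispersion)
#   variance < mean        -> binomial
if var > 1.1 * mean:
    print("variance > mean: consider a negative binomial model")
elif var < 0.9 * mean:
    print("variance < mean: consider a binomial model")
else:
    print("variance is close to the mean: a Poisson model is reasonable")
```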

Aggregating Risk: The Aggregate Loss Model

The total loss for a portfolio or policy over a period is the aggregate loss, denoted $S = X_1 + X_2 + \cdots + X_N$, where $N$ is the frequency random variable and the $X_i$'s are independent, identically distributed severities. Calculating the exact distribution of $S$ is complex, so you rely on key approximations.

You will frequently use the collective risk model. The expected aggregate loss is straightforward: $E[S] = E[N]\,E[X]$. The variance is more insightful: $\mathrm{Var}(S) = E[N]\,\mathrm{Var}(X) + \mathrm{Var}(N)\,(E[X])^2$. This formula shows that aggregate volatility comes from both severity variance and frequency variance. For risk assessment and pricing, you need to understand the full distribution. The primary approximation method is the Central Limit Theorem, applying a normal approximation to $S$. For heavier-tailed severity distributions, you may use the translated gamma approximation, which fits the first three moments (mean, variance, and skewness) of $S$ to a shifted gamma distribution.
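
The moment formulas and the normal approximation are easy to sketch in Python; the Poisson rate, severity moments, and tail threshold below are assumed purely for illustration:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical collective risk model: Poisson frequency, generic severity moments.
lam = 50.0                                # E[N] = Var(N) = lam for a Poisson N
sev_mean, sev_var = 2_000.0, 1_500_000.0  # E[X] and Var(X), assumed values

# Moments of the aggregate loss S = X_1 + ... + X_N.
agg_mean = lam * sev_mean                      # E[S] = E[N] E[X]
agg_var = lam * sev_var + lam * sev_mean**2    # Var(S) = E[N]Var(X) + Var(N)E[X]^2
agg_sd = sqrt(agg_var)

# Normal (CLT) approximation to a tail probability such as P(S > 120,000).
threshold = 120_000.0
tail_prob = norm.sf(threshold, loc=agg_mean, scale=agg_sd)

print(f"E[S] = {agg_mean:,.0f}, SD(S) = {agg_sd:,.0f}")
print(f"Normal approximation: P(S > {threshold:,.0f}) = {tail_prob:.4f}")
```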

Shaping the Risk: Coverage Modifications

Insurance policies are rarely unlimited. Standard provisions like deductibles, policy limits, and coinsurance fundamentally alter the payout and, therefore, the risk borne by the insurer. You must adjust your models accordingly.

For a policy with a deductible $d$, a policy limit (maximum covered loss) $u$, and a coinsurance factor $\alpha$, the payment per loss random variable is:

$Y^L = \alpha\left[(X \wedge u) - (X \wedge d)\right] = \begin{cases} 0, & X \le d \\ \alpha(X - d), & d < X \le u \\ \alpha(u - d), & X > u \end{cases}$

You will calculate key metrics for this modified variable: the expected payment per loss $E[Y^L]$, the expected payment per payment $E[Y^P]$, and the loss elimination ratio $LER(d) = E[X \wedge d]/E[X]$. A crucial tool here is the limited expected value function, $E[X \wedge u] = E[\min(X, u)]$, which simplifies these calculations immensely. For example, with a deductible $d$ and limit $u$, $E[Y^L] = E[X \wedge u] - E[X \wedge d]$.
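
As a worked sketch, the Python below applies these formulas to a hypothetical policy (exponential severity with mean 2,000, a 500 deductible, a 5,000 maximum covered loss, and 80% coinsurance), using the closed-form limited expected value for the exponential, $E[X \wedge u] = \theta(1 - e^{-u/\theta})$:

```python
from math import exp

def lev_exponential(u: float, theta: float) -> float:
    """Limited expected value E[min(X, u)] for an exponential with mean theta."""
    return theta * (1.0 - exp(-u / theta))

# Hypothetical policy: 500 deductible, 5,000 maximum covered loss, 80% coinsurance,
# exponential severity with mean 2,000.
theta, d, u, coins = 2_000.0, 500.0, 5_000.0, 0.80

e_x_d = lev_exponential(d, theta)      # E[min(X, d)]
e_x_u = lev_exponential(u, theta)      # E[min(X, u)]

expected_per_loss = coins * (e_x_u - e_x_d)             # E[Y^L]
prob_positive_payment = exp(-d / theta)                 # S_X(d) for the exponential
expected_per_payment = expected_per_loss / prob_positive_payment  # E[Y^P]
loss_elimination_ratio = e_x_d / theta                  # E[min(X, d)] / E[X]

print(f"E[Y^L] = {expected_per_loss:.1f}, E[Y^P] = {expected_per_payment:.1f}, "
      f"LER = {loss_elimination_ratio:.3f}")
```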

Parameter Estimation and Credibility Theory

Given a dataset of losses or claim counts, you need to select and fit a distribution. Maximum likelihood estimation (MLE) is the most important estimation method for STAM. The principle is to find the parameter values that make the observed data most probable. You will set up the likelihood function $L(\theta) = \prod_{i=1}^{n} f(x_i; \theta)$ (or a product of probabilities for discrete data) and often maximize the log-likelihood, $\ell(\theta) = \ln L(\theta)$.

For example, finding the MLE for the Poisson rate $\lambda$ involves taking the derivative of $\ell(\lambda)$, setting it to zero, and solving to find the intuitive result: $\hat{\lambda} = \bar{x}$, the sample mean. The exam will test your ability to perform this process for exponential, Poisson, and binomial distributions, among others. Understanding the properties of MLEs (they are consistent and asymptotically normal) is also key.
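
Writing out that derivation for a sample $x_1, \dots, x_n$ from a Poisson distribution with mean $\lambda$:

$\ell(\lambda) = \sum_{i=1}^{n}\left[-\lambda + x_i \ln \lambda - \ln(x_i!)\right] = -n\lambda + \left(\sum_i x_i\right)\ln \lambda - \sum_i \ln(x_i!)$

$\frac{d\ell}{d\lambda} = -n + \frac{\sum_i x_i}{\lambda} = 0 \quad\Longrightarrow\quad \hat{\lambda} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}$

The second derivative, $-\sum_i x_i / \lambda^2$, is negative, confirming that this critical point is a maximum.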

Often, you have limited data for a specific risk class (e.g., a new policyholder). Credibility theory provides a formula to blend this "experience data" with a broader, more stable "manual rate" to produce a more accurate estimate. The fundamental credibility formula is $P_c = Z\bar{x} + (1 - Z)\mu$, where $\bar{x}$ is the sample mean of the experience data, $\mu$ is the manual mean, and $Z$ is the credibility factor between 0 and 1.

You will work with two main approaches. Bayesian estimation treats the risk parameter as a random variable with a prior distribution. You use the experience data to update this prior to a posterior distribution, and the posterior mean is your credibility estimate. Bühlmann credibility is a linear approximation to the Bayesian estimate. Here, the credibility factor is $Z = \frac{n}{n + k}$, where $n$ is the number of exposure periods and $k = \frac{EPV}{VHM}$, the ratio of the expected process variance to the variance of the hypothetical means. Calculating $k$ is a common exam task.
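
A minimal numeric sketch of the Bühlmann calculation, with made-up experience data and assumed EPV and VHM values (in practice these structural parameters are estimated across the whole portfolio):

```python
import numpy as np

# Hypothetical experience: five years of pure premiums for one risk class.
experience = np.array([820.0, 1_010.0, 760.0, 950.0, 880.0])
manual_mean = 1_100.0        # broader "manual" rate (assumed)

# Assumed structural parameters.
epv = 250_000.0              # expected process variance
vhm = 40_000.0               # variance of the hypothetical means

n = len(experience)
k = epv / vhm                          # Buhlmann k = EPV / VHM
z = n / (n + k)                        # credibility factor Z = n / (n + k)
estimate = z * experience.mean() + (1 - z) * manual_mean

print(f"k = {k:.2f}, Z = {z:.3f}, credibility estimate = {estimate:,.0f}")
```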

From Models to Prices: Ratemaking Procedures

The ultimate goal of these models is to determine an adequate and fair premium. Ratemaking is the process of calculating future premiums based on past loss experience, projected exposure, and business strategy. In its simplest form, the premium must cover expected losses and expenses plus a profit provision: Premium = (Losses + Expenses) / (1 - Profit & Contingency Load).

You will learn to calculate indicated rate changes using the loss ratio method and the pure premium method. In the loss ratio method, the indicated change is: $\text{Indicated Change} = \frac{\text{Experience Loss Ratio}}{\text{Target Loss Ratio}} - 1$. The pure premium method calculates the indicated rate as: $\text{Indicated Rate} = \frac{\text{Experience Pure Premium} + \text{Fixed Expense per Exposure}}{1 - \text{Variable Expense \%} - \text{Profit \& Contingency \%}}$. A critical step is trending past losses to the future policy period and developing them to ultimate values, which integrates reserving concepts into the pricing process.
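
Both methods reduce to short arithmetic once the inputs are trended and developed; the sketch below uses illustrative figures only:

```python
# Hypothetical ratemaking inputs; all figures are illustrative.
experience_loss_ratio = 0.68       # trended, developed losses / premium at current rates
target_loss_ratio = 0.60           # permissible loss ratio

pure_premium = 480.0               # trended, developed losses per exposure
fixed_expense_per_exposure = 35.0
variable_expense_pct = 0.22
profit_contingency_pct = 0.05

# Loss ratio method: gives an indicated rate *change*.
indicated_change = experience_loss_ratio / target_loss_ratio - 1

# Pure premium method: gives an indicated *rate* per exposure.
indicated_rate = (pure_premium + fixed_expense_per_exposure) / (
    1 - variable_expense_pct - profit_contingency_pct
)

print(f"loss ratio method: indicated change = {indicated_change:+.1%}")
print(f"pure premium method: indicated rate = {indicated_rate:.2f}")
```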

Common Pitfalls

  1. Misapplying Frequency Distributions: A common mistake is forcing a Poisson model when the data shows variance greater than the mean (overdispersion). Recognizing when to use a negative binomial instead is crucial. Remember: Poisson mean = variance; binomial mean > variance; negative binomial mean < variance.
  2. Forgetting Coverage Modifications in Aggregation: When calculating aggregate loss for a portfolio with deductibles or limits, you must first apply the coverage modification to create the payment severity distribution. Do not aggregate raw losses and then try to apply the deductible; the order of operations matters.
  3. Confusing Limited Expected Value Notation: Misinterpreting $E[X \wedge u]$ is a frequent error. It is not $\min(E[X], u)$. It is the expected value of the variable $\min(X, u)$. Practice calculating it directly from the survival function: $E[X \wedge u] = \int_0^u S_X(x)\,dx$.
  4. Mixing Up Credibility Formulas: Confusing Bühlmann's with the Bayesian updating process can lead to wrong answers. Remember: Bühlmann gives a linear estimate; Bayesian gives the exact posterior mean. They yield the same result only under specific (conjugate) conditions.

Summary

  • Short-term actuarial modeling separates risk into severity (loss size) and frequency (loss count) components, which are modeled with parametric distributions like Pareto and Poisson before being combined into an aggregate loss model.
  • Coverage modifications (deductibles, limits, coinsurance) are applied directly to the severity distribution using tools like the limited expected value to calculate expected insurer payments.
  • Maximum likelihood estimation (MLE) is the primary method for fitting distribution parameters to observed data.
  • When data is scarce, credibility theory (both Bayesian and Bühlmann) provides a disciplined method to blend experience with prior or industry data to produce stable estimates.
  • The end goal is ratemaking, using the loss ratio or pure premium method to translate modeled expected losses, expenses, and target profit into a technical premium.
