I went through this book to understand the compound Poisson process model and the estimation of its parameters. The book, as is evident from the title, deals with the math of non-life insurance. I like "one example" books, where a single example is explored in all its dimensions throughout the book. This is one such book: the “Danish fire insurance” data is used to illustrate the various modeling principles. The dataset comprises the claim arrivals and claim sizes of a fire insurance firm between 1980 and 1990. The book is also rich in visuals, which is a real strength.

There are four parts to the book. The first two parts do not need any major fundas beyond basic probability and undergrad math. The third part generalizes the first two using point processes. I tried going over the point processes material without reading the first two parts, but realized soon enough that reading about point processes with a specific example in mind is a better way, and hence read the book sequentially. In this review, I will try to summarize the first two parts of the book.

__Part I – Collective Risk Models__

__Chapter 1: The Basic Model__

The basic problem dealt with in the book is:

Claims of random sizes arrive at an insurance firm at random times. What premium should the firm charge its customers so that it can cover the claims?

It was the Swedish actuary Filip Lundberg (1903) who laid the foundations of non-life insurance mathematics by introducing a simple model. Its key assumptions are that the claim inter-arrival times are iid exponential, and that the claim size sequence is an iid sequence independent of the claim arrival process. By specifying the claim inter-arrival distribution, a counting process also gets specified. This counting process is generally called the claim number process.

The main object of interest is the total claim amount process or aggregate claim amount process.

The process S(t) = X_1 + X_2 + ... + X_{N(t)} is called the random partial sum process or compound sum process, with T_i denoting the arrival times, X_i the claim sizes and N(t) the arrival counting process.
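As a concrete anchor, the total claim amount process is easy to simulate. Here is a minimal Python sketch (the function names and parameter values are mine, not the book's), assuming exponential inter-arrivals and an arbitrary claim size sampler:

```python
import random

def simulate_total_claim(lam, claim_sampler, t, rng):
    """Simulate S(t) = X_1 + ... + X_{N(t)} for the Lundberg model.

    Claim arrivals form a homogeneous Poisson process with rate lam
    (iid exponential inter-arrivals); claim sizes are drawn iid from
    claim_sampler, independent of the arrival process.
    """
    s = 0.0
    arrival = rng.expovariate(lam)          # first arrival time
    while arrival <= t:
        s += claim_sampler(rng)             # add an independent claim size
        arrival += rng.expovariate(lam)     # next exponential inter-arrival
    return s

rng = random.Random(42)
# illustrative numbers: 10 claims per year on average, Exp(1) claim sizes
sims = [simulate_total_claim(10.0, lambda r: r.expovariate(1.0), 1.0, rng)
        for _ in range(5000)]
mean_s = sum(sims) / len(sims)              # should be close to E S(1) = 10
```

Averaging many such runs recovers E S(t) = λ t E X_1, which is just Wald's identity at work.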

This chapter is a sort of trailer to the book and gives the following list of questions that Part I of the book explores:

- Find sufficiently simple probabilistic models for S(t) and N(t), i.e. ways to specify the claim size process and the claim arrival process
- Determine the theoretical properties of the stochastic processes S and N: their distributions, distributional characteristics such as moments, variance and dependence structure, and the asymptotic properties of N(t) and S(t)
- Give simulation procedures for N and S
- Based on the theoretical properties of N and S, give advice on how to choose a premium that covers the claims in the portfolio, how to build reserves, and how to price insurance products

__Chapter 2: Models for the Claim Number Process__

For the total claim amount process, one needs to assume a model for N(t). This chapter introduces three types of processes that can be used as the claim counting process: Poisson processes, renewal processes and mixed Poisson processes.

__Poisson Processes__

I think one must be comfortable with the three views of the claims process, i.e. as an arrival process, as a renewal process and as a counting process. Once you start moving among these three views, a lot of things are easier to understand. The chapter starts off with the standard homogeneous Poisson process, lists various appealing properties of the process and derives them. It then moves on to the inhomogeneous Poisson process and does the same. The most important word that I learnt from this chapter is “clock-time”. It is a nice word/analogy for looking at the intensity function. Using the terminology of “clock-time”, I understood the difference between a homogeneous and an inhomogeneous Poisson process. Visualizing the Poisson random measure as an inner clock or operational time of the counting process also helps in simulating an inhomogeneous Poisson process from a homogeneous one.
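The operational-time idea can be turned into a short simulation. A hedged Python sketch (my own naming; it assumes the intensity λ(t) = 2t, so the mean value function is μ(t) = t² with inverse √s):

```python
import math
import random

def inhomogeneous_arrivals(mu_inverse, t_max, rng):
    """Simulate an inhomogeneous Poisson process via the inner clock.

    A unit-rate homogeneous Poisson process runs on the operational-time
    scale; mapping its arrival times back through the inverse mean value
    function mu_inverse yields arrivals of the inhomogeneous process.
    """
    arrivals = []
    clock = 0.0
    while True:
        clock += rng.expovariate(1.0)       # unit-rate jump on the inner clock
        real_time = mu_inverse(clock)       # translate back to calendar time
        if real_time > t_max:
            return arrivals
        arrivals.append(real_time)

rng = random.Random(7)
# lambda(t) = 2t gives mu(t) = t^2, so mu_inverse is the square root
arrivals = inhomogeneous_arrivals(math.sqrt, 10.0, rng)
# E N(10) = mu(10) = 100, and the arrivals cluster toward larger t
```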

The various characteristics of Poisson processes given in this chapter are

- Markov property of Homogeneous Poisson process
- Backward recurrence time and forward recurrence time distributions (useful in understanding the inspection paradox)
- Joint Distribution of inter-arrival times
- Joint Distribution of the arrival times
- Conditional distribution of the arrival times given the number of arrivals – the order statistics property. This amazing property helps in solving a lot of problems: one can substitute order statistics of uniforms and get a far simpler expression for the moments of a random variable that is a function of the arrival times. There are many business-critical random variables that are functions of arrival times; the delay in claim settlement is one such example.
- Distribution of a symmetric function of arrival times
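The order statistics property lends itself to quick sanity checks. A sketch (names and numbers are my own): for a rate-λ homogeneous Poisson process, E[Σ_{i≤N(t)} (t − T_i)], a stylized "total settlement delay", equals λt²/2, because given N(t) = n the arrival times behave like order statistics of n iid Uniform(0, t) draws:

```python
import random

def expected_total_delay(lam, t):
    """Closed form via the order statistics property.

    Given N(t) = n, the arrivals are distributed as order statistics of n
    iid Uniform(0, t) variables, so E[sum (t - T_i) | N(t) = n] = n t / 2
    and unconditionally E[sum (t - T_i)] = lam * t^2 / 2.
    """
    return lam * t * t / 2.0

def simulated_total_delay(lam, t, n_sims, rng):
    """Brute-force Monte Carlo check of the same expectation."""
    total = 0.0
    for _ in range(n_sims):
        arrival = rng.expovariate(lam)
        while arrival <= t:
            total += t - arrival
            arrival += rng.expovariate(lam)
    return total / n_sims

rng = random.Random(1)
closed_form = expected_total_delay(2.0, 5.0)            # 25.0
estimate = simulated_total_delay(2.0, 5.0, 20000, rng)  # close to 25
```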

There is a good analysis of Danish fire insurance dataset that illustrates the following points

- Estimate the rate parameter of the process using MLE and compare the rate across various years/time intervals
- If the intensity varies across years, it is better to fit a local intensity function for each year
- A nice way to check whether a Poisson process is an approximate model is to transform the process into a standard homogeneous Poisson process and inspect the inter-arrivals with a QQ plot; the ‘car’ package in R has a qqPlot function that supports the exponential distribution
- A Poisson process with constant intensity might be a suitable model for shorter time periods
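For readers not using R, the same QQ diagnostic is easy to build by hand. A Python sketch (my own helper, not from the book): pair the sorted inter-arrival times with exponential quantiles at the MLE rate; a roughly straight line supports the homogeneous Poisson model:

```python
import math
import random

def exp_qq_points(inter_arrivals):
    """(theoretical, empirical) quantile pairs for an exponential QQ plot.

    The reference distribution is exponential with the MLE rate n / sum(x);
    plotting the pairs and seeing a straight line through the origin
    supports exponential inter-arrivals, i.e. a Poisson claim number process.
    """
    xs = sorted(inter_arrivals)
    n = len(xs)
    rate = n / sum(xs)                      # MLE of the exponential rate
    theo = [-math.log(1.0 - (i - 0.5) / n) / rate for i in range(1, n + 1)]
    return list(zip(theo, xs))

rng = random.Random(3)
sample = [rng.expovariate(0.5) for _ in range(500)]  # simulated inter-arrivals
points = exp_qq_points(sample)
```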

The first section on Poisson processes ends with an informal discussion of transformed Poisson processes and generalized Poisson processes. This is a good prelude to Part III of the book, where such things are discussed in a general point process setting. The basic idea is to combine the arrival times and the claim size random variables into one Poisson process.

__Renewal Processes__

If you generalize a Poisson process by allowing an arbitrary distribution for the inter-renewal times, you get a renewal process. This generalization has one downside: you do not get closed form expressions like those of a Poisson process. In any case, there is a ton of existing research on renewal theory that you can use to analyze a renewal claim arrival process. A brief recap of renewal theorems is given, such as the strong law of large numbers for renewal processes, the elementary renewal theorem, the CLT for renewal processes, Blackwell’s renewal theorem and Smith’s key renewal theorem. A basic intro to the renewal equation and its solution is also given. Based on these fundas, you can take whatever inter-renewal distribution you have in mind, fit it to the data and then use the renewal theorems to talk about various aspects of the process such as the ensemble average, the time average, and moments such as the mean and variance.
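The SLLN for renewal processes is simple to see numerically: N(t)/t converges to 1/E[W] for any finite-mean inter-arrival distribution W. A sketch with made-up Uniform(0, 2) inter-arrivals (so E[W] = 1):

```python
import random

def renewal_count(inter_arrival_sampler, t, rng):
    """Count renewals on [0, t] for iid inter-arrival times."""
    n = 0
    clock = inter_arrival_sampler(rng)
    while clock <= t:
        n += 1
        clock += inter_arrival_sampler(rng)
    return n

rng = random.Random(11)
t = 50_000.0
# Uniform(0, 2) inter-arrivals: E[W] = 1, so N(t) / t should approach 1
rate_estimate = renewal_count(lambda r: r.uniform(0.0, 2.0), t, rng) / t
```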

__The Mixed Poisson Processes__

Including a mixing variable in the intensity function makes a process mixed Poisson. The mixed Poisson process inherits the following properties of the Poisson process

- Markov property.
- Order statistics property.

The mixed Poisson process differs from the Poisson process in the following ways

- It has dependent increments.
- N(t) is in general not Poisson distributed.
- It is over-dispersed: the variance of N(t) exceeds its mean.
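Over-dispersion is easy to exhibit by simulation. A sketch (the parameter values are mine) with a Gamma mixing variable, which makes the counts negative binomial:

```python
import random

def mixed_poisson_counts(shape, scale, t, n_sims, rng):
    """Sample N(t) for a mixed Poisson process with Gamma mixing.

    Each replication first draws a random intensity theta ~ Gamma(shape,
    scale), then counts Poisson(theta) arrivals on [0, t]; the resulting
    counts are negative binomial and over-dispersed.
    """
    counts = []
    for _ in range(n_sims):
        theta = rng.gammavariate(shape, scale)   # the mixing variable
        n = 0
        clock = rng.expovariate(theta)
        while clock <= t:
            n += 1
            clock += rng.expovariate(theta)
        counts.append(n)
    return counts

rng = random.Random(5)
counts = mixed_poisson_counts(2.0, 3.0, 1.0, 4000, rng)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# a plain Poisson process would give var ~= mean; here E N = E[theta] = 6
# while Var N = E[theta] + Var(theta) = 6 + 18 = 24, clearly over-dispersed
```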

__Chapter 3: The Total Claim Amount__

This chapter deals with Xi, the random variable denoting the claim size. It starts off by deriving some rough approximations for premium calculations.

For a Poisson claim number process, the mean and variance of the total claim amount process can be obtained in compact form. For a renewal counting process, one needs to appeal to asymptotics; the SLLN and CLT for renewal processes help in getting some grasp on the moments. Why bother about the mean and variance of the total claim amount process in an asymptotic sense? Because they give a clue to premium pricing: the behavior of S(t)/t shows that the premium income should grow roughly linearly in time. Using the mean and standard deviation of the total claim amount process, various types of premiums can be charged.

- Premium based on Net Equivalence principle
- Premium based on Expected value principle
- Premium based on Variance principle
- Premium based on Standard deviation principle

Of these four thumb rules, the premium based on the standard deviation principle best fits the theoretical requirements.
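The four premium principles are one-liners once the first two moments of S are available. A sketch with illustrative numbers (the loading ρ = 0.1 and the function names are my own choices):

```python
def premiums(mean_s, var_s, rho=0.1):
    """Four textbook premium principles for a total claim amount S.

    rho is the safety loading added on top of the pure (net) premium;
    the value used below is purely illustrative.
    """
    return {
        "net_equivalence": mean_s,                      # p = E S
        "expected_value": (1.0 + rho) * mean_s,         # p = (1 + rho) E S
        "variance": mean_s + rho * var_s,               # p = E S + rho Var S
        "std_deviation": mean_s + rho * var_s ** 0.5,   # p = E S + rho sd(S)
    }

# compound Poisson example: lam = 10, t = 1, Exp(1) claims, so
# E S = lam t E X = 10 and Var S = lam t E X^2 = 20
p = premiums(10.0, 20.0)
```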

The second section of this chapter deals with the claim size distribution. This section is about stat fundas like choosing the right distribution for claim sizes, checking whether the chosen distribution fits the data, etc. One often hears about fat-tailed distributions in many areas, especially in finance. How does one go about measuring tail fatness? Say you are given an arbitrary distribution: how do you decide analytically whether it is fat tailed? There is no standard answer, but one way is to see how the tail behaves relative to the exponential distribution: roughly, if the tail 1 − F(x) decays slower than every exponential e^(−λx), the distribution is considered fat tailed, and if it is dominated by some exponential, it is light tailed.

The section introduces a graphical tool called the “mean excess plot” that serves as a guide for checking for fat tails in the claim sizes. The standard light-tailed distributions that one can use are the exponential, gamma, Weibull and truncated normal. It also mentions a list of fat-tailed distributions, some of which were totally unfamiliar to me. The author gives a note of caution that it is not easy to distinguish between these distributions based on parameter estimation (MLE). Sometimes I wonder why not choose a decent prior and keep updating the claim size distribution, instead of dealing with these complex fat-tailed distribution functions that are hard to distinguish anyway.
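The mean excess plot is straightforward to compute from data. A sketch (my own implementation): e(u) = E[X − u | X > u]; an increasing empirical e(u) points to fat tails, while a flat one points to exponential-type tails (by memorylessness, e(u) ≡ 1/λ for the exponential):

```python
import random

def mean_excess(sample, u):
    """Empirical mean excess e(u): average of (x - u) over the x > u.

    Evaluated over a grid of thresholds u, an increasing curve suggests a
    fat-tailed claim size distribution and a roughly constant curve
    suggests exponential-type tails.
    """
    exceedances = [x - u for x in sample if x > u]
    if not exceedances:
        return None
    return sum(exceedances) / len(exceedances)

rng = random.Random(9)
exp_sample = [rng.expovariate(1.0) for _ in range(20000)]
# for Exp(1) data the mean excess is flat at 1 regardless of the threshold
e_low = mean_excess(exp_sample, 0.5)
e_high = mean_excess(exp_sample, 2.0)
```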

The last section deals with the distribution of the total claim amount under the standard assumption that the claim number process and the claim sizes are independent. Some appealing properties of the compound Poisson process are derived, one of them being that a sum of independent compound Poisson variables is again compound Poisson. A useful tool for working with such mixture distributions is the characteristic function, and a few examples are shown in the context of compound Poisson variables. Three approximation techniques are suggested for computing the total claim amount distribution: the Panjer recursion, an approximation based on the CLT, and Monte Carlo/bootstrap methods. All three have their own drawbacks.
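Of the three, the Panjer recursion is short enough to sketch. Assuming (my choice) integer-valued claim sizes with no mass at zero and a Poisson(λ) claim count, the probabilities g_s = P(S = s) satisfy g_s = (λ/s) Σ_j j f_j g_{s−j} with g_0 = e^(−λ):

```python
import math

def panjer_compound_poisson(lam, f, s_max):
    """Panjer recursion for the compound Poisson total claim distribution.

    f[j - 1] is P(X = j) for integer claim sizes j = 1..len(f); returns
    the list g with g[s] = P(S = s) for s = 0..s_max, using
    g_s = (lam / s) * sum_j j * f_j * g_{s - j} and g_0 = exp(-lam).
    """
    g = [math.exp(-lam)]                    # P(S = 0) = P(no claims at all)
    for s in range(1, s_max + 1):
        acc = 0.0
        for j in range(1, min(s, len(f)) + 1):
            acc += j * f[j - 1] * g[s - j]
        g.append(lam * acc / s)
    return g

# illustrative: lam = 3 claims on average, claim size 1 or 2 with prob 1/2
g = panjer_compound_poisson(3.0, [0.5, 0.5], 40)
mean_s = sum(s * g[s] for s in range(len(g)))   # should be lam * E X = 4.5
```

Truncating at s_max = 40 loses only a negligible amount of mass here, since the claim sizes are bounded and E S = 4.5.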

__Chapter 4: Ruin Theory__

I felt this chapter was the most challenging one in the book. The question it deals with is easy to state: given a total claim amount process, what is the probability that the firm goes bankrupt? Starting from an initial capital, the premium income process and the claim disbursement process together can cause the net capital to drop below zero. The exact ruin probability is complicated to compute, so this chapter gives techniques for rough approximations, essentially bounds on the ruin probability, with math based on the renewal equation. Most of the content deals with the “small claim size” scenario.
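Exact ruin probabilities are hard, but a crude Monte Carlo estimate of the finite-horizon version takes only a few lines. A sketch (all parameter values invented): the risk process is U(t) = u + ct − S(t), and ruin means U drops below zero at some claim arrival before the horizon:

```python
import random

def ruin_probability(u, c, lam, claim_sampler, horizon, n_sims, rng):
    """Monte Carlo estimate of the finite-horizon ruin probability.

    The risk process is U(t) = u + c * t - S(t); ruin occurs if the
    capital is negative at some claim arrival before `horizon`.
    """
    ruins = 0
    for _ in range(n_sims):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)        # time of the next claim
            if t > horizon:
                break                        # survived the whole horizon
            claims += claim_sampler(rng)
            if u + c * t - claims < 0.0:
                ruins += 1                   # capital went negative: ruin
                break
    return ruins / n_sims

rng = random.Random(13)
# invented numbers: initial capital 10, premium rate 12 against an expected
# claim outflow of lam * E X = 10 per unit time (safety loading 20%)
p_hat = ruin_probability(10.0, 12.0, 10.0,
                         lambda r: r.expovariate(1.0), 10.0, 2000, rng)
```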

__Part II – Experience Rating__

__Chapter 5 & 6: Bayes Estimation + Linear Bayes Estimation__

The estimation problem dealt with in these chapters is:

How can one determine the premium for a specific policy by taking the claim history of that policy into account?

Two models are introduced: a heterogeneity model and a model based on linear Bayes estimation. The idea of the heterogeneity model is to incorporate a customer-specific parameter that captures individual attributes. One assumes a prior for this parameter and a likelihood model for the data, and then updates the prior. Given the claim history of a customer, one tries to find a reasonable approximation to the expected claim size given this heterogeneity parameter. Even though this looks good in theory, it rests on a strong assumption: conditional on the heterogeneity parameter, the claim sizes are iid. The chapter on linear Bayes estimation relaxes this condition and states a model with rather weak assumptions.
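The flavor of the heterogeneity model can be sketched with the classic Poisson–Gamma pair (my choice of prior and likelihood, used only to illustrate the conjugate updating that the chapters treat in more generality):

```python
def posterior_gamma(prior_shape, prior_rate, yearly_counts):
    """Conjugate update for a policy-specific claim frequency theta.

    Prior: theta ~ Gamma(prior_shape, prior_rate); likelihood: yearly claim
    counts are iid Poisson(theta) given theta. The posterior is again
    Gamma, with shape + sum(counts) and rate + number of observed years.
    """
    return prior_shape + sum(yearly_counts), prior_rate + len(yearly_counts)

def credibility_premium(prior_shape, prior_rate, yearly_counts):
    """Posterior mean of theta: a blend of the portfolio-wide prior mean
    and the policy's own observed average claim count."""
    shape, rate = posterior_gamma(prior_shape, prior_rate, yearly_counts)
    return shape / rate

# illustrative prior: portfolio mean frequency 2 claims/year (Gamma(4, 2));
# this policy reported 3 claims over 5 years, pulling its premium down
prem = credibility_premium(4.0, 2.0, [0, 1, 0, 2, 0])
```

The posterior mean 1.0 sits between the prior mean 2.0 and the policy's sample mean 0.6, weighted by the number of observed years, which is exactly the credibility idea behind linear Bayes estimation.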

Even though the context of the book is non-life insurance, the math relating to counting measures can be applied to a variety of areas. For a reader looking for a solid understanding of compound Poisson processes, this book is a good starting point, and the total claim amount process serves as a good real-life example to keep in mind while going over the abstract details of point processes.