It is straightforward to express the problem of credibility: given the existence of a well-established premium, how much credibility should be given to smaller, more specific data which supports a different parameter?

There are libraries of methods for assessing credibility, but the CT6 syllabus focuses on a small number of Bayesian methods, all of which could be called empirical Bayes in the sense used by statisticians such as George Casella (although the CT6 notes use Empirical Bayes to refer to a specific non-parametric method in Chapter 6). Loss Models explores a larger selection of methods than the CT6 core reading, and skimming some of the non-CT6 methods, or even just reading the table of contents, can be a good way to contextualise what’s in the core reading.

Note that both Loss Distributions and the SOA/CAS syllabuses cover more methods of assessing credibility than the British CT6 core reading. R.E. Beard comments in Risk Theory (and possibly the comment can also be found in Daykin’s Practical Risk Theory?) that the study of credibility was pursued with more enthusiasm in the States than in the UK.

In the first of the two chapters on Credibility in the CT6 core reading, then, three models are presented. The first is a thumbnail sketch of limited fluctuation credibility, the ‘old school’ credibility method, which is treated in more detail in Loss Distributions and in some of the references from my last post.

More serious treatment is given to the two Bayesian models: Poisson-gamma for count data (the number of claims arriving) and normal-normal for claim severity. These are both common empirical Bayes methods, and the first can be found as the illustrative example on the Wikipedia entry for Empirical Bayes (which also links to a related motor accident example) – although note that the Wikipedia treatment gives different equations to those in the core reading, as it works with a single-observation model.
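To make the Poisson-gamma model concrete, here is a small sketch with made-up numbers (the prior parameters and claim counts are purely illustrative, not from the core reading). The posterior mean comes out as a credibility-weighted average of the sample mean and the prior mean, which is the key point of the model:

```python
# Poisson-gamma credibility sketch (all numbers hypothetical).
# Prior: lambda ~ Gamma(alpha, beta); data: n years of claim counts,
# each x_i ~ Poisson(lambda). Posterior is Gamma(alpha + sum(x), beta + n).

alpha, beta = 100.0, 1.0          # prior mean claim rate = alpha / beta = 100
counts = [110, 95, 123, 101, 97]  # observed annual claim counts (made up)

n = len(counts)
x_bar = sum(counts) / n

Z = n / (n + beta)                              # credibility factor
posterior_mean = Z * x_bar + (1 - Z) * (alpha / beta)

# The same number falls straight out of the posterior distribution:
direct = (alpha + sum(counts)) / (beta + n)
assert abs(posterior_mean - direct) < 1e-12

print(round(posterior_mean, 2))  # -> 104.33
```

Note how the weight Z depends only on the volume of data n and the prior parameter beta: more years of experience pushes Z towards 1 and the estimate towards the sample mean.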

The normal-normal model is also a commonly used empirical Bayes model, and slightly more complex than the Poisson-gamma model. It is a little harder to find free web resources for the normal-normal model, although there is a great introductory paper which covers it here -> www.biostat.jhsph.edu/~fdominic/teaching/…/Casella.EmpBayes.pdf (don’t be put off by the more ‘mathy’ treatment). The normal-normal model is also summarised here -> http://www.biostat.jhsph.edu/~fdominic/teaching/BM/2-4.pdf, where it is called the normal model with unknown mean and known variance.
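The normal-normal model admits the same kind of sketch. Again the numbers below are purely illustrative (not taken from the core reading or the linked papers); the structure to notice is that the posterior mean has exactly the same Z-weighted form, with Z now driven by the ratio of the sampling variance to the prior variance:

```python
# Normal-normal credibility sketch (all numbers hypothetical).
# Prior: theta ~ N(mu, tau2); data: x_i ~ N(theta, sigma2), sigma2 known.

mu, tau2 = 100.0, 25.0          # prior mean and prior variance
sigma2 = 400.0                  # known sampling variance
data = [115.0, 108.0, 122.0, 99.0]  # observed claim sizes (made up)

n = len(data)
x_bar = sum(data) / n

Z = n / (n + sigma2 / tau2)     # credibility factor
posterior_mean = Z * x_bar + (1 - Z) * mu
posterior_var = 1 / (n / sigma2 + 1 / tau2)

print(round(posterior_mean, 2), round(posterior_var, 2))  # -> 102.2 20.0
```

With a noisy likelihood (large sigma2) relative to a confident prior (small tau2), Z stays small and the estimate shrinks heavily towards the prior mean, which is the behaviour the shrinkage literature linked above studies in detail.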

Both of these models are treated in widely available texts, in forms identical to what the core reading presents, except that those texts don’t present formulae for finding Z, which is really only required to make the results comparable with the limited fluctuation approach.
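For reference, the Z formulae in question (standard results for these conjugate models, written here in my own notation rather than quoted from the core reading) are:

```latex
% Poisson-gamma: prior \lambda \sim \text{Gamma}(\alpha, \beta), n observed counts
Z = \frac{n}{n + \beta},
\qquad \text{posterior mean} = Z\bar{x} + (1 - Z)\,\frac{\alpha}{\beta}

% Normal-normal: prior \theta \sim N(\mu, \tau^2), known sampling variance \sigma^2
Z = \frac{n}{n + \sigma^2/\tau^2},
\qquad \text{posterior mean} = Z\bar{x} + (1 - Z)\,\mu
```

In both cases Z \to 1 as n grows, so with enough data the Bayesian estimate converges to the sample mean regardless of the prior.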

Next time, we shall look at what the Core Reading calls ‘Empirical Bayes Credibility’ and the rest of the world knows as a non-parametric sub-species of ‘Empirical Bayes’. This seems to be an area of the Core Reading which is frequently panned on forums, and hopefully we can find some other resources to look at which are a tad clearer.