Archive | Credibility Theory

Empirical Bayes Credibility

6 Dec

The chapter on Empirical Bayes Credibility is one of the most criticised, if other blogs and forums are anything to go by. The usual criticism is that the treatment makes the subject more confusing than it really is.
The chapter implements an approach to credibility theory that doesn’t require any assumptions about the prior distribution – a non-parametric approach. Fitting a distribution makes life easier on the maths front and allows for a more precise estimate, but on the con side it is difficult – particularly in the absence of a large data set – to show that our data actually follows a particular distribution. Put more precisely: there is a real danger of overfitting if we assume a particular distribution.

There are two models outlined, which the CT6 core reading calls model 1 and model 2. Model 1 is the simplest available implementation of this method, and the core reading says it is more useful for learning than for doing. This model is known outside the actuarial exam world as Buhlmann credibility. Model 2 adds the complication of making adjustments for the volume of business in each year (elsewhere, Buhlmann-Straub).
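To make Model 1 concrete, here is a minimal sketch of the EBCT Model 1 (Buhlmann) calculation as I understand it: the credibility factor Z is built from the within-risk and between-risk variance estimates, with no prior distribution assumed. The function name and data layout are my own invention, not the core reading's.

```python
import numpy as np

def ebct_model1(claims):
    """EBCT Model 1 (Buhlmann) credibility premiums.

    claims: rows are risks, columns are years of aggregate claims
    (equal volumes assumed, as in Model 1).
    """
    X = np.asarray(claims, dtype=float)
    N, n = X.shape
    risk_means = X.mean(axis=1)                 # per-risk means
    overall_mean = X.mean()                     # grand mean, estimates E[m(theta)]
    # estimate of E[s^2(theta)]: average within-risk sample variance
    within = X.var(axis=1, ddof=1).mean()
    # estimate of Var[m(theta)]: between-risk variance less a bias correction
    between = risk_means.var(ddof=1) - within / n
    if between <= 0:
        Z = 0.0                                 # no credibility for individual experience
    else:
        Z = n / (n + within / between)
    # each premium is a weighted average of the risk's own mean and the grand mean
    premiums = Z * risk_means + (1 - Z) * overall_mean
    return Z, premiums
```

Note that each risk's premium always lands between its own sample mean and the overall mean, with Z doing the interpolation.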

Given that people find it difficult to use the CMP for this material, it is worth remembering that we are not stuck with the CMP for learning this stuff. Apart from Loss Distributions, which runs through the material quickly, there are a number of sets of lecture notes available online which cover the same material.

Of these, one of my favourites is

http://www.math.ku.dk/~schmidli/rt.pdf

This treatment discusses the derivation of the two methods and gives examples of how they are used. A more concise alternative can be found here:

http://personalpages.manchester.ac.uk/staff/ronnie.loeffen/risktheory2013/risktheory_2013_main.pdf

The final resource I suggest is the ‘study note’ on credibility theory supplied by the SOA/CAS, which is actually a draft chapter from Foundations of Casualty Actuarial Science. It can be found here:

Foundations of actuarial science

For a reader in the Institute of Actuaries system there are pros and cons to this resource. On the con side, the notation differs from that used by the CT6 Core Reading. On the pro side, there are plenty of examples and exercises – and, I think importantly, examples and exercises which strip out the purely actuarial/insurance details from a quintessential problem facing anyone working with data from more than one source (be it experiment, survey, etc.): how can these multiple data sources be used together to form a more accurate view of the object of interest?


Credibility Theory Continued

14 Nov

It is straightforward to express the problem of credibility: given a well-established premium of interest, how much credibility should be given to a smaller, more specific data set which supports a different value?
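The standard answer takes the form of a credibility premium – a weighted average of the two estimates, sketched here in generic notation (with X̄ the mean of the specific experience, μ the established collateral estimate, and Z the credibility factor):

```latex
P = Z\,\bar{X} + (1 - Z)\,\mu, \qquad 0 \le Z \le 1
```

Every method in this part of the syllabus is, in effect, a different recipe for choosing Z.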

There are libraries of methods for assessing credibility, but the CT6 syllabus focuses on a small number of Bayesian methods, all of which could be called empirical Bayes in the sense used by statisticians such as George Casella (although the CT6 notes use ‘Empirical Bayes’ to refer to a specific non-parametric method in Chapter 6). Loss Models explores a larger selection of methods than the CT6 core reading, and skimming some of the non-CT6 methods, or even just reading the table of contents, can be a good way to contextualise what’s in the core reading.

Note that both Loss Distributions and the SOA/CAS cover more methods of assessing credibility than the British CT6 core reading. R.E. Beard comments in Risk Theory (and possibly the comment can also be found in Daykin’s Practical Risk Theory?) that credibility theory was pursued with more enthusiasm in the States than in the UK.

In the first of the two chapters on Credibility in the CT6 core reading, then, three models are presented. The first is a thumbnail sketch of limited fluctuation credibility, the ‘old school’ credibility method, which is treated in more detail in Loss Distributions and in some of the references from my last post.

More serious treatment is given to the two Bayesian models: Poisson-gamma for counts data (the number of claims arriving) and normal-normal for claim severity. Both are common empirical Bayes methods, and the first appears as the illustrative example in the Wikipedia entry for Empirical Bayes (which also links to a related motor accident example) – although note that the Wikipedia treatment gives different equations to those in the core reading, since it is for a single-observation model.
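For the multi-year Poisson-gamma model, the conjugate updating step is simple enough to sketch in a few lines of Python. The function name and toy parameterisation (Gamma with shape alpha and rate beta, prior mean alpha/beta) are my own choices; the conjugacy result itself is standard.

```python
def poisson_gamma_premium(counts, alpha, beta):
    """Posterior mean of a Poisson rate under a Gamma(alpha, beta) prior.

    Observing counts x_1..x_n gives posterior Gamma(alpha + sum(x), beta + n),
    so the posterior mean is a credibility-weighted average of the sample
    mean and the prior mean alpha / beta.
    """
    n = len(counts)
    total = sum(counts)
    Z = n / (n + beta)                          # credibility factor
    posterior_mean = (alpha + total) / (beta + n)
    # identical to the credibility form: Z * (total / n) + (1 - Z) * (alpha / beta)
    return Z, posterior_mean
```

The comment at the end is the point of the exercise: the Bayesian posterior mean falls out in exactly the Z-weighted form above.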

The normal-normal model is also a commonly used Empirical Bayes model, and slightly more complex than the Poisson-gamma model. It is a little harder to find free web resources for the normal-normal model, although there is a great introductory paper which covers it here -> www.biostat.jhsph.edu/~fdominic/teaching/…/Casella.EmpBayes.pdf (don’t be put off by the more ‘mathy’ treatment). The normal-normal model is also summarised here -> http://www.biostat.jhsph.edu/~fdominic/teaching/BM/2-4.pdf, where it is called the normal model with unknown mean and known variance.
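The normal-normal update can be sketched the same way. Again, the function name and example numbers are mine; this is the known-variance model from the second link, with sampling variance sigma2 and prior N(mu0, tau2).

```python
def normal_normal_premium(x, sigma2, mu0, tau2):
    """Posterior mean of theta when X_i ~ N(theta, sigma2), theta ~ N(mu0, tau2).

    sigma2 is the known sampling variance, (mu0, tau2) the prior mean and
    variance; the posterior mean is a credibility mixture of the sample
    mean and mu0.
    """
    n = len(x)
    xbar = sum(x) / n
    Z = n / (n + sigma2 / tau2)                 # credibility factor
    return Z, Z * xbar + (1 - Z) * mu0
```

Note the same structure as the Poisson-gamma case: more data (larger n) or a vaguer prior (larger tau2) pushes Z towards 1 and the premium towards the sample mean.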

Both of these models are treated in widely available texts, and those treatments are identical to what the core reading presents, except that they don’t present formulae for finding Z, which is really only required to make the models comparable to the limited fluctuation models.

Next time, we shall look at what the Core Reading calls ‘Empirical Bayes Credibility’ and the rest of the world knows as a non-parametric sub-species of ‘Empirical Bayes’. This seems to be an area of the Core Reading which is frequently panned on forums, and hopefully we can find some other resources which are a tad clearer.

 

Chapter 5: Credibility Theory

6 Nov

This is likely to be a little shorter than usual, but there will be more on this topic.

One of my goals in this blog is to identify cheap or free (most likely online) resources. Although it seems straightforward to find academics who have posted lecture notes covering Financial Mathematics (CT1) or the standard undergraduate probability and statistics covered in CT3, it is pretty hard to find material relating to the core of CT6 (as distinct from mainstream statistical topics such as GLMs or Time Series).

Hence, for this first bite at Credibility Theory, I want to briefly mention some of the free resources on this topic made available by the American associations. The material in what is called Learning Outcome H for Exam 4/C of the Casualty Actuarial Society can be studied either using the text by Willmot, Panjer and Klugman, which is also suggested reading for the Institute and Faculty’s CT6, or from one of two study notes made available by the Society of Actuaries.

The first is the Credibility Theory chapter from the text ‘Foundations of Casualty Actuarial Science’ (available from the CAS), and can be found here: www.soa.org/files/pdf/C-21-01.pdf

The second is a study note called Topics in Credibility, which is here: www.soa.org/files/pdf/C-24-05.pdf

This one is written specifically for people studying Exam 4/C, partly in order to address some gaps in the first reference. However, the first is possibly more in tune with the CT6 notes, in the sense that it more explicitly discusses the connection between credibility and Bayesian statistics. Both study notes come with a number of exercises and answers.

A much briefer look at credibility for those who want to read a concise treatment before jumping into detail is:

http://thoughtleaderpedia.com/Marketing-Library/Credibility_Trust/CredibilityTheoryForDummies.pdf

I like the way this one discusses, to some extent, when to use the different approaches.

 

For a more statistical look at Credibility, not necessarily connected to the exam material, you could look at:

http://www.actuaryzhang.com/publication/slides_CAS_Mar.pdf

I like this stuff because hierarchical and geospatial models are what I was doing at uni, but it also helps to build a bridge between what statisticians are doing and what actuaries are doing. It also sheds some light on the analyses insurance companies perform in real life, rather than what people study for the exams. This last is for interest rather than for exam study, per se.