Archive | October, 2012

Chapter 4: Reinsurance (Equivalent CAS/SOA C/4 Section 29)

25 Oct

So far, I’ve had the inside track, inasmuch as the topics covered were important to my earlier statistical studies. ‘Reinsurance’, however, is not something that all that many statisticians are required to understand. (Apologies for pretending English is Latin and trying to use the ablative absolute.)

First, let us summarise the chapter:

We start with some terminology. Reinsurance is a method of risk transfer from one insurer to another. This chapter looks at three simple forms (a toy sketch of how a claim gets split follows the list):

  1. Proportional reinsurance. Does what it says on the tin.
  2. Excess of loss reinsurance. The insurer is responsible for paying the claim up to a value M. Above M the reinsurer is responsible, until an upper limit is reached, at which point the original insurer is again responsible.
  3. Stop loss reinsurance. The reinsurer is responsible for paying the amount owing above a claim amount M, with no upper limit.
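
To make the split concrete, here is a minimal sketch in R of who pays what under each arrangement. The notation is my own, not the notes’: X is the gross claim, a the retained proportion, M the retention and U the upper limit.

    # Toy payout functions (my own notation): X = gross claim, a = retained
    # proportion, M = retention, U = upper limit of the excess of loss layer
    insurer_prop   <- function(X, a) a * X         # proportional: insurer keeps a share
    reinsurer_prop <- function(X, a) (1 - a) * X

    insurer_xol   <- function(X, M, U) pmin(X, M) + pmax(X - U, 0)   # excess of loss
    reinsurer_xol <- function(X, M, U) pmin(pmax(X - M, 0), U - M)

    reinsurer_stop <- function(X, M) pmax(X - M, 0)  # stop loss: no upper limit

    # Sanity check: the excess of loss pieces always add back to the gross claim
    X <- c(0.5, 1.5, 3, 10); M <- 1; U <- 5
    all.equal(insurer_xol(X, M, U) + reinsurer_xol(X, M, U), X)  # TRUE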

We saw in the last chapter that, given a set of claim data, it is relatively straightforward to model the size of claims – one can check various distributions for their goodness of fit. The meat of this chapter is how to convert the expressions involving conditional probabilities, from both the original insurer’s and the reinsurer’s points of view, into tractable algebraic expressions.

To provide the future actuary with the tools to apply these expressions, we are given formulae for the moments of the lognormal and normal distributions, to simplify finding these moments without needing to integrate explicitly – necessary, we suppose, due to the difficulty of integrating the Gaussian density.
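
As a sanity check on that machinery, here is a quick R sketch (with parameters invented for illustration): for X lognormal with parameters mu and sigma and retention M, the insurer’s expected payment E[min(X, M)] has a closed form in terms of the standard normal cdf, which simulation should reproduce.

    mu <- 0; sigma <- 1; M <- 2    # made-up illustrative parameters

    # Closed form: E[min(X,M)] = exp(mu + sigma^2/2) * Phi((log(M) - mu - sigma^2)/sigma)
    #                            + M * (1 - Phi((log(M) - mu)/sigma))
    lev <- exp(mu + sigma^2 / 2) * pnorm((log(M) - mu - sigma^2) / sigma) +
           M * (1 - pnorm((log(M) - mu) / sigma))

    set.seed(1)
    x <- rlnorm(1e6, mu, sigma)
    c(closed_form = lev, simulated = mean(pmin(x, M)))  # should agree closely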

There is also some salad in the form of discussions of the implications of inflation (if the size of a claim is changed by inflation, but the retention limit is stationary, how does this affect the probability of exceeding the limit?); of the difficulty of estimating parameters when the data are censored by the retention limit (claim amounts above the retention limit show up only as the retention limit); and a simple statement of the fact that the position of a policyholder paying an excess on a general insurance policy is mathematically identical to the position of an insurer holding a reinsurance policy.
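
On the inflation point, a one-line sketch (numbers made up): if claims are scaled by a factor k but the retention M stays fixed, then P(kX > M) = 1 - F(M/k), which grows with k – the reinsurer picks up a steadily larger share of claims.

    k <- 1.05; M <- 2                      # 5% inflation, fixed retention (illustrative)
    p_before <- 1 - plnorm(M, 0, 1)        # P(X > M) for a lognormal(0, 1) claim
    p_after  <- 1 - plnorm(M / k, 0, 1)    # P(kX > M) = P(X > M/k)
    c(before = p_before, after = p_after)  # the second number is always the larger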

 


Chapter 3: Loss Distributions

19 Oct

I found this chapter more enjoyable than expected (not quite enjoyable the way ice cream is enjoyable, but still), essentially because it re-introduced me to some statistical ideas I hadn’t looked at for a long time. In some respects it was a trip back in time, to simple discussions of MLEs and method of moments estimators – the first things I learned in the last course I took in a physical classroom.

A quick review of the actual material. We are presented with the main probability distributions used to model the size of individual insurance claims – exponential, normal, lognormal, gamma, Pareto, generalised Pareto, Burr and Weibull. It is observed that the distributions of individual claims are often positively skewed, which is an obvious influence on this list. Much of the chapter is taken up with defining the characteristics of these distributions – their distribution functions and moments.

The second part of the material is a discussion of three methods of fitting distributions to data – or, more precisely, of finding the parameters which fit the data best, given a particular choice of distribution. After you have done this, the only place to go is obviously some sort of formal test of goodness of fit, and the anointed method in the CT6 material is the chi-squared test. A solid choice.
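
For the curious, here is a rough sketch of the whole pipeline in R, on simulated data (so every number below is invented): fit by maximum likelihood using fitdistr from the MASS package, then run a chi-squared test on binned counts. One caveat: chisq.test does not know a parameter was estimated, so strictly the degrees of freedom should be reduced by one before reading off a p-value.

    library(MASS)                       # for fitdistr

    set.seed(42)
    claims <- rexp(500, rate = 0.01)    # pretend claim sizes

    fit <- fitdistr(claims, "exponential")
    lambda_hat <- fit$estimate          # the MLE, 1 / mean(claims)

    # Bin the data and compare observed counts with fitted-exponential expectations
    breaks <- quantile(claims, probs = seq(0, 1, by = 0.1))
    breaks[1] <- 0; breaks[length(breaks)] <- Inf
    observed <- as.vector(table(cut(claims, breaks)))
    probs    <- diff(pexp(breaks, rate = lambda_hat))

    chisq.test(observed, p = probs)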

The material is rounded out with some commentary on mixture distributions, which may or may not be important for modelling individual losses, but will definitely be important for modelling portfolios of losses, where the number of losses and the size of those losses are random variables.
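
To give a taste of where that goes (a sketch with invented parameters, nothing more): simulating aggregate losses S = X_1 + … + X_N, where the number of claims N is Poisson and the claim sizes X_i are lognormal.

    set.seed(7)
    sim_aggregate <- function(n_sims, lambda = 10, mu = 0, sigma = 1) {
      # S = X_1 + ... + X_N with N ~ Poisson(lambda) and X_i ~ lognormal(mu, sigma)
      N <- rpois(n_sims, lambda)
      sapply(N, function(n) sum(rlnorm(n, mu, sigma)))
    }
    S <- sim_aggregate(1e4)
    c(mean = mean(S), sd = sd(S))   # to be compared with the formulae of later chapters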

It is my intention from now on to make reference to the US/Canadian exam system, where the equivalent SOA/CAS/CIA exam is Exam C/4. I have two reasons for doing this.

Number 1: Some of my readers may be in the US or Canada. The stats page WordPress gives me tells me that half my readers this week were in the US, albeit out of a very low readership so far.

Number 2: There are many free resources available for the SOA/CAS exams, probably far more than for the Institute and Faculty of Actuaries exams. Most likely this simply follows from the size of the United States. Whatever the reason, I figure knowing which material is equivalent gives people on either side of the Atlantic access to a lot more teaching aids and practice questions. In future posts, I aim to mention some of these free resources.

In the present case, the CT6 core reading covers topics also in the SOA/CAS syllabus, grouped together in a similar way. In fact, the main difference is that the SOA/CAS material goes further, presenting some extra hypothesis tests and some more material on graphical methods (or so I infer from blogs and forums dealing with preparation for this exam, and also from the text ‘Loss Models: From Data to Decisions’ by Klugman, Panjer and Willmot, which appears to follow the US/Canadian Exam C/4 material very closely in its choice of topics).

I am developing a habit of writing at least one thing that I would do differently if I were writing the material on a particular topic. In my last post I said I would introduce the exponential family earlier. In the C/4 exam there is apparently some mention made of graphical methods of assessing goodness of fit. Why not in CT6? Not even the simple Q-Q plot is there. (If you haven’t met the Q-Q plot, take the path of least resistance and let Wikipedia save you -> http://en.wikipedia.org/wiki/Q-Q_plot)

It’s intuitive, it’s easy to interpret, it highlights differences in skewness and tail weight, the kind of differences that this chapter of CT6 actually emphasises. It can be done instantly in R, or it can be done with a little bit of stuffing around in Excel.

It is as simple as plotting the quantiles of your data set against the quantiles of a proposed distribution. If it’s a nice straight line at 45 degrees, they match exactly. The further you move from this ideal, the less they match. If they match in the middle but diverge at either or both ends, then one has heavier tails than the other. If they match at one end but not at the other, they have different skewness.
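
And since I mentioned R, here it really is only a couple of lines (made-up data again): the sketch below fits a lognormal by matching the log-scale mean and standard deviation, then plots sample quantiles against the fitted theoretical ones.

    set.seed(3)
    claims <- rlnorm(300, meanlog = 7, sdlog = 1)   # invented claim data

    mu_hat    <- mean(log(claims))                  # quick-and-dirty lognormal fit
    sigma_hat <- sd(log(claims))

    qqplot(qlnorm(ppoints(length(claims)), mu_hat, sigma_hat), claims,
           xlab = "Theoretical lognormal quantiles", ylab = "Sample quantiles")
    abline(0, 1)   # the 45-degree line: points hugging it indicate a good fit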

‘How to Lie with Statistics’ makes statistics seem like it’s all about the graphs. And why not? Let’s get visual!

 

Chapter 2: Bayesian Statistics

15 Oct

Bayesian statistics is one of the few areas in the actuarial syllabus I’ve seen before, but when I first encountered it as a beginning statistics major, it made no sense, both from the point of view of how to do it, and from the point of view of what for.

To understand why Bayesian statistics might be important to an actuary, well, the best thing to do is to read the rest of the CT6 (or C/4) notes. To understand why and how it is interesting to a statistician, you could read the Scholarpedia article -> http://www.scholarpedia.org/article/Bayesian_statistics

The Scholarpedia article is, despite being relatively short, magisterial and comprehensive – as one might expect, given that it has been written and reviewed by some of the biggest names currently working in Bayesian statistics.

For another short and sweet view of the topic, one could also read the introduction at bayesian.org under the heading ‘What is Bayesian Analysis?’ (as of today, 15/10/2012, a small amount of scrolling past election notices may be required before you get to it).

For me, an obvious omission from the ActEd treatment of this topic is that, after emphasising the use of conjugate priors, there is no discussion of how to find the damn things. There is also not much discussion of diffuse priors. With respect to the first point, the notes make it seem like conjugate priors are usually available, whereas they are very rare outside the exponential family (although the exponential family does, of course, contain some of the most used probability distributions). The strangeness is compounded given that it is essential to understand exponential families of distributions in order to understand generalised linear models, and hence this family of distributions is taught later in the subject anyway.
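
To be fair, when a conjugate pair does exist, the mechanics are every bit as pleasant as the notes suggest. A sketch of the Poisson-gamma case, one of the exponential-family pairs CT6 actually uses (hyperparameters and data invented for illustration):

    # Prior: lambda ~ Gamma(a, b); data: x_1, ..., x_n ~ Poisson(lambda)
    # Posterior: lambda ~ Gamma(a + sum(x), b + n) -- no integration required
    a <- 2; b <- 1                # made-up prior hyperparameters
    x <- c(3, 1, 4, 2, 2)         # made-up claim counts

    a_post <- a + sum(x)
    b_post <- b + length(x)
    c(prior_mean = a / b, posterior_mean = a_post / b_post, mle = mean(x))
    # The posterior mean is a credibility-style blend of prior mean and sample mean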

With respect to diffuse priors, it should be noted that they are also difficult critters, and it is hard to find truly non-informative priors. James Berger, one of the heavyweights of Bayesian decision theory, apparently only admits the existence of four (4), although I think at least one of his four is a set of priors, rather than a single specific distribution. (The word ‘apparently’ appears in the preceding sentence because my only reference is my third-year Bayesian Statistics lecture notes, and the quote is not referenced. Most likely it appears in this paper – http://www.stat.duke.edu/~berger/papers/catalog.html (1985) – but I can’t be certain, because I only found the paper one second before rewriting this parenthesis. Reading it will have to wait.)

To give weight to my rant about how easy it is to find conjugate priors, I give below the recipe proposed by Raiffa and Schlaifer (not quite the originators of the term and the concept, but they appear to have given natural conjugates a lot of momentum), as written in S. James Press’s Applied Multivariate Analysis: Using Bayesian and Frequentist Methods of Inference, Second Edition, a text available in a Dover reprint for only slightly more than a nominal amount.

“…write the density or likelihood function for the observable random variables and then interchange the roles of the observable random variables and the parameters, assuming the latter to be random and the former to be fixed and known. Modifying the proportionality constant appropriately so that the new ‘density’ integrates to unity and letting the fixed parameters be arbitrary provides a density that is in this sense ‘conjugate’ to the original.”

Not terrifyingly difficult, but maybe not so trivial that it wouldn’t be a distraction if it isn’t what you’re specifically being tested on?
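
To see the recipe in action on the simplest case I could think of (my own example, not Press’s): take $x_1, \dots, x_n$ i.i.d. exponential with rate $\lambda$, so that

$$L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^{n} e^{-\lambda \sum_i x_i}.$$

Reading this as a ‘density’ in $\lambda$ with the data held fixed, normalising it (the normalising constant is that of a $\text{Gamma}(n+1, \sum_i x_i)$ distribution), and then letting the fixed quantities $n+1$ and $\sum_i x_i$ be arbitrary positive numbers $\alpha$ and $\beta$ gives the $\text{Gamma}(\alpha, \beta)$ family – which is indeed the conjugate prior for an exponential likelihood.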

Chapter 1: Games and Decisions

7 Oct

This is my first real post. I tried to express in my introductory post that my intention in this blog is to challenge myself to look at the Institute and Faculty of Actuaries’ Core Technical material in a way not necessarily suggested by the material itself.

The first chapter of CT6 Statistical Methods is a brief look at Game Theory and Decision Theory.

In this chapter some of the essential terminology of these two related topics is introduced. To wit:

  1. Dominated strategies
  2. Maximin criterion
  3. Saddle point strategy
  4. The Bayes criterion

I think the authors of this part of the notes missed the opportunity to use whimsy in the examples – one of the best opportunities throughout the Core Technical series, given it is hard to find spice in interest theory calculations. In this regard, I recommend Luce and Raiffa’s Games and Decisions (1957), available from Dover for no more than a semi-drinkable bottle of wine (in Australian bottle shops, anyway; depending on the country, you may get anything from the completely undrinkable to the really quite decent for that money). Note, though, that out of fourteen chapters, only two (chapter 4 and chapter 13) really correspond to the material in CT6.

While the above objection is only half serious, another way in which the Luce and Raiffa treatment is more interesting is that, for each classification of game it discusses, it gives an example game of research interest (whether for theoretical or practical reasons). In particular, the egg-cracking story (borrowed from Savage) used to motivate statistical decision theory is a far better illustration than the statistician tossing a coin used in CT6. Although, as a married man, I am a little distracted by the questions thrown up by the opening sentence of this example – ‘Your wife has broken five eggs into a bowl when you…volunteer to finish making the omelet’. The problem is to decide whether or not to check if a sixth egg is rotten before cracking it into the bowl, and so either make a large omelet or ruin five eggs. If it was not your wife, but your girlfriend, daughter, kitchen hand (to your chef) or housemate, how would your decision process change?

In the end there is a purpose to fitting game and decision theory to real-life situations, so to redress the lack in the CT6 notes I offer an inversion of the problems suggested there, based on TV’s The West Wing.

In the final series, a Republican and a Democratic presidential candidate are in the race to become Bartlet’s successor. In the episode Duck and Cover, a nuclear power plant whose approval was made possible by the lobbying of the Republican candidate goes into meltdown, and at least one of the repair crew dies after trying to fix it. The respective campaign managers must decide whether to put out a statement telling the world about the Republican candidate’s role. For the Republican candidate, putting out a statement first could minimise the political damage, because their side of the story will get told first; but if there is no statement from the Democrats, it will reveal the existence of a connection that might otherwise have stayed hidden. On the other hand, if the Democratic candidate points out the connection to the press, it could be either helpful or harmful to their campaign, depending on whether there is a Republican press statement available to neutralise it. Propose a pay-off matrix which describes this situation. Is there a spy-proof strategy?
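
For what it’s worth, here is one toy parameterisation in R – the payoffs are entirely my own invention – together with a check for a saddle point, i.e. a spy-proof pair of strategies:

    # Rows: Republican campaign; columns: Democratic campaign.
    # Entries: payoff to the Republicans (zero-sum, values invented).
    payoff <- matrix(c(-1, -2,
                       -4,  0),
                     nrow = 2, byrow = TRUE,
                     dimnames = list(c("R: statement", "R: silence"),
                                     c("D: statement", "D: silence")))

    maximin <- max(apply(payoff, 1, min))  # best of the row player's worst cases
    minimax <- min(apply(payoff, 2, max))  # worst of the column player's best cases
    c(maximin = maximin, minimax = minimax)
    # Here -2 != -1, so no saddle point: with these numbers, neither campaign
    # has a spy-proof pure strategy, which suits the drama rather well.

Whether those numbers capture the episode is, of course, up for debate – the interesting part is how small changes to them create or destroy the saddle point.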

The example above operates under the rules of decision making under certainty. In the show, there is an element of uncertainty in the form of whether the press discovers the connection and when. Adding this element to the problem above makes for a more complicated example.

The Beginning

4 Oct

Actuarial exams are famous for, amongst other things, their length. Possibly also difficulty, in some areas a little tedium, but definitely their length.

Having told everyone I’ve ever known that I would attempt the actuarial exams just as soon as I’d finished studying statistics, I’ve run out of excuses.

For me, the best way to study is to explain what I’m studying to someone else. As I don’t know anyone else who is interested, I’ve decided to set up this blog to find people who are.

I mentioned finishing a degree in statistics. To start as near as possible to where I left off, I’m going to begin with CT6 Statistical Methods. To try to say something different, in the hope of attracting an audience, I am going to talk about the material through texts outside the Core Reading, and even outside the suggested reading. Of course, I’m a cheapskate, so I’ll be talking about the material in relation to the cheapest sources I can find.

That’s it for now… see you in the soup.