
Like the flu, alcopops and Celine Dion, Excel is fine in small doses

In the world of actuarial and financial modelling, Excel spreadsheets still appear to be king. I think this is due, in large part, to the fact that we actuaries, quants and economists building the models feel a desperate need to own the model and to change it as we go. We prize the ease of change and flexibility of the spreadsheet far more than the assurance of having no errors. A few stories have emerged in the press recently about how spreadsheet models have exhibited what our software engineers refer to as “undiscovered and undesirable features”.

Last year JP Morgan reported a large and unusual year-on-year change in a VaR statistic, and in their 10-Q filing attributed this primarily to changes in their synthetic credit portfolio. However, a key driver of the change appears to have been someone discovering an error in the VaR reporting spreadsheet used for the past few years. In simplistic terms, the spreadsheet formula divided by the sum of two numbers rather than their average, potentially understating VaR by half. No one picked this up until they went through a formal process to build a robust application and compared the outputs. Article
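To see why that kind of slip roughly halves the answer, here is a minimal sketch with made-up numbers (not JP Morgan's actual formula or data): dividing by the sum of two figures instead of their average means the denominator is about twice as large as intended.

```python
# Illustrative only: invented numbers, not the actual VaR calculation.
new_rate = 102.0   # two hypothetical input figures
old_rate = 98.0
numerator = 5.0    # some exposure-weighted loss figure, also invented

correct = numerator / ((new_rate + old_rate) / 2)   # divide by the average
buggy   = numerator / (new_rate + old_rate)         # divide by the sum

print(f"correct: {correct:.4f}")   # ~0.0500
print(f"buggy:   {buggy:.4f}")     # ~0.0250, understated by roughly half
```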

Recently the “Reinhart-Rogoff” study has been in the news. This study is the origin of the widely quoted statistic that economies with debt greater than 90% of GDP have significantly lower growth rates than economies with lower debt/GDP ratios. As it turns out, the spreadsheet calculation behind this statistic may have been misstated: amongst other things, people are disputing the fact that the Excel formula did not sum the whole column, appearing to leave several significant economies (Australia, Belgium, Canada) out of the calculation. This appears to have been missed in the checking of the spreadsheet. Article
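The effect of a formula range that silently skips a few rows is easy to demonstrate. The sketch below uses invented growth figures, not the Reinhart-Rogoff data; it simply shows how an average over a truncated range can differ materially from the average over the full column.

```python
# Invented growth rates (%), purely for illustration.
growth = {
    "Australia": 3.0, "Belgium": 2.5, "Canada": 2.2,   # rows missed by the formula range
    "CountryD": -0.5, "CountryE": 0.8, "CountryF": 1.1,
}

all_values = list(growth.values())
truncated  = all_values[3:]          # mimics a formula that stops short of the full column

full_mean      = sum(all_values) / len(all_values)
truncated_mean = sum(truncated) / len(truncated)

print(f"average over the full column:   {full_mean:.2f}%")       # ~1.52%
print(f"average over the truncated set: {truncated_mean:.2f}%")  # ~0.47%, noticeably lower
```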

I’ve always been a big believer that the flexibility to update and change financial models is sometimes not worth the huge uncertainty it creates, nor the additional governance you need to put in place (if that is even possible). The challenge is to get us “non-committals” to commit to a model design, specify it properly and then have it robustly coded, even if it will not always be the perfect model.

Someone once told me to accept that all models are wrong; it’s when they are not useful that we should worry. Building our models in a more robust environment would reduce the flexibility of the user to change and tweak, but maybe that isn’t a bad thing. Maybe senior management would rather have a model that is stable, locked down, fully tested and consistent. It may still be wrong, but arguably it would be more useful.