Scheduling your validation over draft and final submissions

A client recently asked our opinion on how to spread validation over the draft and final SCR (Solvency Capital Requirement) submissions. There is no definitive guidance saying exactly what to do, but there are several considerations we think you should take into account.

Materiality is probably the biggest factor in your decision-making. Carry out as much validation as possible on the material risk classes and risk groups against the final run of the model; the lower-materiality risk groups and classes can probably be covered on the draft.
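
To make that concrete, here is a minimal sketch of a materiality-driven schedule. The risk class names, figures and the 10% threshold are purely illustrative, and the standalone SCR contribution is assumed to be available from your model output.

```python
# Minimal sketch: assign validation of each risk class to the draft or final
# run based on its share of the overall SCR. Names, figures and the 10%
# threshold are illustrative, not prescribed values.

def schedule_validation(scr_contributions, materiality_threshold=0.10):
    """Map each risk class to 'final' if material, else 'draft'.

    scr_contributions: dict of risk class -> standalone SCR contribution.
    """
    total = sum(scr_contributions.values())
    return {
        risk_class: "final" if contribution / total >= materiality_threshold
        else "draft"
        for risk_class, contribution in scr_contributions.items()
    }

# Example: operational risk sits below the threshold, so it is validated
# on the draft; the material classes wait for the final run.
plan = schedule_validation({
    "premium_risk": 450.0,
    "reserve_risk": 300.0,
    "cat_risk": 150.0,
    "operational_risk": 60.0,
})
print(plan)  # operational_risk -> 'draft', the rest -> 'final'
```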

The balance of validation depends on what is expected to change between the draft and final runs, and by how much. Areas that are not re-parameterised between the two submissions can usually be safely addressed on the draft. It may also be worth validating on the draft those areas whose parameterisation has historically shown little volatility, only updating the validation for the final run on an exception basis if something moves outside a tolerance or threshold.
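
An exception-based check of that kind can be as simple as the sketch below: compare each parameter between the draft and final calibrations and flag only those that move beyond a relative tolerance. The parameter names and the 5% tolerance are illustrative assumptions.

```python
# Minimal sketch of exception-based re-validation: only parameters that move
# beyond a relative tolerance between the draft and final calibrations
# trigger fresh validation work.

def parameters_needing_revalidation(draft_params, final_params, tolerance=0.05):
    """Return parameters whose relative movement exceeds the tolerance."""
    exceptions = {}
    for name, draft_value in draft_params.items():
        final_value = final_params.get(name, draft_value)
        if draft_value == 0:
            continue  # relative movement undefined; handle separately
        movement = abs(final_value - draft_value) / abs(draft_value)
        if movement > tolerance:
            exceptions[name] = (draft_value, final_value, movement)
    return exceptions

draft = {"premium_risk_cv": 0.12, "reserve_risk_cv": 0.08}
final = {"premium_risk_cv": 0.125, "reserve_risk_cv": 0.095}
for name, (d, f, move) in parameters_needing_revalidation(draft, final).items():
    print(f"{name}: {d} -> {f} ({move:.0%} move) - revisit validation")
```

Here only the reserve risk parameter moves by more than 5%, so only its validation needs to be refreshed for the final run.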

In terms of overall tests, the P&L attribution is an obvious candidate for the draft model, given that it usually speaks to model scope and coverage rather than purely to parameterisation. The model scope is unlikely to change between the draft and final submissions, and you would probably not be in a much better position by waiting to use a new year's data in the exercise. Consider making that a "draft" test candidate.
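
For readers less familiar with the test, the core of a P&L attribution check is whether the P&L explained by the model's risk drivers comes close to the actual P&L; a large unexplained residual points to a scope or coverage gap. A minimal sketch follows, with all figures and the 5% threshold being illustrative.

```python
# Minimal sketch of a P&L attribution check: a large residual left
# unexplained by the modelled risk drivers suggests a coverage gap.

def unexplained_pnl(actual_pnl, attributed_pnl):
    """Return the residual not explained by the modelled risk drivers."""
    return actual_pnl - sum(attributed_pnl.values())

actual = 125.0
attributed = {"market": 70.0, "underwriting": 35.0, "credit": 10.0}
residual = unexplained_pnl(actual, attributed)
if abs(residual) / abs(actual) > 0.05:
    print(f"Unexplained P&L of {residual} - investigate model scope")
```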

Much of the groundwork for some tests can be done on the basis of the draft run. Reverse stress tests, for example, can be built and trialled on the draft so that they are nearly automatic when it comes time to compare against the final run.
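
The sketch below shows the shape of such a harness: a search for the scenario severity at which own funds are exhausted, trialled against the draft model and then re-pointed, unchanged, at the final model. `run_model` is a hypothetical stand-in for your internal model, assumed to return post-stress own funds that decrease with severity.

```python
# Minimal sketch of a reverse stress test harness: bisection search for the
# stress severity at which modelled own funds hit zero.

def reverse_stress_severity(run_model, lo=0.0, hi=5.0, iterations=40):
    """Find the severity multiplier at which own funds are exhausted.

    run_model: callable mapping a severity multiplier to post-stress own
    funds, assumed to be decreasing in severity.
    """
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if run_model(mid) > 0:
            lo = mid  # still solvent: push the stress harder
        else:
            hi = mid  # insolvent: back off
    return (lo + hi) / 2

# Trial on the draft run; the identical call is re-run against the final
# model when it lands, making the comparison nearly automatic.
draft_severity = reverse_stress_severity(lambda s: 1000.0 - 400.0 * s)
print(f"Own funds exhausted at severity {draft_severity:.2f}")  # ~2.50
```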

Ensuring that the key expert judgements have a good structure now will make them significantly easier to update after the parameterisation. You can also back-test the key expert judgements now, as that only requires having sufficient information to test the expert's judgement rather than a completed model run.
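
As a simple illustration of such a back-test, suppose the judgement is a predicted loss ratio range: the test is how often realised outcomes landed inside the expert's stated range. All figures here are illustrative.

```python
# Minimal sketch of back-testing an expert judgement against realised
# outcomes - no model run required, only historical data.

def backtest_judgement(realised, low, high):
    """Return the share of realised observations inside the expert's range."""
    inside = sum(1 for x in realised if low <= x <= high)
    return inside / len(realised)

# Expert judged the loss ratio to sit between 60% and 80% in most years.
realised_loss_ratios = [0.62, 0.71, 0.85, 0.66, 0.78, 0.74, 0.59]
hit_rate = backtest_judgement(realised_loss_ratios, 0.60, 0.80)
print(f"Realised outcomes inside the judged range: {hit_rate:.0%}")  # 71%
```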

Any validation of the accuracy, completeness and appropriateness of the data to be used for the parameterisation or model run is another candidate for early validation. It may be safe to assume that the data, and its quality, is unlikely to change between model runs.
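
Checks of that sort can be scripted and signed off well before either run, along the lines of the sketch below. The field names and rules are illustrative; in practice the checks would mirror your own data dictionary.

```python
# Minimal sketch of upfront data quality checks: completeness (missing
# fields) and accuracy (implausible values) signed off before the model runs.

def data_quality_report(records, required_fields):
    """Flag missing required fields and negative premiums."""
    issues = []
    for i, record in enumerate(records):
        for field in required_fields:
            if record.get(field) is None:
                issues.append(f"row {i}: missing {field}")
        premium = record.get("premium")
        if premium is not None and premium < 0:
            issues.append(f"row {i}: negative premium {premium}")
    return issues

records = [
    {"policy_id": "A1", "premium": 1200.0, "inception": "2023-01-01"},
    {"policy_id": "A2", "premium": -50.0, "inception": None},
]
for issue in data_quality_report(records, ["policy_id", "premium", "inception"]):
    print(issue)
```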

As always, organisations come in all shapes and sizes and we believe a “one-size-fits-all” approach will seldom add significant value.  If you’d like to chat about how to tailor this to the curves of your organisation, please get in touch.