Following a major hurricane we may turn to cat models, understanding that both the type of model and the interpretation methods used will significantly affect how insured or economic losses are ultimately estimated. What is often overlooked, however, is the crucial, and far larger, role that data quality plays in the overall process, right from the very outset.
I have previously mentioned my surprise at having seen so many organisations make decisions based on erroneous data, often for years before the problem was noticed. Maintaining data integrity is a multi-layered affair requiring appropriate checks and balances, going back as far as the master record where the data was originally created. Exactly how an organisation decides what should be defined and agreed as the master record is rarely a technological or systems problem; it tends instead to be a question of organisational governance, which is exactly why it is so often hard to get right.
Once the master record has been defined, building consistent and robust integration points between the relevant systems and applications is key. The single most important question I ask clients to consider is, “Which data sets are managed at enterprise level, and which are managed by the business users?” As with the growth of self-service reporting, data preparation driven by the end user is a growing phenomenon, and one to promote in the modern insurance world. Accepting that our business-critical data isn't always stored in a robust, well-governed application or reporting platform is the first step towards managing it properly; ignoring the issue will only make it worse in the long term. I have seen too many organisations fail to accept this two-tier approach, which leads to business users' data being created and shared without the necessary checks and balances.
Some of the new technologies currently offer viable replacements for spreadsheets, and as such are useful steps towards making self-service data preparation a reality. Tools such as Trifacta, which we use both internally across the Charles Taylor group and with our clients, offer much the same power and flexibility as a spreadsheet while providing the central governance and rigour that are essential in the enterprise space.
So, next time you deliberate over which model to use to make a business-critical decision, I recommend taking a step back and asking yourself two questions: are you confident that the input data is 100% accurate, and exactly how much interpretation took place during the data preparation process beforehand? Such interpretation may have profoundly influenced the data before it ever became a model input.
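Even simple automated checks catch much of the erroneous data described above before it reaches a model. The sketch below illustrates the idea in Python; the field names (`tiv`, `latitude`, `longitude`) and thresholds are hypothetical illustrations, not any particular cat model's schema.

```python
# A minimal sketch of pre-model data-quality screening on exposure records.
# Field names and rules are illustrative assumptions, not a real schema.

REQUIRED_FIELDS = {"location_id", "tiv", "latitude", "longitude"}

def validate_exposure(record):
    """Return a list of data-quality issues found in one exposure record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
        return issues  # cannot run further checks without these fields
    # Total insured value must be a positive number
    if not isinstance(record["tiv"], (int, float)) or record["tiv"] <= 0:
        issues.append("tiv must be a positive number")
    # Coordinates must fall within valid geographic ranges
    if not -90 <= record["latitude"] <= 90:
        issues.append("latitude out of range")
    if not -180 <= record["longitude"] <= 180:
        issues.append("longitude out of range")
    return issues

def screen_portfolio(records):
    """Split a portfolio into clean records and flagged (record, issues) pairs."""
    clean, flagged = [], []
    for record in records:
        problems = validate_exposure(record)
        if problems:
            flagged.append((record, problems))
        else:
            clean.append(record)
    return clean, flagged
```

The point is not the specific rules but the principle: checks like these are cheap, repeatable, and run at the boundary between the business user's data and the model, which is exactly where the two-tier approach says governance is needed.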
To what extent can the world’s new treasure trove of data make life precisely, reliably measurable? That puzzle is a tantalising one for an industry as ripe to take advantage of the 21st century’s deluge of raw information as (re)insurance. This is a business that should thrive on data. When you are weighing complex risks and pricing the dangers associated with dynamic, unpredictable phenomena such as hurricanes, the ability to measure your exposure to the nth degree is a precious thing.