Risk Modelling Frameworks
We develop risk modelling frameworks using free and open source software, with a focus on:
Multi-peril, multi-region, multi-year: To support portfolio aggregation and long-term planning, models need to handle all of the risks a company is exposed to and be capable of simulation over different time horizons.
Parallel computing with distributed storage: For many years computing power doubled roughly every 18 months, in accordance with Moore's Law, but chip technology is now reaching hard physical limits. Today, the way to get more computing power is to use more processors in parallel rather than to wait for faster processors that may never arrive. Parallel computing is technically challenging, particularly where significant amounts of data need to be processed.
We have the expertise to help clients use parallel compute environments and distributed data storage.
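The split-and-combine pattern behind this can be sketched in a few lines. This is an illustrative example, not client code: the function names (`simulate_chunk`, `parallel_simulate`) and the exponential loss distribution are assumptions, and threads stand in here for the worker processes or cluster nodes that would run against genuinely distributed storage.

```python
# Sketch: split a large simulation into independently seeded chunks,
# run the chunks in parallel, then reduce the partial results.
from concurrent.futures import ThreadPoolExecutor
import random

def simulate_chunk(seed, n_trials):
    """Simulate one chunk of annual losses with its own seed."""
    rng = random.Random(seed)  # per-chunk RNG: chunks can run anywhere
    return [rng.expovariate(1 / 1000.0) for _ in range(n_trials)]

def parallel_simulate(n_workers=4, trials_per_worker=10_000):
    # Map step: each worker simulates its own chunk in parallel.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        chunks = pool.map(simulate_chunk, range(n_workers),
                          [trials_per_worker] * n_workers)
    # Reduce step: concatenate partial results into one loss sample.
    return [x for chunk in chunks for x in chunk]

losses = parallel_simulate()
mean_loss = sum(losses) / len(losses)
```

Because every chunk carries its own seed, the combined sample is identical no matter how the work is distributed, which is exactly the property needed when scaling a model out across many machines.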
Portfolio Aggregation
We support the design and build of portfolio aggregation tools.
Companies need to understand how the various risks they are exposed to aggregate. New risks should be evaluated both on a standalone basis and on how they impact the overall risk profile. We can help companies build comprehensive risk aggregation frameworks which use a wide variety of underlying models to assess individual risks whilst still being able to aggregate those risks consistently.
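The core idea can be illustrated with a minimal sketch: each risk is simulated by its own model, but all models report losses on a common set of simulation years, so the risks can be summed year by year. The model functions, distributions, and the 99% VaR choice below are illustrative assumptions, not a description of any particular client framework.

```python
# Sketch: heterogeneous per-risk models aggregated on shared
# simulation years to produce a portfolio loss distribution.
import random

N_YEARS = 10_000  # common simulation years shared by all models

def nat_cat_model(rng):
    # Heavy-tailed standalone model for natural catastrophe losses.
    return [rng.paretovariate(2.0) for _ in range(N_YEARS)]

def cyber_model(rng):
    # A different model family for cyber losses on the same years.
    return [rng.lognormvariate(0.0, 1.0) for _ in range(N_YEARS)]

def aggregate(per_risk_losses):
    """Sum each year's losses across risks to get the portfolio view."""
    return [sum(year) for year in zip(*per_risk_losses)]

def value_at_risk(losses, level=0.99):
    """Empirical VaR: the loss exceeded in (1 - level) of years."""
    return sorted(losses)[int(level * len(losses)) - 1]

rng = random.Random(42)
risks = [nat_cat_model(rng), cyber_model(rng)]
portfolio = aggregate(risks)
var_99 = value_at_risk(portfolio)
```

The same `aggregate` step works whatever the underlying models are, which is the point: individual risks can be assessed by whichever model suits them best, provided they emit losses on the shared year set.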
Model Development
We have expertise in a wide range of risks, with a particular emphasis on:
Natural catastrophe models: Modelling economic loss using open data, open science and free and open software
Cyber risk models: Evaluating the relative risk of proprietary and free software, modelling correlations with other non-cat risks, and modelling legal, geo-political and technological aspects
Cross Platform Development
Many of the Bermuda-based reinsurers use entirely proprietary software stacks and have little free and open source experience. As a result, they are struggling to compete with other finance companies that are able to work with the open source community.
Advantages of free and open source software include:
- freedom to modify to your own needs;
- no vendor lock-in;
- lower cost;
- faster integration of new technology;
- development driven by the users, not by a company's desire to sell a new product.
We have 15 years' experience working in mixed Linux/Windows environments and can help clients integrate their proprietary systems with free software systems.
Oasis Loss Modelling Framework
The Oasis Loss Modelling Framework is an open framework for risk modelling funded by a number of insurers and reinsurers. Its goal is to enable researchers, academics and commercial catastrophe modelling companies to make their models available on a standard, open platform.
Key goals are to create transparent modelling systems and reduce the lead time to turn new scientific insights into better risk models.
We can assist clients in using Oasis models in their risk analyses. We can also adapt client models to the Oasis Loss Modelling Framework.
Collaborative Software Development
Over more than 30 years, the free software movement has steadily improved its collaborative development methodologies and tools. Free and open source software is often built by teams spread across the world, and the free software world has developed its own tools and practices to support this development model.
For example, the git version control system was designed and built by Linus Torvalds to help him manage the Linux kernel, one of the world's most complex software projects, with thousands of contributors.
In the free software world, tools are used not because they are mandated by management, but because they have been designed and built by people working in the field who understand the problems they are trying to solve.
We can introduce clients to this world and help them to learn how to work with the open source community.
Data Science
Many academic institutions are moving to open research models, whereby their research is made freely available through the internet. In many areas of science, research inevitably involves data analysis and software, so simply publishing a report is rarely sufficient for other researchers to verify and build on the work. It is important to publish not just the findings, but also the software and data used to arrive at them.
Tools such as the Jupyter notebook enable researchers to do repeatable research and to publish not just their final reports, but all of the supporting data and software. Without such tools, many researchers are in fact unable to reproduce their own results six months later.
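A minimal sketch of what repeatable analysis means in practice: all randomness flows from a single recorded seed, so anyone with the code and data can regenerate the published numbers exactly. `run_analysis` here is an illustrative stand-in for the computation in a real notebook, not an actual research workflow.

```python
# Sketch: a fully seeded analysis whose results can be reproduced
# exactly, months later, from the code alone.
import random

def run_analysis(seed=2024):
    rng = random.Random(seed)  # single source of all randomness
    data = [rng.gauss(100.0, 15.0) for _ in range(1_000)]
    return {
        "seed": seed,  # record the seed alongside the results
        "mean": sum(data) / len(data),
        "max": max(data),
    }

first = run_analysis()
second = run_analysis()   # re-run six months later: same code, same seed
assert first == second    # the results reproduce exactly
```

Publishing the seed and environment details alongside the findings is what turns a one-off report into research that others, and your future self, can actually verify.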
We can advise clients on how they can work to adopt these tools and improve their own research and development processes.