People or Machine: The Emerging Gap Between Fully Automated Risk Selection and Human Intervention in Cyber Insurance

John Merchant is Managing Director, Cyber and Technology Practice – North America for Optio Insurance Services, a global MGA specializing in over 13 classes of business.  He is responsible for overseeing business development, digital distribution strategy, product development and vendor management.  Prior to joining Optio, John was the Insurance Practice Advisory Leader at Cyence Risk Analytics and was responsible for the planning and execution of strategic customer initiatives.  Prior to joining Cyence, John spent over 10 years in the insurance industry, primarily focused on cyber product development, strategy and underwriting.  John has held positions at Nationwide, AIG and The Hartford.  Before joining the insurance industry, he held various strategic sales roles in the technology sector.  John holds a BA in Political Science from the University of Connecticut.

“I underwrite from my gut.”  A statement no senior executive wants to hear from their lead cyber underwriter, or any underwriter for that matter.  I can attest to hearing that statement, or something similar, on occasion.  Although an extreme example, it’s hard to deny that a certain degree of instinct-informed reasoning goes into many cyber underwriting decisions, as it should.  To what degree empirical data, or gut instinct, influences underwriting decisions could be the difference between profitability and being in the red.  Sometimes those decisions, although appearing to be gut instinct, are based on sound experience and are a crucial part of the decision-making process.

With any extreme approach, an opposite must exist.  In the case of cyber underwriting, a purely data-driven approach without any human underwriter involved doesn’t exist just yet (excluding cyber endorsements and other slot-rated products for very small entities).  However, the industry has begun to more fully embrace a risk selection approach heavily influenced by externally collected and curated data.  Investors have, too.  Five years ago, there were no insurtech cyber MGAs.  Now there are more than half a dozen, and counting, and funding appears plentiful.

For this blog post I’d like to pose a question: which approach is better?  I’d argue a cogent case can be made for both, especially given the current market conditions.  There’s no denying that data and analytics are here to stay.  Predictive modeling and analytics have changed the landscape of several industries.  For me, the industry most profoundly changed is professional baseball.  Other industries have been changed materially as well, but baseball is more interesting to note than financial services.  Apologies to all the quants out there.

The arc undertaken by baseball is reminiscent of the arc being taken by the cyber insurance market.  A very old industry, set in its ways, being unexpectedly and somewhat begrudgingly forced to adopt the use of data to make decisions.

This shift from a game dominated by gut decisions, feel and how a player looked to one dominated by predictive analytics was seismic.  Who knew game decisions and multimillion-dollar contracts would be made by acting almost solely on OPS, ISO, WHIP or wOBA?  All acronyms I’d be happy to debate on the next sabermetrics Zoom.  However, have these analytics proven to be superior, or is the human element (meaning decision making by “informed instinct”) a necessary or even superior method in some cases?

Below are mini, non-lawyerly “cases” for a machine approach to underwriting vs. a human approach.  I purposely avoided making cases against each as they are implied.

The Case for Machines

We’re already there.  Auto, home, BOPs…all underwritten by algorithms gathering internal and external data to predict profitability at the policy level.  These are standardized products of course, but human touch is no longer part of the process.  This allows for lower acquisition costs and the ability to grow at scale.  Cyber, however, is anything but commoditized.  I could argue the industry has tried a little too hard to commoditize a highly complex product, but that’s fodder for another blog.  That thought aside, the use of externally collected and curated data to drive underwriting decisions has exploded.

Data can be collected and analyzed at scale, providing threat intelligence that is simply impossible to ascertain through the traditional underwriting process.  There is no way, at least from what I’ve seen, that an underwriter can quickly find out the number of publicly facing IP addresses, whether the Sender Policy Framework (SPF) is properly configured, whether traffic to and from a website is encrypted and if that encryption is strong or weak, and finally, whether any RDP ports are externally visible and therefore prone to exploitation by cyber extortionists.  Underwriters could ask on an application, but good luck with that approach.
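To make one of those checks concrete, here is a minimal sketch, assuming Python and using only the standard library, of the kind of probe a scanning platform automates: testing whether an RDP port is externally reachable.  The hostname is a placeholder, real platforms do far more than this, and you should only probe systems you are authorized to scan.

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# RDP listens on TCP 3389; an externally reachable RDP port is a common
# entry point for extortion attacks, which is why scanners flag it.
# is_port_reachable("scanme.example.com", 3389)
```

The point isn’t the dozen lines of code; it’s that a platform runs checks like this continuously, across an entire book of business, without ever sending an application question.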

Our machine friends can also provide portfolio-level intelligence to identify potential points of accumulation.  With these points of accumulation identified, they can assist with building disaster scenarios, which until recently were “built” by adding one’s exposed limits together.  I’m sure many of my colleagues on the underwriting side recall the days when accumulation management meant asking what cloud providers a company used and then physically adding them to an Excel spreadsheet.  A tedious task, I assure you.
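For what it’s worth, that spreadsheet exercise boils down to a group-and-sum.  A toy sketch in Python, with insureds, providers and limits invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical portfolio: (insured, cloud provider, exposed limit in USD)
policies = [
    ("Acme Co",   "AWS",   5_000_000),
    ("Beta Inc",  "Azure", 2_000_000),
    ("Gamma LLC", "AWS",   3_000_000),
]

# Aggregate exposed limits per provider — each provider is a potential
# point of accumulation if it suffers a widespread outage or breach.
accumulation = defaultdict(int)
for _insured, provider, limit in policies:
    accumulation[provider] += limit

for provider, total in sorted(accumulation.items()):
    print(f"{provider}: ${total:,}")
```

A modern platform does this across thousands of policies and many shared dependencies (cloud, DNS, payment processors) rather than one column in a spreadsheet.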

Are these models perfect?  No, far from it.  They’re all wrong to some degree, but they’re certainly better than knowing nothing.  To grossly oversimplify, I’d rather be told there’s an 80% chance of rain this weekend and prepare appropriately than plan a beach day on what turns out to be a washout.

To conclude the case for the machines, this data can be continually collected, and models trained to get better and more accurate.  Economies of scale kick in and this approach becomes less and less expensive, in theory.

The Case for People

You can’t have a beer with a machine.  Twenty years ago, that may have been a valid case.  Kidding aside (sort of), models and the data and analytics that power them are only as good as their creators, who are people.  Not underwriters, but people, nonetheless.  Where underwriters come in is in the interpretation of the data.  As pointed out, all models are wrong to some degree.  That degree is ascertained by highly trained underwriters with years of experience in the market.  Plus, the human brain is still the best decisioning engine ever created.

There are also several risk factors which can’t be gleaned from an external scan or continuous monitoring of a network.  These include how (or if) companies train their employees, the experience level of their management and security team, the health of the industry class, the overall adoption of a security mindset, and their plans in the event of a cyber-attack.  None of these data points can be gathered externally, but any claims rep or breach coach will tell you they’re directly correlated with losses.

To conclude the case for people, this is still a relationship business.  Something I hope will not go away.  Trust is arguably the most important asset an underwriter can have and is only gained through long-term relationship building.

The Case for Both

It doesn’t take a rocket scientist (or an algorithm) to predict where I was going with this blog post.  However, it’s not quite that simple.  It’s too easy to say that underwriters should simply merge their approach with that of a data only approach, attaining some level of cyber zen.  This is where the real work comes in, and where I have some direct experience.

Having worked at a predictive analytics start-up prior to joining Optio, I saw first-hand how powerful these models could be.  However, two common themes among my clients also struck me.

One was how much work the customer, in this case insurers and reinsurers, had to put in to glean real value from the model.  There’s “heavy lifting” to be done on the part of the customer to make these solutions work.  Often this additional work wasn’t anticipated by the customer, creating agitation.

Second, and more important in my opinion, was the analytics telling an underwriter something they didn’t want to hear.  Prior to 2019, incurred loss ratios were in the 40% range, with actuals much lower.  Unexpectedly seeing a profitable portfolio show a modeled loss ratio 20-30 points higher instills almost immediate distrust in the analytics.  Competing models may also show materially different loss figures, adding to this distrust.  On an individual risk selection level, for an underwriter to be told that a company they’ve been insuring for several years, without any notices, is suddenly a bad risk raises questions, as it should.  It’s an underwriter’s job to question.

These issues can lead underwriters to use the good news and filter out the bad, effectively gaming the system.  On the flip side, going solely by the numbers may have led to no growth and, not to be crass, but no bonus or job.  Last time I checked those were important.

The case for both is a challenge to the cyber underwriting industry, both the old guard and new arrivals.  On one side, a challenge to recognize that machine learning and AI can be a tremendous competitive advantage, but that utilizing them is a major undertaking and some level of trust is required.  On the other side, a challenge to recognize that purely data-driven underwriting with no, or very limited, underwriter oversight can lead to losses at scale and increased distrust of technologies that are vital to a profitable cyber insurance market.

Thank you, and I welcome your feedback and opinions.

Sprinklered Buildings Still Burn

Kurtis Suhs
Founder and Managing Director, Cyber Special Ops, LLC

Mr. Suhs serves as the Founder and Managing Director of Cyber Special Ops, LLC, a cyber risk company that provides its clients with Concierge Cyber®, a revolutionary new delivery solution for cyber risk services modeled on concierge medicine.

Many insurance professionals have compared cyber insurance to employment practices liability (EPL) insurance, which took decades for organizations to adopt; however, that is where the comparison ends. Cyber insurance is more analogous to catastrophic commercial property insurance, in which state-sponsored actors and sophisticated crime syndicates target your building and seek to burn it down 24/7/365.

According to FM Global, the three main reasons sprinklered buildings burn are 1) design deficiencies, 2) system impairments before a fire, and 3) system impairments during a fire.  Let’s evaluate how each of these causes compares with cyber loss.

Design Deficiency

Sometimes, due to design deficiency or system impairment, an automatic sprinkler system fails to suppress a fire sufficiently, and thus a building burns despite the system.

Water supply / Incident response
  • Is the water source a public water supply or a fire pond?
  • Is the data breach team an external third-party service provider or an internal legal and infosec team?

System design / Network design
  • Is the system design adequate?  What is the system trying to protect?
  • Is the network architecture adequate?  What is the system trying to protect?

Changes in occupancy / Changes in electronic assets

The building (organization) was devastated by fire (a cyberattack). The cause of the devastation was multifaceted. The water supply (incident response plan) was limited because a single connection from the public water main (a few data breach firms) supplied the entire sprinkler system (cyber insurance market), and the water flow (insured’s cyber insurance coverage and limit) to the automatic sprinkler system (network defense) was only marginally adequate for the task. The sprinkler system (network defense) was designed for a facility (organization) that processed a specific amount and type of paper (electronic assets). The plant (organization) was changed to process a new and greater amount of hazardous coated paper (sensitive information). This change was made without reevaluating the sprinkler design (network design) or water supply (incident response plan).

The system (network) simply couldn’t generate enough water (cyber insurance) to suppress this type of fire (cyberattack), because it wasn’t designed for this use and didn’t have enough water (cyber insurance coverage and limit) for this type of fire (cyberattack). Furthermore, the local fire department (cyber insurer) wasn’t aware of the change in the amount and type of paper (the exposure basis) and thus didn’t know it was responding to a hazardous chemical fire (state-sponsored actor), which requires a very different firefighting response (incident response) than a traditional uncoated paper fire (simple malware).

System Impairments Before a Fire

A fire that would normally be adequately controlled or suppressed completely can instead rage out of control and destroy the building.

There are three types of impairment that can occur before a fire (cyberattack):

  • renovation of building (network)
  • inadequate maintenance of property (network)
  • arson (state-sponsored actors and sophisticated crime syndicates).

Deliberate action by an arsonist (state-sponsored actor or sophisticated crime syndicate) can impair or disable an automatic sprinkler system (computer network) so that the arsonist’s (threat actor’s) fire-setting actions (cyberattack) will cause damage.

Arsonists (cyber attackers) learn how sprinkler systems (computer networks) work and find ways to defeat or overtax them. Limited only by their imagination, for example, they may close valves (software applications) or attempt to overtax the system (all computer servers) by setting multiple fires (cyberattacks) designed to circumvent, damage or destroy the building (organization).

System Impairments During a Fire

System impairments that occur during a fire are often the result of human actions that cause a protection breakdown.

The most common system impairment that can occur during a fire (cyberattack) is premature closure of a sprinkler system’s control valve (network defenses).

Another common system impairment is the inadequate monitoring of the sprinkler control valve (network defenses).

Call to Action

For most businesses, the five most important categories of risk are tied to 1) theft of intellectual property, 2) business interruption, 3) theft or corruption of personally identifiable information and protected health information, 4) theft of credit and debit card data, and 5) diminished cash flow. But which of these is a priority, to what degree, and for which organization assets?

If we really want to make cybersecurity better, we first need to ask what we need to protect within the organization. The answer is highly dependent on the business, the internal network structure, and the other security controls that are in place, premised upon the zero-trust information security model.

Organizations will never outpace the sophisticated cyber threat actor. Remember, the cyber adversary only has to be right once while your organization has to be right 100% of the time.

Cybersecurity Litigation Review

This blog post was submitted in dialogue with the recent PLUS webinar “Cyber Risk is a D&O Risk.” You can view the recording of this webinar and past free webinars on the PLUS website here.

If you have blog content you’d be interested in submitting, please reach out to Katie Campbell at

John Cheffers was hired as Director of Research for Watchdog Research in 2019 and creates content that is featured on the company blog.  He obtained his J.D. from Ave Maria School of Law in Naples, Florida, in 2019, where he was a member of the Law Review and graduated magna cum laude.  Prior to that he worked for Audit Analytics as a Research Analyst.

Cybersecurity has gone from a niche concern to a hot topic in the D&O insurance world.  On September 23rd, PLUS hosted a webinar on how companies can strategically handle cybersecurity concerns.  The speakers offered tremendous perspective on this dynamic and growing area, and we encourage everyone to listen to their fascinating conversation.

We are an independent research provider that uses an extensive database of public information to create easy-to-use reports for over 4,500 publicly traded companies.  Since we track cybersecurity incidents and all material litigation for public companies, we thought we could use this as an opportunity to provide a little color to the important discussions concerning cybersecurity.


We began by looking at incidents that occurred at companies listed on the NYSE and Nasdaq over the past ten years, and the growth rate of cybersecurity incidents is alarming: 

*The graphs and tables in this post were created by Joseph Burke, PhD, and derived from the Audit Analytics database.

In 2010, only 0.1% of companies reported a cybersecurity incident. In 2019, 2.2% of companies did. The growth of cybersecurity incidents over the past five years has been incredible, and it is not clear when it will slow down.
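As a back-of-the-envelope check on those figures (my arithmetic, not part of the underlying research), growing from 0.1% of companies in 2010 to 2.2% in 2019 implies a compound annual growth rate of roughly 40% over the nine intervening years:

```python
# Share of listed companies reporting an incident, 2010 vs. 2019
start, end, years = 0.001, 0.022, 9
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.0%}")  # → roughly 41% per year
```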

Another interesting facet is that the risk of a cybersecurity incident is much higher at a large company than it would be at a small company. Attacks on large companies are driving much of the growth in these numbers.

Cybersecurity Securities Class Actions

A cyber breach at a company creates all sorts of problems, including litigation. We identified all the securities class action suits brought over cybersecurity issues and calculated the likelihood of being named in one of those suits. Unsurprisingly, the last ten years have shown significant growth in the risk of being named in a cybersecurity-related lawsuit.

It is important to note that these percentages are for all companies.  Large-cap companies have a significantly higher probability than is represented in the graph because they are both more likely to be the victim of a cybersecurity incident and generally more likely to have a securities class action suit filed against them.

Cybersecurity as a Leading and Covariate Indicator

Two of our researchers, Joseph Burke, PhD, and Joseph Yarbrough, PhD, wrote a research paper identifying which flags from our reports were associated with an increased risk of securities class action litigation for 2014-2018. Companies with a cybersecurity incident were almost three times as likely to be named in a securities class action lawsuit the year that the incident occurred.

Additionally, cybersecurity incidents were one of the six leading indicators of securities class action suits.  An event is considered a leading indicator of litigation if the occurrence of that event is associated with an increased risk of litigation for the following year. 


The chance of being involved in a cybersecurity securities class action lawsuit is still relatively low, but it is increasing rapidly. Additionally, the risk profile is far higher for large companies, which are more likely to be a victim of a cybersecurity incident and more likely to get named in a securities class action lawsuit. 

If company boards wish to prevent having their company victimized twice (by hackers and by lawyers), then they need to make wise and strategic decisions to confront this growing threat.