Generative AI in Focus, Part II: Cybersecurity, Civil Unrest & Social Inflation Concerns

By Eleanor Bragg and Greg Scoblete, CPCU

Key Takeaways

  • While it’s still early days, generative AI is already reportedly being used for hacking and cybercrime—a nefarious use-case which may grow even more concerning as the technology improves.
  • The analytical power of generative AI may also influence social inflation trends by lowering the barrier to entry for litigation or by helping plaintiffs analyze cases to select the most promising ones.
  • Generative AI’s ability to create content may also enable bad actors to aim a firehose of misleading-yet-realistic-looking disinformation into online spaces, potentially causing financial panics or civil unrest.

Generative AI’s (GAI) potential to influence a range of liability risk exposures (addressed in part one of this series) may arise from flaws in its training data or outputs. But what happens when GAI works as intended but is used by bad actors, like cyber criminals or provocateurs? What happens if GAI tools grow in reach and sophistication enough to influence trends, like social inflation and class action litigation, that could have a significant impact on insurers?

Unfortunately, for at least some of these concerns, the questions are no longer theoretical.

Empowered Cybercrime

While the other concerns discussed below are still mostly theoretical, many of the cyber-related issues we raised in January have already come to pass. Namely, GAI tools are being used to scam people and companies and spread malware.

Some of this is being accomplished through the creation of deep fakes—i.e., artificial representations of real people. According to one cybersecurity report, the use of deep fakes in ransomware attacks increased by 13 percent in 2022.

One cybersecurity firm noted a sharp increase in the number of deep fake videos being uploaded to a popular video-sharing platform that contain links to malware. In March, the Federal Trade Commission (FTC) warned the public that scammers were cloning people’s voices from video clips collected online and using that audio to target people for phone scams. (GAI tools may need as little as three seconds of audio before they can convincingly mimic a person’s voice in natural conversation.) In June, the FBI issued a warning that deep fakes were being used to execute “sextortion” campaigns.

The FTC also unveiled a veritable laundry list of other scams that GAI tools are helping to accelerate, including phishing emails, fake websites, fake consumer reviews, imposter scams, extortion, and more. These scams could also potentially target insurers—someone could use a GAI tool to generate or doctor images during the claims process in an attempt to defraud their insurer. Chatbots may also be helping hackers craft more compelling prose for phishing emails and texts.

The ability to create photo-realistic but inauthentic images of real people may also compromise biometric security measures that leverage an individual’s physical characteristics.

What’s more, GAI coding tools are reportedly being used not simply to spread malware, but to build it—making it easier for anyone with ill intent, regardless of software development experience, to arm themselves with increasingly sophisticated cyber weapons.

Social Inflation

Insurers may be familiar with the trend of “social inflation” over the past several years, in which increased litigation and larger jury awards may drive up property/casualty insurance claims costs. GAI could potentially influence social inflation in several ways:

  • To the extent that these tools claim to lower the barrier to entry to litigation, they could potentially facilitate an uptick in litigation generally. One GAI chatbot that dispenses legal advice, for example, is marketed as a means to “fight corporations and sue anyone at the press of a button.”  
  • Because these tools may be able to analyze past case law, they could help determine which suits are more likely to end in a successful settlement or verdict, potentially boosting plaintiffs’ success rates. Investors backing litigation funding may similarly use AI chatbots to both solicit and analyze prospective cases (an AI tool released in India, for example, appears to do something along those lines).
  • GAI tools could also be used to strengthen cases, as they may be able to help locate, compile, and analyze evidence for litigation. For example, in a lawsuit alleging a pattern of workplace discrimination, GAI could possibly assist in identifying and compiling evidence of a discriminatory or hostile environment from large datasets such as internal company communications, social media, or public message boards.
  • These tools could also potentially assist plaintiffs’ lawyers and/or plaintiffs themselves to locate and analyze other claims that have already been filed. This use case could potentially speed up the process of forming class actions, as law firms could more quickly analyze large amounts of information about individual claims and identify overlaps. It might also make it easier for a plaintiff to find and join existing multi-district litigation (MDL) that aligns with their own case.

Civil Unrest

Targeted misinformation and incitement on social media have helped spur violent incidents of unrest in the not-so-distant past, using comparatively cruder tools. Given GAI’s ability to create synthetic media that’s indistinguishable from the real thing, there is the possibility that these tools could be used to foment civil unrest across the U.S. Note, too, that some social media platforms have reportedly been laying off employees tasked with policing these platforms for misinformation, potentially making them even more vulnerable to abuse.

Public and private groups, including some of the developers of GAI themselves, are working on tools that could help mitigate the risk of AI-generated misinformation. Academic researchers and tech companies have reportedly experimented with educational programs that could inoculate school students or the general public against misinformation. Even before the recent attention on GAI, large tech companies and small startups alike had been working on tools that could identify and flag or remove misinformation—sometimes using AI to do so—but the effectiveness and reach of these tools and interventions appear to be limited.

Financial Panic and Contagion

Internet access and social media may have already enabled financial panic to spread faster through public forums. For example, some analysts and academics have credited social media with speeding up the recent collapse of Silicon Valley Bank. GAI could accelerate this existing trend, since it enables users to quickly churn out content that looks genuine and authoritative but may contain misinformation or simply amplify a certain narrative. As one hypothetical example, a short seller could use GAI to more quickly generate realistic-looking analyst reports that spur panic over a given company or set of companies.

As is the case with many GAI applications, AI tools may also be used to mitigate the risks that they create. In the case of financial panics, companies and financial institutions themselves may be able to use GAI to counter misinformation or to quickly amplify their own messages to slow the spread of the panic. Indeed, some AI-based tech startups already exist that seek to help protect corporate clients from online misinformation and social media pile-ons. However, as mentioned above, current tools to detect misinformation appear to be limited, and combating fast-moving, negative narratives online seems to be highly difficult.

Lastly, introducing GAI tools into a company’s operations may carry risks of its own, even if it leads to success in countering or slowing financial panics. The possible risks covered in our first article about GAI’s potential flaws, such as inaccurate information from GAI “hallucinations,” or copyright infringement in GAI-generated content, could continue to present challenges for reputable companies or institutions that may seek to counteract GAI-accelerated risks by launching their own AI tools.


All references accessed June 23, 2023.

“Malicious deepfakes used in attacks up 13% from last year, VMware finds,” The Register.

“Threat Actors Abuse AI-Generated YouTube Videos to Spread Stealer Malware,” CloudSEK.

“Scammers use AI to enhance their family emergency schemes,” FTC.

“New Microsoft AI Can Clone Your Voice From Three Seconds of Audio,” MSN.

“Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes,” FBI.

“Generative AI raises questions about biometric security,” Biometric Update.


“The Implications of ChatGPT for Legal Services and Society,” Harvard Law School.

“Revolutionary Litigation Funding Startup 'FIGHTRIGHT Technologies' To Transform India's Legal Landscape With AI/ML Innovations,” Outlook.

“Social media giant layoffs signal opportunity for online misinformation, bad faith attacks,” Courthouse News Service.

“Rohingya seek reparations from Facebook for its role in massacre,” NBC.

“New AI classifier for indicating AI-written text,” OpenAI.

“Misinformation Epidemic Raises Stakes for Solutions,” DANA.

“Can AI Stop People From Believing Fake News?,” Technology Review.

“‘The first Twitter-fuelled bank run’: how social media compounded SVB’s collapse,” The Guardian.


“Controlling the spread of misinformation,” APA.

“Pause Giant AI Experiments: An Open Letter,” Future of Life.




Emerging Issues Weekly Digest

Verisk’s Emerging Issues is your source for insights on the evolving risks and opportunities facing insurers and risk managers. Each week, we deliver vital market intelligence that can help inform product development and strategic planning.