For insurance providers, the integrity of data flowing in from healthcare providers and other sources is critical for training reliable automated underwriting, claims processing, and risk modelling systems. However, if upstream healthcare AI systems are compromised by data poisoning attacks, that corruption could cascade through the entire AI pipeline, undermining insurance models in turn.

Data poisoning, where adversaries subtly manipulate training datasets to induce vulnerabilities and errors in machine learning systems, has emerged as a critical threat as AI becomes more widespread. By strategically injecting mislabelled examples, corrupted inputs, or carefully crafted training samples, attackers can cause AI models to internalize discriminatory biases, systematic blind spots, or “backdoor” triggers that produce unexpected behaviour.
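
To make the mechanism concrete, the minimal sketch below (a synthetic dataset and scikit-learn, purely for illustration) shows how flipping even a small fraction of training labels can measurably degrade a downstream classifier. It models only the crudest form of poisoning, not a realistic adversary or a clinical dataset.

```python
# Illustrative label-flipping poisoning sketch (synthetic data, scikit-learn).
# It only demonstrates that a small fraction of corrupted labels can
# degrade a downstream classifier; it is not a model of a real attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a randomly chosen `fraction` of training examples."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

for fraction in (0.0, 0.05, 0.20):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction={fraction:.0%}  test accuracy={acc:.3f}")
```

Real poisoning attacks are typically far subtler than random label flipping, targeting specific subpopulations or embedding trigger patterns, which is precisely what makes them hard to detect.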

For healthcare AI deployments ingesting sensitive patient data, diagnostic images, and electronic health records, the consequences could be dire: privacy breaches, incorrect diagnoses, or dangerous treatment plans. Poisoned clinical AI systems may overlook high-risk conditions, mishandle protected health information, or make faulty judgments that put patient safety at risk (Finlayson et al., 2019).

For insurance companies whose underwriting, claims, and actuarial models rely heavily on data from healthcare providers and clinical decision support tools, data poisoning attacks could have catastrophic knock-on effects. If the training data is tainted at the source, insurance AI will learn flawed correlations, misestimate key risks, and propagate those errors and biases through its own systems.

Consider one hypothetical scenario: a compromised medical-imaging AI systematically understates certain conditions, skewing downstream risk calculations. Corrupted health record data propagating through the pipeline could induce highly inaccurate underwriting predictions, and maliciously manipulated inputs may mislead fraud detection engines. The potential consequences span underpriced policies, undetected fraud, disadvantaged demographic groups, and reserves miscalculated from faulty actuarial models.

It is a concerning picture of poisoned data pipelines, in which the often opaque nature of modern machine learning allows corruption to flow from tainted healthcare AI systems into the core automated decision systems of their insurance industry partners. The effects could be compounded if insurance providers’ own AI has been compromised and is ingesting poisoned data.

Defending against such attacks will require robust data security and provenance tracking at each stage of the AI data supply chain. Healthcare providers must lock down access to sensitive training data, implement stringent filtering and integrity checks, and potentially invest in privacy-preserving techniques like differential privacy to de-risk their clinical AI pipelines (Finlayson et al., 2019). Insurance companies in turn will need advanced data validation, fingerprinting, and adversarial checks to identify potential corruption propagating into their workflows.
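
As a rough illustration of what such checks might look like at the ingestion boundary, the sketch below verifies incoming data files against a manifest of trusted SHA-256 hashes and applies a crude distributional drift check. The file names, threshold, and quarantine step are hypothetical; a production pipeline would layer signed manifests, schema validation, and far more sophisticated anomaly detection on top.

```python
# Minimal sketch of ingestion-time integrity checks (hypothetical paths and
# thresholds). Shown only to illustrate provenance hashing plus a basic
# drift alert, not a complete defence against poisoning.
import hashlib
import json
from pathlib import Path

import numpy as np

def sha256_of(path: Path) -> str:
    """Hash a data file so it can be compared against a trusted manifest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the names of files whose current hash no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [name for name, expected in manifest.items()
            if sha256_of(Path(name)) != expected]

def drift_alert(reference: np.ndarray, incoming: np.ndarray,
                z_threshold: float = 4.0) -> bool:
    """Crude check: flag incoming feature means far outside the reference distribution."""
    ref_mean = reference.mean(axis=0)
    ref_std = reference.std(axis=0) + 1e-9
    z_scores = np.abs(incoming.mean(axis=0) - ref_mean) / ref_std
    return bool((z_scores > z_threshold).any())

# Example usage (hypothetical files and handler):
# tampered = verify_manifest(Path("clinical_feed_manifest.json"))
# if tampered or drift_alert(reference_batch, incoming_batch):
#     quarantine_and_review()
```

Hash-based provenance catches tampering with data at rest or in transit, while statistical checks target corruption introduced before the data ever reaches the pipeline; neither alone is sufficient, which is why defence in depth matters here.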

But given the highly interconnected nature of modern AI systems, with models deployed across complex supply chains and data exchanges, security is only as strong as the weakest link. Maintaining integrity across these tightly coupled systems will likely require cross-industry collaboration, common standards, and architectures designed from the ground up for robust AI data supply chains.

The risks are clear – corrupted AI pipelines stemming from healthcare data poisoning attacks could fundamentally undermine the reliability and trustworthiness of critical insurance decision systems. Raising awareness and driving coordinated defences against this threat must be an urgent priority for both the healthcare and insurance sectors as they embrace AI-powered digital transformation. The consequences of such poisoned data pipelines are simply too severe to ignore.

In response, governments are starting to recognize the need for regulation to ensure robust data governance and model security practices. The European Union has taken the lead with its new AI Act (passed on March 13th, 2024), which establishes horizontal requirements for high-risk AI systems, including data governance and auditing, risk management, testing, human oversight, and transparency, aimed at bolstering their integrity and trustworthiness.

While a positive first step, the EU rules remain somewhat high-level, and their effectiveness will depend on how comprehensively the data governance and risk management obligations are implemented in practice. Elsewhere, jurisdictions like the United States, United Kingdom, Canada and China have issued more limited AI principles or sector-specific rules touching on algorithm auditing and responsible AI development. But the EU is currently the only major economy with binding, cross-industry legislation directly tackling challenges like data poisoning head-on. As AI deployments become increasingly critical and interconnected, robust standards enforced through regulation will likely be necessary to maintain end-to-end integrity and security across AI supply chains.

Reference:
Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., & Kohane, I. S. (2019). Adversarial attacks on medical machine learning. Science, 363(6433), 1287–1289.