The Open Door: AI, Legacy Systems, and the Cybersecurity Gap That Cannot Be Closed
C-Suite Risky Business
In the spring of 2021, a ransomware group called DarkSide broke into Colonial Pipeline’s network through a single compromised password on a legacy VPN account that the company hadn’t known was still active. Within hours, 5,500 miles of fuel pipeline serving the U.S. East Coast went offline. Gas stations ran dry from Virginia to Florida. The CEO called the White House. Colonial paid $4.4 million in ransom, in Bitcoin, within 24 hours of the attack.
DarkSide didn’t need sophisticated AI to pull this off. They needed one unpatched gap in a sprawling, underdocumented legacy system. Now imagine what they could do with AI.
AI has just handed attackers a capability that changes the game: the ability to scan millions of lines of code, including decades of custom, undocumented legacy code, and identify exploitable vulnerabilities at machine speed. At the same time, the very same AI tools that create this threat are reshaping how companies build products, manage data, and run operations. Companies that don’t deploy them will fall behind. Companies that do will expand their attack surface. There is no path that avoids both risks simultaneously.
What does this mean for senior business executives?
• AI has triggered a step-function increase in cybersecurity risk on three distinct vectors: AI’s ability to discover and exploit software vulnerabilities, the data security risks embedded in AI agents, and the speed at which AI writes code that may not be secure.
• The tension is structural, not solvable. The same AI capabilities that create new vulnerabilities also create competitive value. Companies that try to eliminate AI cybersecurity risk entirely will simply be outcompeted. The question is not whether to accept risk, but which risks to accept, and where.
• Large, established companies face a structural cybersecurity disadvantage relative to startups that investment alone cannot overcome. This is not a temporary gap. It is a permanent feature of competing with decades of accumulated technical debt.
• The right response is triage: identify mission-critical systems and data, concentrate defenses accordingly, and elevate these decisions from the CISO’s office to the boardroom, where they belong.
Three Vectors of AI Cyber Risk
Cybersecurity risk has always been a fact of corporate life. What has changed is the magnitude, the speed, and, most importantly, the structural nature of the threat. AI has not simply made existing attacks easier. It has opened attack surfaces that were previously unmappable, enabled new categories of intrusion, and done so at precisely the moment when companies are most aggressively expanding those surfaces in pursuit of competitive advantage. Three vectors account for most of the new exposure.
1. AI-Powered Vulnerability Discovery
Anthropic recently revealed that its Mythos AI model is extraordinarily effective at identifying and potentially exploiting security flaws in software, having found vulnerabilities in every system it tested. The announcement was striking enough that Anthropic initially restricted Mythos to a handful of companies, including Microsoft and Google, so they could use it defensively: racing to find and fix their own flaws before the model fell into other hands.
That containment has already begun to erode. Unauthorized users claim to have accessed Mythos, and other AI companies and governments have developed comparable capabilities, or will very shortly. The target space is staggeringly large and, for most enterprises, effectively unmappable: decades of custom, undocumented code whose vulnerabilities have never been systematically catalogued, let alone remediated. AI models can scan this code for weaknesses far faster than any security team can respond. The Colonial Pipeline attackers found one forgotten door. AI-powered tools can try every door simultaneously.
2. The AI Agent “Lethal Trifecta”
AI agents (tools that can autonomously research, analyze, and act on user requests) introduce a different and less understood class of risk. Security researcher Simon Willison describes a “lethal trifecta” of agent capabilities that, when combined, create a reliable attack pathway: access to private data, exposure to untrusted content, and the ability to communicate externally. Any agent that combines all three can be manipulated by an attacker into accessing your private data and sending it outside the organization. The attacker need not penetrate your perimeter. They simply need to trick your agent.
The uncomfortable arithmetic: the risk grows in direct proportion to how useful the agent is. Some tech companies have already given AI agents complete access to internal systems, including email. The more data an agent can reach, the more productive it becomes, and the more catastrophic a breach becomes. As agents proliferate and employees grow more skilled in using them, the exposure compounds. The very behavior that makes agents powerful, autonomous action across large, integrated datasets, is precisely what makes them dangerous if compromised.
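The trifecta lends itself to a simple audit rule: an agent is fully exposed only when all three capabilities coincide, so removing any one leg closes the pathway. A minimal sketch of how a security team might audit an agent fleet against that rule, using hypothetical agent names and capability flags (the structure of the check, not any real inventory, is the point):

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Capability flags for one deployed AI agent (hypothetical inventory)."""
    name: str
    reads_private_data: bool         # leg 1: access to private data
    ingests_untrusted_content: bool  # leg 2: exposure to untrusted content
    communicates_externally: bool    # leg 3: ability to send data outside

def has_lethal_trifecta(agent: AgentProfile) -> bool:
    """The reliable attack pathway exists only when all three legs are present."""
    return (agent.reads_private_data
            and agent.ingests_untrusted_content
            and agent.communicates_externally)

# Hypothetical fleet: only the email assistant combines all three legs.
fleet = [
    AgentProfile("email-assistant", True, True, True),
    AgentProfile("internal-search", True, False, False),
    AgentProfile("web-researcher", False, True, True),
]

flagged = [a.name for a in fleet if has_lethal_trifecta(a)]
print(flagged)  # only "email-assistant" is flagged for remediation
```

The audit is deliberately coarse: it does not rank agents by value at risk, only by whether the attack pathway exists at all. Removing any single capability from a flagged agent takes it off the list.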
3. AI-Generated Code and the Velocity Problem
Services like Claude Code are already capable of writing production-quality code at a pace no human team can match, and capabilities are improving rapidly. Companies that insist on purely human-written code will fall behind. But speed and security are in tension. Code written rapidly and at volume is not necessarily secure. The pace of new code creation can, and in many companies already does, outrun the ability of security teams to audit it. The result is an attack surface that expands from the inside, driven not by attackers but by the company’s own development velocity.
The Legacy Disadvantage: A Gap That Cannot Be Closed
The Colonial Pipeline attack is instructive not because it was sophisticated (it wasn’t) but because it illustrated a structural truth about large, established companies: their accumulated technical debt is an attack surface that grows faster than it can be managed, even without AI. DarkSide didn’t breach Colonial’s state-of-the-art systems. They found a forgotten door that had been left unlocked, probably for years, by people who no longer worked there.
Now consider what AI-powered vulnerability scanning does to that equation. A typical legacy enterprise carries tens of millions of lines of custom code, accumulated over three or four decades, written by developers long since departed, documented incompletely if at all. Previously, finding exploitable vulnerabilities in that environment required skilled human attackers investing significant time. AI tools can now scan that entire codebase in hours and surface a prioritized list of weaknesses. The 2024 Change Healthcare breach, which compromised medical records for nearly one in three Americans and has cost UnitedHealth Group an estimated $3.1 billion in its first year of remediation, was enabled by precisely this kind of legacy exposure: a portal that lacked multi-factor authentication. A forgotten door, again.
The critical distinction is between an operational gap and a structural one. An operational gap, such as insufficient investment, inadequate staffing, or poor processes, can be closed with resources and management attention. A structural gap cannot, at least not fully. Startups build on modern cloud infrastructure with security baked in from day one. They have smaller, better-documented codebases with no forgotten VPN accounts, no legacy portals, no decade-old integrations nobody fully understands anymore. Legacy enterprises cannot replicate these conditions regardless of how much they spend. They can remediate known vulnerabilities, modernize specific systems, and build stronger detection and response capabilities. But the underlying reality, a vast, partially unmappable attack surface that AI-powered tools can probe faster than security teams can defend, does not fundamentally change.
The appropriate board response is neither denial nor despair. It is clear-eyed acceptance of the structural reality, followed by deliberate, prioritized risk management.
Which Industries Face the Sharpest Exposure
AI-driven cybersecurity risk is not uniformly distributed. Four sectors face a materially elevated threat profile, each for structural rather than incidental reasons.
• Healthcare and life sciences face the most acute combination of legacy infrastructure, high-value data, and regulatory consequence. Electronic health record systems at many large hospital networks were built on architectures that predate modern security standards by decades. Patient data commands premium prices in criminal markets: a complete medical record is worth multiples of a credit card number, and every successful breach triggers mandatory public notification. Change Healthcare demonstrated that a single compromised third-party system can cascade into industry-wide disruption of extraordinary scale.
• Financial services carry deep legacy exposure in core banking and payments infrastructure. Some large institutions still run COBOL-based mainframes written before most of their current employees were born. The sector also faces the agent data aggregation risk acutely: AI tools that synthesize transaction histories, credit profiles, and customer communications are enormously valuable for fraud detection and personalization, and enormously dangerous if compromised. Post-breach regulatory accountability now routinely reaches above the CISO level.
• Critical infrastructure, including energy, utilities, and transportation, faces a threat that extends beyond data breach into physical consequence. Operational technology systems controlling pipelines, power grids, and water treatment facilities were designed for reliability, not cybersecurity, and many were never intended to be networked at all. The convergence of IT and OT systems, accelerated by AI-powered monitoring tools, is creating new attack pathways into systems whose compromise carries life-safety implications. Colonial Pipeline was, in hindsight, a warning shot.
• Manufacturing and industrial companies are in the early stages of deploying AI agents across supply chain management, quality control, and production optimization, which puts them squarely in the velocity problem described above. Custom ERP and manufacturing execution systems built over decades create substantial legacy exposure, and competitive pressure to move fast means governance typically lags capability by a year or more. Intellectual property, including product designs, process specifications, and supplier relationships, is an underappreciated target that AI-powered attackers can now pursue systematically.
Other sectors, including retail, professional services, and media, face real exposure as well. But executives in these four industries should treat AI cybersecurity as an immediate board-level priority. Not a medium-term one.
Recommendations for Managing the Heightened Risk
The best strategy starts with two honest recognitions: the inherent risk has increased dramatically, and fully eliminating it is neither possible nor desirable, because the same AI capabilities that create vulnerabilities also create competitive value no company can afford to leave on the table. New security measures are nonetheless urgent. The question is where to concentrate them.
Step Zero: Triage
Before addressing any threat vector, every company needs to know what it is actually protecting. Not all systems carry equal risk, and treating them as though they do wastes resources and creates false security. A breach of marketing analytics is a bad day. A breach of customer health records, operational control systems, or core financial infrastructure can be existential. Identify mission-critical systems and data first, apply a materially higher standard of protection to them, and build the rest of your program around that foundation. In the four high-exposure sectors above, this exercise should already be complete.
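In practice, step zero is an inventory exercise: list systems, assign each a criticality tier, and let the tier drive the protection standard mechanically rather than case by case. A minimal sketch, with hypothetical system names, tiers, and control values chosen for illustration:

```python
# Hypothetical triage inventory: the tier, not the individual system,
# determines the protection standard applied.
TIER_CONTROLS = {
    "mission-critical": {"ai_vuln_scan": "weekly",    "agent_access": "deny"},
    "important":        {"ai_vuln_scan": "monthly",   "agent_access": "restricted"},
    "standard":         {"ai_vuln_scan": "quarterly", "agent_access": "allowed"},
}

SYSTEMS = {
    "customer-health-records": "mission-critical",
    "core-payments":           "mission-critical",
    "marketing-analytics":     "standard",
}

def controls_for(system: str) -> dict:
    """Look up the protection standard implied by a system's tier."""
    return TIER_CONTROLS[SYSTEMS[system]]

print(controls_for("customer-health-records")["ai_vuln_scan"])  # weekly
print(controls_for("marketing-analytics")["ai_vuln_scan"])      # quarterly
```

The value of writing the mapping down is that exceptions become visible: any system missing from the inventory, or any mission-critical system running on standard-tier controls, is itself a finding.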
Addressing the Three Vectors
• Security vulnerabilities: Use the best available AI model to test your own systems for weaknesses. This is table stakes now, not an advanced practice. Dedicate people to reviewing the findings and acting on them. Because AI models improve continuously and produce different results each time they run, repeat this process regularly for mission-critical systems, weekly for the highest-risk environments. The logic is simple: use the same tools your attackers are using, before they use them on you.
• AI agent data access: Establish and enforce rules governing what data your AI agents can access and whether they can communicate externally. Try to eliminate at least one leg of the lethal trifecta for every agent your employees use. For new deployments, treat the full combination of private data access, untrusted content exposure, and external communication as a disqualifying configuration unless a specific business case justifies it. Consider denying agents access to your most sensitive data entirely, even at some cost to productivity. That tradeoff is usually worth making.
• AI-generated code: Require security review of AI-generated code before it is deployed to systems that touch sensitive data. Where feasible, use AI models to test new code for vulnerabilities before it goes into production, matching AI-speed generation with AI-speed review. The goal is not to slow development. It is to make sure the attack surface does not expand faster than your security team can track it.
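One way to make the review requirement enforceable rather than aspirational is a deployment gate: AI-generated changes that touch sensitive systems cannot ship without a recorded security review. A minimal sketch, with hypothetical path names and change metadata standing in for whatever the deployment pipeline actually records:

```python
# Hypothetical deployment gate: AI-generated code touching sensitive
# paths must carry a completed security review before release.
SENSITIVE_PATHS = ("payments/", "patient_records/", "auth/")

def may_deploy(change: dict) -> bool:
    """Allow the change unless it is AI-generated AND touches a sensitive path
    without a passed security review."""
    touches_sensitive = any(
        f.startswith(SENSITIVE_PATHS) for f in change["files"]
    )
    if change["ai_generated"] and touches_sensitive:
        return change.get("security_review_passed", False)
    return True

blocked = {"ai_generated": True,
           "files": ["payments/refund.py"],
           "security_review_passed": False}
allowed = {"ai_generated": True,
           "files": ["docs/readme.md"]}

print(may_deploy(blocked), may_deploy(allowed))  # False True
```

The gate intentionally does nothing for low-sensitivity changes; the point of triage is that review capacity is spent where a breach would be existential, not on every commit.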
Elevate This to the Board
The most consequential governance change required is also the simplest to state: these decisions can no longer be delegated to the CISO. The tradeoffs between AI deployment speed and security exposure, between data aggregation and breach risk, between competitive positioning and risk tolerance, involve revenue, reputation, regulatory exposure, and in some industries, physical safety. They belong at the CEO and board level. Companies that continue to treat cybersecurity as an IT problem will make these tradeoffs poorly, either accepting too much risk in the wrong places or surrendering competitive ground by being too cautious in the right ones.
None of these steps will eliminate AI cybersecurity risk, and they will probably not reduce residual risk to where it was eighteen months ago. But failing to act creates the potential for catastrophic exposure. And excessive caution carries its own risk: falling behind competitors who are moving faster.
The Colonial Pipeline CEO had 24 hours to decide whether to pay $4.4 million in Bitcoin to people he’d never meet, to restore systems his team didn’t fully understand, through a vulnerability nobody had known existed. He paid. None of the risks described in this article are hypothetical. They are extensions of dynamics already in motion, now sharply accelerating. The companies that navigate this environment best will not be those that eliminate the risk. They will be those that understand it clearly enough to make the right tradeoffs, and who make them deliberately, before events make the choices for them.