News
Industrial Cyber – December 14, 2025
Industrial cybersecurity is entering a more exposed and strategic phase defined by hard lessons from 2025. Organizations spent the year facing a harsh reality where reactive defenses and siloed IT and OT teams are no longer sufficient against a threat landscape that is rapidly evolving and penetrating further across industrial environments. Evolving cybersecurity strategies must address this new reality by integrating more robust risk management frameworks, strengthening interdepartmental collaboration, and adopting a proactive security posture.
Industrial incident analysts report that adversaries are dwelling in networks longer before being detected and are increasingly exploiting the limited visibility of legacy OT infrastructure. Operational risk assessments consistently identify the same weaknesses: partial asset inventories, poorly managed remote access, and monitoring solutions that simply do not reach deep enough into industrial processes.
Nation-state hackers have stepped up that pressure. Industry intelligence and government-sponsored analysis indicate a growing trend in state-related reconnaissance against energy, manufacturing, water, and transportation. These operations are rarely about immediate disruption. Instead, they concentrate on mapping environments, maintaining persistence, and generating future options for leverage, raising the risk of delayed detection and fragmented responses across industrial operations.
AI, Geopolitics, and the 2026 Threat Landscape
TL;DR: The 2026 cybersecurity landscape is defined by AI-driven attacks and defenses. Expect more data-theft extortion, hypervisor-level targeting, and AI model manipulation, as well as a skills gap that slows response. Professionals must focus on Zero Trust, identity management, and AI-specific security skills to stay ahead.
The F5 breach highlights a core theme: attackers are playing a long game. Meanwhile, extortion activity reached its heaviest quarter on record, confirming that extortion and data theft now drive attacker economics, and that criminal operations scale better than most corporate defenses.
Nation-state actors continue to conduct operations for espionage, disruption, and financial gain. At the same time, cybercrime has become a mature, on-chain economy, and the barrier to entry has plummeted thanks to AI. Attackers are fully leveraging AI to enhance the speed, scope, and effectiveness of their operations, building on use cases observed in 2025.
Defenders are in a race to adapt. Identity has moved to the center of decision-making. Most modern intrusions use valid logins at some point, which is why runtime access now depends on who the user is, which device they hold, and the risk signals surrounding that session. The network perimeter still matters, yet the practical perimeter now travels with users and devices. The net result is a year defined by cybersecurity trends that favor identity, automation, and resilience over the older promise of hardened walls.
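The identity-centric access model described above can be sketched as a policy check that weighs who the user is, the device they hold, and session risk signals together. This is a minimal illustration of the Zero Trust idea; the class, function names, and thresholds are hypothetical, not taken from any specific product.

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    """Signals a hypothetical Zero Trust policy engine evaluates per request."""
    user_authenticated: bool  # valid credentials were presented
    mfa_passed: bool          # a second factor was verified
    device_managed: bool      # the device is enrolled and compliant
    risk_score: float         # 0.0 (benign) .. 1.0 (high risk)

def allow_access(ctx: SessionContext, risk_threshold: float = 0.7) -> bool:
    """Grant access only when identity, device, and risk signals all pass.

    A valid login alone is not enough: MFA, device posture, and the
    session risk score are checked on every request, which is the core
    shift away from perimeter-only defenses.
    """
    if not (ctx.user_authenticated and ctx.mfa_passed):
        return False
    if not ctx.device_managed:
        return False
    return ctx.risk_score < risk_threshold

# Stolen credentials without MFA or a managed device are still denied:
stolen_creds = SessionContext(True, False, False, 0.2)
# A fully verified, low-risk session is allowed:
healthy = SessionContext(True, True, True, 0.1)
```

The point of the sketch is that the decision travels with the session, not the network boundary: the same credentials yield different outcomes depending on device posture and risk context.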
Marcus on AI – December 20, 2025
2025 turned out pretty much as I anticipated. What comes next?
- AGI didn’t materialize (contra predictions from Elon Musk and others); GPT-5 was underwhelming, and didn’t solve hallucinations.
- LLMs still aren’t reliable; the economics look dubious. Few AI companies aside from Nvidia are making a profit, and nobody has much of a technical moat. OpenAI has lost a lot of its lead.
- Many would agree we have reached a point of diminishing returns for scaling; faith in scaling as a route to AGI has dissipated.
- Neurosymbolic AI (a hybrid of neural networks and classical approaches) is starting to rise.
- No system solved more than 4 (or maybe any) of the Marcus-Brundage tasks.
- Despite all the hype, agents didn’t turn out to be reliable.

Overall, by my count, sixteen of my seventeen “high confidence” predictions about 2025 proved to be correct.
Here are six or seven predictions for 2026; the first is a holdover from last year that will no longer surprise many people.
- We won’t get to AGI in 2026 (or 2027). At this point I doubt many people would publicly disagree, but just a few months ago the world was rather different. Astonishing how much the vibe has shifted in just a few months, especially with people like Sutskever and Sutton coming out with their own concerns.
- Humanoid domestic robots like Optimus and Figure will be all demo and very little product. Reviews by Joanna Stern and Marques Brownlee of one early prototype were damning; there will be tons of lab demos, but getting these robots to work in people’s homes will be very, very hard, as Rodney Brooks has said many times.
- No country will take a decisive lead in the GenAI “race”.
- Work on new approaches such as world models and neurosymbolic AI will escalate.
- 2025 will be known as the year of the peak bubble, and also the moment at which Wall Street began to lose confidence in generative AI. Valuations may go up before they fall, but the Oracle craze early in September and what has happened since will in hindsight be seen as the beginning of the end.
- Backlash to generative AI and radical deregulation will escalate. In the midterms, AI will be an election issue for the first time. Trump may eventually distance himself from AI because of this backlash.
And lastly, the seventh: a metaprediction, which is a prediction about predictions. I don’t expect my predictions to be as on target this year as last, for a happy reason: across the field, the intellectual situation has gone from one that was stagnant (all LLMs all the time) and unrealistic (“AGI is nigh”) to one that is more fluid, more realistic, and more open-minded. If anything would lead to genuine progress, it would be that.
Enable AI. Reduce cybercrime. Unleash abundance
Perhaps the biggest near-term AI opportunity is reducing cybercrime costs. With serious attacks unfolding almost daily, digital insecurity’s economic weight has truly grown out of control. Per the European Commission, global cybercrime costs in 2020 were estimated at 5.5 trillion euros (around $6.43 trillion). Since then, costs have only spiraled. In 2025, Cybersecurity Ventures estimates annual costs will hit $10 trillion, a showstopping 9 percent of global GDP. As Bloomberg notes, global cybercrime is now the world’s third-largest economy. This is truly an unrivaled crisis.
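The headline figures above can be sanity-checked with simple arithmetic. The global GDP figure used below (~$110 trillion) is an assumption for illustration, not a number from the article; the exchange rate is implied by the quoted euro and dollar amounts.

```python
# Rough sanity check of the cybercrime cost figures quoted above.
eur_2020_cost = 5.5e12            # European Commission 2020 estimate, euros
eur_to_usd = 6.43e12 / 5.5e12     # rate implied by the article's conversion
usd_2020_cost = eur_2020_cost * eur_to_usd  # ~$6.43 trillion

cost_2025 = 10e12                 # Cybersecurity Ventures 2025 estimate, USD
global_gdp = 110e12               # assumed global GDP, illustrative only

share_of_gdp = cost_2025 / global_gdp  # roughly 0.09, i.e. ~9 percent
```

Under that assumed GDP figure, $10 trillion does indeed come out to roughly 9 percent, consistent with the claim in the text.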
Thankfully, it is also an unrivaled opportunity. Given the problem’s sheer scale, any technology, process, or policy that shaves off just a sliver of these cyber costs has percentage point growth potential. Reduce cyber threats, and abundance will follow.
Automated software translation is far from the only near-term AI opportunity. Already, studies have shown that AI can automate vulnerability detection, discovering serious security issues without human involvement. Soon, software could be proactively secured before it even ships. Likewise, advances in AI task completion suggest that software patching could soon be automated: within a few years, fixes could be generated and shipped moments after vulnerabilities are discovered. Beyond that lie countless other possibilities in advanced cyber intelligence, threat detection, real-time response, and more.
