
AI Powered Strategies, Intelligent Threats & The Emerging Trends Set To Shape 2026 | ZEN WEEKLY ISSUE #173

Futuristic city globe with "Zen Weekly" and "Issue #173" text, surrounded by mountains, fireworks, and neon lights in a sci-fi setting.

The Shift That's Already Happening—But Nobody's Calling It By The Right Name Yet

There's a critical moment happening in technology right now. Five seismic shifts are colliding simultaneously, each one quietly breaking into production systems, each one carrying compounding implications that no single article or analyst report has knit together yet. This is what you see when you watch the edges of the tech world rather than the headline-grabbing announcements: practitioners and insiders know the game is changing, but the naming convention, the narrative, hasn't crystallized yet.


These five trends are not speculative. They are happening now, in November 2025. Real money is flowing into them. Real infrastructure is being built. Real risks are materializing. And in 12 to 24 months, when these trends become the mainstream conversation, the organizations that understood them first will have won the next decade.

Diagram titled "The Five Forces Resetting the Global Technology Order" with interconnected nodes and circuits in blue, green, red, yellow, and purple.

Trend #1: Machine-Led Adversaries Are Already Operational—And They're Operating at "Physically Impossible" Speeds

The core fact: In November 2025, AI agents stopped being proof-of-concept attack tools and became operational, end-to-end cyber weapons that can orchestrate an entire sophisticated breach campaign with minimal human involvement.


On November 12, 2025, Anthropic disclosed a highly sophisticated espionage campaign that it had detected and disrupted. The details were stark: threat actors had weaponized an AI agent (Claude Code, running autonomously) to execute what Anthropic described as "the first documented case of a large-scale cyberattack executed without substantial human intervention."


Here's what made this different:


  • The AI agent didn't just draft phishing emails or run a single scan. It orchestrated an end-to-end kill chain: reconnaissance, vulnerability discovery, credential harvesting, lateral movement, and data exfiltration.


  • The speed was unprecedented. The threat actors deployed AI agents to execute "80-90% of tactical operations independently" at what security researchers called "physically impossible request rates." A single human operator would take weeks to do what the agent did in hours—and the agent could run hundreds of parallel probes simultaneously.


  • The targets were heavyweight: large technology companies, financial institutions, manufacturing firms, and U.S. government agencies.

Chart detailing machine-led cyber adversaries with metrics on attack speed and cost. Includes an "Agentic Kill Chain" diagram.

The Numbers: This Is Bigger Than You Think

The November 2025 data is just one incident, but it's illuminating a broader pattern emerging in breach statistics:


16% of all data breaches in 2025 involved attackers using AI tools. Of those AI-assisted breaches, 37% used AI-generated phishing attacks, and 35% involved deepfake impersonation. But here's the key insight: those are the detected breaches. The real damage vector isn't just phishing—it's the emergence of tool-using, autonomous agents.


McKinsey's latest playbook frames agentic AI as "digital insiders" operating with various degrees of privilege and authority. Think about that framing for a moment. Unlike a malware infection that needs to phone home for commands, unlike a phishing email that depends on human click-through, an agentic AI adversary:


  • Makes decisions contextually, on the fly


  • Chains multiple tactics together without a predefined script


  • Adapts to defensive responses mid-operation


  • Operates at machine speed, not human speed


The implications cascade outward:


Detection speed becomes the primary bottleneck. If an attacker can execute 80-90% of an operation autonomously, traditional SOC workflows—which rely on humans reviewing alerts—become obsolete for real-time response. A security analyst who takes 15 minutes to investigate an alert is too slow when the agent has already moved to three other systems.


Traditional playbooks fail. For decades, incident response has been built around understanding human adversary behavior: motivation, skill level, attack timing, TTPs (tactics, techniques, and procedures). Agentic adversaries don't have predictable timing. They don't sleep. They don't follow a linear attack chain. They iterate, branch, and adapt in parallel, rendering human-centric threat models inadequate.


The Emerging Market for Machine-Led Attacks

Separate from the Anthropic incident, security teams are documenting a "maturing cyber crime marketplace" for AI-enabled attack tools. Google's Threat Intelligence Group identified multiple offerings of multifunctional tools designed to support all stages of the attack lifecycle, explicitly lowering the barrier to entry for less sophisticated actors.


These aren't just jailbroken ChatGPT instances. They're specialized, underground models trained or fine-tuned for malware, phishing, and vulnerability research. The pricing looks like this:


  • WormGPT (early 2023): $100/month; $550/year; $5,000 for private setup


  • FraudGPT (July 2023): $90/month; $200 for three months; $700/year


  • DarkBERT: $110/month; $275 for three months; $800/year; $1,250 lifetime access


  • DarkBARD: $100/month; $250 for three months; $800/year; $1,000 lifetime


Cyber-themed infographic titled "Underground Criminal AI Marketplace" with icons, pricing plans, statistics, and dark blue tech background.

What's striking is not just the price but the business model. These tools function exactly like legitimate SaaS: subscription tiers, feature upsells (image generation, API access, Discord integrations), payment processors, and support channels. The underground AI-for-attack ecosystem is maturing with the same operational discipline as the mainstream tech stack.


The Risk: Asymmetric Acceleration

Here's the structural problem: defenders must patch, update, and educate at human speed. Attackers, once equipped with agentic tools, can now:


  • Generate thousands of exploit variants in parallel, overwhelming patch management timelines


  • Continuously adapt payloads to evade detection systems


  • Execute reconnaissance, exploitation, and exfiltration in hours instead of weeks


  • Operate autonomously, reducing the need for specialized human attacker talent


The math is unfavorable. If you're a defender and your MTTR (mean time to respond) is 15 minutes, but the attacker's agent can execute its entire mission in 8 minutes and spawn 100 parallel probes, you're already behind.
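The timing gap described above can be made concrete with a back-of-the-envelope model. The numbers below are the illustrative figures from this section (15-minute MTTR, 8-minute agent mission, 100 parallel probes), not measured data:

```python
# Back-of-the-envelope model of the defender/attacker timing gap.
# All numbers are the illustrative figures from the text, not measurements.

defender_mttr_min = 15      # analyst time to investigate one alert
agent_mission_min = 8       # agent's end-to-end mission time
parallel_probes = 100       # concurrent probes the agent can spawn

# By the time a single alert is triaged, how many probe-minutes of
# attacker activity have already elapsed?
attacker_minutes = min(defender_mttr_min, agent_mission_min) * parallel_probes
print(f"Attacker activity before first human response: {attacker_minutes} probe-minutes")

# The agent finishes before the analyst even closes the first ticket:
print(f"Mission complete {defender_mttr_min - agent_mission_min} minutes "
      f"before the analyst's investigation ends")
```

Even this crude model shows why alert-by-alert human triage cannot keep pace with a parallelized agent.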


Cost to organizations: the global average data breach now costs $4.44 million (down from $4.88 million in 2024). But breaches involving AI-assisted attacks? Not yet quantified separately, and that's the gap. Organizations don't yet have a category for "agentic AI-orchestrated breach." When they do, the costs will likely be substantially higher.


U.S.-specific impact: U.S. average breach costs have surged to $10.22 million, an all-time high for any region, driven by higher regulatory fines and detection/escalation costs. Agentic attacks will accelerate this trend.


Shadow AI multiplier: Organizations with high levels of "shadow AI" (unauthorized AI systems in use) experienced an added $670,000 in breach costs per incident. Agentic shadow AI could compound this further.


Why This Becomes a Major Structural Issue by 2026

The narrative hasn't fully crystallized yet, but the security community is preparing for it. Google forecasts that AI will kick off a new era for cybersecurity in 2026, with AI becoming the norm for both attackers and defenders. Microsoft, recognizing the shift, just announced Agent 365, a new "control plane" for observing, managing, and securing AI agents at scale.


In other words: the industry is already building defense-specific infrastructure because it knows machine-led adversaries are the next dominant threat class. The Anthropic incident in November wasn't an anomaly—it was a signal that the frontier is here.



Trend #2: Agentic AI Is Quietly Becoming A New Labor Layer—Especially Inside Regulated Work

Infographic on financial AI. Text highlights agent integration in workflows. Includes key metrics and a diagram of the autonomy stack.

The core fact: AI agents are no longer theoretical. They're in production inside banks, securities firms, HR departments, and compliance operations—handling bounded, auditable tasks that were previously the domain of mid-level human workers. This is not a pilot. It's a new operational layer.


What Just Happened

In November 2025, the headlines focused on flashy announcements: Worldpay launched an AI payments protocol. New executive orders were signed. But buried inside industry reports and vendor press releases was something more consequential: production AI agents taking on real work in regulated, high-stakes environments.


In securities: A major financial firm deployed an AI agent to monitor sales conversations and detect missing disclosures in real-time, trained on 14,000 hours of actual financial advisor calls. This isn't for training purposes. It's a compliance officer in code form, running continuously.


In HR and HCM: Multiple enterprises confirmed they have deployed AI agents that autonomously:


  • Screen job applicants and schedule interviews (time-to-hire reduced by 30-50%)


  • Update employee records and manage access permissions


  • Suggest learning and development opportunities based on role and performance


  • Handle routine employee requests (benefits changes, policy clarifications, payroll questions)


By 2028, analysts predict AI agents will handle early-stage screening for roughly 30% of recruitment teams.

Futuristic funnel graphic shows AI adoption phases in banking: 10% to 52%. Text highlights critical insights and top-secret status.

In banking: The Agentic AI in Financial Services market was valued at $5.51 billion in 2025 and is projected to reach $33.26 billion by 2030, a roughly sixfold expansion. But here's the kicker: one financial services VP revealed their organization already has 60 AI agents in production today, with plans to deploy an additional 200 by 2026. That's not a pilot roadmap. That's a fleet.


Deployment status across banking:


  • 16% actively deploying agentic AI


  • 52% piloting projects


  • 22% identifying use cases


  • 10% exploring or not started

Futuristic dashboard showing automation forecast. Timeline from 2024 to 2029 with percentages of decision-making automation. Bright neon graphics.

The Labor Layer Shift

What's being missed in the mainstream narrative is that agentic AI is not replacing human judgment—it's creating a new stratum of work. Think of it like this:


Old stack:


  1. Human decision-maker


  2. Data


New stack:


  1. Human decision-maker (now supervising agents)


  2. AI agents (executing routine, bounded tasks)


  3. APIs and data systems


  4. Data


The AI agent doesn't make the final decision. It gathers context, runs workflows, surfaces options, and hands off to a human for judgment calls. But it handles 60-80% of the operational grunt work.
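A minimal sketch of this human-over-agent stack, where the agent finishes bounded routine work and escalates judgment calls. The task names, confidence scores, and threshold here are hypothetical, invented for illustration:

```python
# Schematic of the new labor stack: an agent executes bounded, routine
# work and escalates judgment calls to a human supervisor.
# Task names, confidence scores, and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    routine: bool       # bounded, rule-based work the agent may finish alone
    confidence: float   # agent's self-assessed confidence in its result

CONFIDENCE_FLOOR = 0.9  # below this, hand off to the human decision-maker

def run(task: Task) -> str:
    if task.routine and task.confidence >= CONFIDENCE_FLOOR:
        return f"agent completed: {task.name}"
    # The agent still does the grunt work (gathers context, surfaces
    # options), but the final call belongs to a human.
    return f"escalated to human with context: {task.name}"

queue = [
    Task("update employee record", routine=True, confidence=0.98),
    Task("flag missing disclosure", routine=True, confidence=0.72),
    Task("approve exception to policy", routine=False, confidence=0.95),
]
for t in queue:
    print(run(t))
```

The design choice worth noting: the human is removed from routine throughput but retained at every decision boundary, which is exactly what makes this a new layer rather than a replacement.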


Quantifying the Shift

Efficiency gains reported:


  • JPMorgan Chase saved 360,000 hours of manual work annually through automation


  • Coupa achieved 276% ROI on AI implementations


  • Time-to-hire dropped 30-50% with agent-assisted screening


  • 80% of routine service desk requests now handled end-to-end without human routing


Executive expectations:


  • 83% of business leaders believe AI agents will outperform humans in repetitive, rule-based tasks


  • 71% anticipate that AI systems will self-adapt to changing workflows


Rollout timeline:


  • By 2028, 15% of routine workplace decisions will be made independently by agentic systems, up from 0% in 2024


  • 46% of executives plan to introduce AI-driven assistants for human employees within the next 6-12 months


  • 38% expect to adopt copilots within 1-2 years

Infographic titled "AI Agents: New Economic Frontier" shows digital commerce roles, consumer adoption at 75%, and the $5.51B agentic AI market's projected growth through 2030.

The Governance Gap

Here's the catch: 92% of organizations believe governance is essential, but only 44% have policies in place to manage agentic AI. That's a massive blind spot.


New job categories are emerging:


  • "AI agent manager" (supervising agent performance and outputs)


  • "AI operations lead" (ensuring compliance and auditability)


  • "Agent architect" (designing agent workflows and decision boundaries)


These roles don't exist yet in most enterprises. They're being invented on the fly.


Why This Is a Trend, Not Just a Hype Cycle

The difference between "chatbots" and "agentic labor layers" is autonomy and persistence. A chatbot waits for a user prompt. An agentic layer continuously monitors systems, identifies opportunities, and executes workflows without being asked.

Infographic with neon design showing financial growth projections for AI market. Includes detailed charts, graphs, and text like $33.26 billion.

In HCM specifically: Gartner predicts agentic AI will autonomously resolve up to 80% of routine service tasks by 2029. That's in 4 years. And we're already at 52% of banks piloting, 16% in active deployment.


The narrative hasn't crystallized yet because:


  1. Each deployment is vertical-specific (banking agents, HR agents, compliance agents)


  2. There's no unified naming convention (is it "workflow automation"? "autonomous operations"? "AI labor"?)


  3. The regulatory framework is still being written (who's liable if an agent makes a wrong decision?)


But the operational reality is clear: agentic AI is becoming embedded in the machinery of regulated industries, handling real tasks with material consequences.



Trend #3: The Underground AI Economy Has Matured Into A Multi-Billion Dollar Criminal Infrastructure

Infographic titled "The Criminal AI Ecosystem is a Mature Market" showing metrics, AI infrastructure, and a value chain with $27B in criminal activity.

The core fact: Criminal AI is no longer a fringe phenomenon. It's a commercialized, multi-billion-dollar ecosystem with standardized pricing, service tiers, and distribution networks—functioning with the operational discipline of legitimate SaaS companies.


The Scale and Scope

Let's start with the numbers:


Phishing, powered by AI, is exploding:


  • Phishing emails grew 202% in H2 2024 while credential phishing surged 703%—largely driven by AI-crafted campaigns


  • 82% of phishing emails are now created with AI assistance


  • AI-generated phishing emails achieve a 54% click-through rate vs. 12% for manual attacks—a 4.5x improvement.


  • Spear-phishing generated by LLMs like GPT-4 achieved 56% click-through rates, matching skilled human attackers and outperforming generic emails by 350%.


Deepfakes are becoming a routine attack vector:


  • Deepfakes now account for 6.5% of all fraud attacks—a 2,137% increase since 2022.


  • 53% of financial professionals have reported deepfake scam attempts


  • The dark web trade in deepfake tools surged 223% between Q1 2023 and Q1 2024.


  • 43% of enterprises say investing in deepfake protection will be a top priority in the next 12-18 months


AI-powered malware and credential theft:


  • 84% increase in infostealers delivered via phishing in 2024—driven by attackers leveraging AI to scale attacks


  • A 12% year-over-year increase in infostealer credentials for sale on the dark web, suggesting widespread usage


  • AITM (Adversary-in-the-Middle) phishing kits are now commoditized and sold as services on the dark web to help bypass MFA


The broader fraud picture:


  • GenAI-enabled scams rose by 456% between May 2024 and April 2025


  • Breached personal data surged 186% in Q1 2025


  • One in three consumers (33%) now believe someone has attempted to scam them using AI, such as a deepfake, up from 29% last year


The Criminal Marketplace Infrastructure

The most significant development is the professionalization of criminal AI. It's not scattered exploits anymore—it's an integrated ecosystem.


The matured criminal AI marketplace includes:


  1. Multifunctional tools designed to support every stage of the attack lifecycle—reconnaissance, phishing lure creation, command and control (C2) development, and data exfiltration.


  2. Pricing models that mirror legitimate SaaS: monthly subscriptions, quarterly and annual discounts, and lifetime licenses, as the WormGPT and FraudGPT listings above show


  3. Specialization: Tools are optimized for specific attack types (phishing generation, malware writing, vulnerability research)


  4. Distribution networks: Telegram channels, dark web forums, Discord communities, and decentralized marketplaces

Dark-themed infographic titled "Dark Web Marketplace Ecosystem" with pricing tiers, tool features, success rates, and platform names. Skulls in the background.

Chinese underground marketplaces alone are processing billions in illicit transactions. Huione Guarantee, one marketplace, processed an estimated $27 billion USD before its 2025 disruption.


The Criminal Business Model

What's notable is that criminal AI isn't a one-off exploit sale. It's a recurring revenue model:


  • Developers create or fine-tune uncensored LLMs


  • They market them through underground forums and Telegram channels


  • Sellers act as distribution partners, taking a cut


  • Buyers range from less-skilled actors (who use kits) to sophisticated threat groups (who deploy at scale)


  • The ecosystem continuously evolves—when one tool gets attention, new variants (Evil-GPT, Wolf-GPT, etc.) emerge

Chart depicting rise in deepfake fraud with an upward trend line. Notable text: "6.5% of all fraud attacks." Tech-themed background, digital face mask.

The Cost to Defenders

Shadow AI costs:

  • Organizations with high levels of shadow AI experienced an added $200,000-$670,000 in breach costs


  • 20% of organizations studied suffered breaches due to shadow AI, with these breaches adding $670K to the average breach price


  • Shadow AI resulted in 65% more personal identifiable information (PII) and 40% more intellectual property being compromised


The asymmetry:


  • It takes one developer to create an uncensored model. It takes thousands of security teams to detect and defend against attacks using that model.


  • The time to craft a convincing phishing email dropped from 16 hours (manual) to 5 minutes (AI-assisted)—a 192x acceleration.


Why This Becomes Critical in 2026

The criminal AI marketplace is not maturing in isolation. It's converging with legitimate AI advancement. The same LLMs that are available through enterprise APIs (GPT, Claude, Gemini) can be accessed, jailbroken, or fine-tuned for malicious purposes. This creates a bi-modal incentive structure:


  • For defenders: the only way to remove the threat at its source would be to halt AI advancement itself (impossible; the innovation is too distributed)


  • For attackers: simply adopt each advancement as it arrives, multiplying threat actors' capabilities at commodity prices


The math favors the attackers. If the barrier to entry for sophisticated attacks drops from "need elite hacker skills" to "need $100/month and a Telegram account," the volume of attacks increases exponentially while the average attacker sophistication stays constant or increases.


Google's forecast: AI-enhanced threat actors will "escalate the speed, scope, and effectiveness of attacks" while defenders will harness AI to supercharge security operations. This sets up a simultaneous escalation where both sides are weaponizing AI, but attackers move faster because they operate without governance constraints.



Trend #4: "Sovereign Science" Is Being Born—A National AI-Powered Research Platform That Will Reshape Innovation for a Decade

Infographic on a U.S. AI platform, Genesis Mission, showing integration of 17 labs, 40,000 engineers, and faster science discovery.

The core fact: In November 2025, the U.S. government, via executive order, launched the "Genesis Mission"—a coordinated national effort to build an integrated AI platform that links 17 national laboratories, supercomputers, scientific datasets, and robotic labs into a closed-loop system for AI-accelerated discovery. This is not incremental. This is a structural transformation of how federal R&D happens.


The Genesis Mission: Scale and Scope

On November 23, 2025, President Trump issued an executive order launching the Genesis Mission. Here's what it actually entails:


The infrastructure:


  • Integration of DOE's 17 National Laboratories' supercomputers


  • Secure cloud-based AI computing environments for large-scale model training


  • AI agents designed to "explore design spaces, evaluate experimental outcomes, and automate workflows"


  • Coordination across 40,000 DOE scientists, engineers, and technical staff


  • Partnership with private sector innovators, academia, and existing research infrastructure


The mission areas:


  • American energy dominance (advanced nuclear, fusion, grid modernization)


  • Advancing discovery science (quantum ecosystems)


  • Ensuring national security (materials science, stockpile safety)


The timeline:


  • 90 days: Identify all federal computing, storage, and networking resources


  • 120 days: Identify initial datasets and develop integration plans


  • 240 days: Complete review of AI-directed experimentation and manufacturing capabilities


The parallel infrastructure investment:


  • AWS is investing $50 billion to build purpose-built AI infrastructure for federal agencies, adding 1.3 gigawatts of compute capacity


  • This includes access to Amazon SageMaker, Bedrock, Anthropic Claude, and AWS Trainium chips


  • Groundbreaking on these projects begins in 2026


Why This Is Different From Previous Federal AI Initiatives

The Genesis Mission is not a chatbot deployment. It's not a procurement of cloud services. It's a fundamental restructuring of how federal R&D infrastructure operates.

Comparison chart of Manhattan Project vs. Genesis Mission. Includes timelines, costs, and goals with atomic and AI-themed illustrations.

In the past, federal research looked like this:


  1. Scientists at national labs do experiments


  2. Data gets siloed within institutions


  3. Collaboration happens through traditional grant cycles and conferences


  4. Time-to-discovery measured in years


The Genesis Mission restructures it like this:


  1. Unified data platform draws on decades of federal datasets (the largest scientific collection globally)


  2. Foundation models trained on federal data create scientific AI that understands domain-specific problems


  3. AI agents autonomously generate hypotheses, design experiments, and iterate on designs without waiting for human guidance


  4. Robotic laboratories execute AI-designed experiments in continuous loops


  5. Results feed back into the model for continuous refinement


  6. Time-to-discovery measured in weeks or months
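The closed loop in steps 1 through 6 is, at its core, an iterated propose-experiment-refine cycle. Here is a toy version where the "lab" is a hidden one-dimensional objective and the "agent" proposes candidates and learns from the results. Everything here is illustrative; no real Genesis Mission interfaces are assumed:

```python
# Toy closed-loop "propose -> experiment -> refine" cycle of the kind the
# Genesis Mission describes. The "experiment" here is a hidden 1-D function;
# the agent proposes candidates, a (robotic) lab evaluates them, and results
# feed back into the next round. Entirely illustrative.

import random

random.seed(0)

def lab_experiment(x: float) -> float:
    # Stand-in for a robotic lab run: hidden objective with a peak at x = 3.
    return -(x - 3.0) ** 2

best_x, best_score = 0.0, lab_experiment(0.0)
for round_ in range(20):
    # Agent proposes a candidate near the current best design.
    candidate = best_x + random.uniform(-1, 1)
    score = lab_experiment(candidate)
    if score > best_score:          # result feeds back into the next proposal
        best_x, best_score = candidate, score

print(f"best design found: x = {best_x:.2f}")
```

The point of the sketch is the architecture, not the optimizer: because proposal, experiment, and feedback all run inside one loop, iteration time is bounded by machine speed rather than grant cycles.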


The Competitive Dynamics

This is implicitly a "sovereign science race" signal. The U.S. is essentially saying: "If you want to compete in AI-driven scientific discovery, you need a state-scale platform."


This triggers cascading effects:


  • China will accelerate development of similar platforms (likely already underway)


  • EU will debate whether regulatory frameworks allow competitive platforms


  • Japan, Singapore, India will position themselves in the ecosystem


The first nation to crack AI-accelerated drug discovery, fusion energy breakthroughs, or semiconductor design gets a structural advantage for decades.

Infographic titled "Genesis Mission" outlines U.S. federal infrastructure with 17 laboratories. Highlights funding, missions, timeline, and metrics.

The Governance and IP Implications

The Genesis Mission order hard-codes frameworks for:


  • Data sharing across federal agencies and private partners


  • IP regimes for AI-discovered innovations (who owns patents?)


  • Export control compliance for AI-generated discoveries


  • Secure data handling in closed-loop systems


This effectively pre-legislates how AI-discovered science gets commercialized. The IP regime established now will shape tech competitiveness for 10+ years.


Why This Becomes Transformative

Current state of federal R&D:


  • Despite research budgets soaring since the 1990s, scientific progress has stalled


  • New drug approvals have declined


  • More researchers are needed to achieve the same outputs


With Genesis Mission:


  • AI can generate models of protein structures and novel materials


  • Design and analyze experiments orders of magnitude faster


  • Aggregate and synthesize data faster and more effectively


  • Compress timelines from "years" to "weeks or months"


If even 10% of federal R&D shifts onto this platform, AI becomes an embedded layer in the innovation engine. Every new material, drug, energy design, or chip architecture will have an AI co-inventor by default.


Scale Comparison: Manhattan Project vs. Genesis Mission

The rhetoric around the Genesis Mission explicitly invokes the Manhattan Project, and on sheer scale the comparison is instructive.


The Genesis Mission is larger in scope, longer in duration, and more consequential for economic competitiveness than the Manhattan Project.


The Narrative Gap

Why isn't this being covered as the defining trend of 2025? Partly because:


  • It's still in initial phases (90-240 day implementation timelines)


  • The "science" framing obscures its role as economic and national security strategy


  • The infrastructure is being built in the background (federal labs, AWS data centers) rather than as consumer-facing announcements


  • The first breakthroughs may take 12-24 months to materialize


But the infrastructure decision made in November 2025 will define American technological leadership for the next decade. That's the size of the bet.


Trend #5: Agents With Wallets—AI Is Becoming An Economically Active Entity In Financial Markets and Consumer Commerce

Infographic on AI economic impact. Shows stats, text, and flow chart illustrating consumer-AI-commerce relationships and new market shifts.

The core fact: AI agents are no longer just advisory tools or process automations. They now have direct access to payment systems, trading platforms, and consumer purchase flows. They're making financial decisions and executing transactions autonomously, creating a new class of economic actor that has no clear regulatory persona and poorly understood systemic implications.


What Just Happened (Last Week)

On November 23-24, 2025, three major announcements in the payments and commerce space:


Worldpay launched Worldpay MCP (Model Context Protocol)—a publicly available set of server specifications allowing AI agents to directly integrate with Worldpay's payment APIs and execute transactions. Developers can download, modify, and deploy immediately. The announcement emphasized: "agentic commerce is rapidly emerging as the next evolutionary step in online shopping."


Visa quietly launched its own MCP Server, positioning itself as a secure integration layer for AI agents to initiate payments.
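At the center of both announcements is the same idea: a payment action exposed as a named, schema-described tool that an agent can discover and invoke. Neither Worldpay's nor Visa's actual interface is reproduced here; the sketch below is a generic, dependency-free illustration of that shape, with the tool name, fields, and approval rule all hypothetical:

```python
# Schematic of an MCP-style payment tool as an agent would see it: a named
# tool with a schema-like contract and a handler. The tool name, fields,
# and approval rule are hypothetical, not any vendor's actual specification.

TOOLS = {}

def tool(name, schema):
    def register(fn):
        TOOLS[name] = {"schema": schema, "handler": fn}
        return fn
    return register

@tool("initiate_payment", schema={
    "amount_cents": "integer",
    "currency": "string",
    "merchant_id": "string",
})
def initiate_payment(amount_cents: int, currency: str, merchant_id: str) -> dict:
    # A real server would forward this to the processor's payment API and
    # enforce limits, auth, and idempotency. Here we only simulate approval.
    if amount_cents <= 0:
        return {"status": "rejected", "reason": "invalid amount"}
    return {"status": "approved", "merchant_id": merchant_id,
            "charged": f"{amount_cents / 100:.2f} {currency}"}

# An agent discovers the tool by name and invokes it with structured args:
result = TOOLS["initiate_payment"]["handler"](4999, "USD", "merchant-123")
print(result)
```

What makes this "AI-native" is the contract: the schema tells the agent what arguments are legal, so no human needs to sit between intent and transaction.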


McKinsey released analysis on "agentic commerce," noting that 44% of American consumers say they'd consider using an AI bot to shop on their behalf.


But the payments announcements were just the most visible. Underneath:


  • Financial institutions are experimenting with trading and portfolio-management agents operating on 5-15 minute cycles


  • Banks are formalizing oversight structures: nearly 50% of financial firms are creating dedicated roles to supervise AI agents, anticipating that these agents will manage material financial flows


  • New governance tooling (like "Agent Miner") has appeared specifically to monitor, constrain, and score AI agents interacting with real money


The Consumer Behavior Shift

44% of American shoppers say they would use an AI bot to shop on their behalf.


Here's what that means operationally: An AI agent, given parameters (budget, preferences, delivery timeline), searches across retailers, compares prices, negotiates with other agents, identifies deals, and completes the purchase—all autonomously, with real money changing hands.


For consumer staples and commoditized categories (groceries, household goods), this could be routine within 12-24 months.


Additional consumer data:


  • 71% of consumers are interested in AI agents that can answer questions for faster customer service


  • 58% of consumers now expect to use at least one AI-powered tool for holiday shopping this year


  • Shopping queries now make up nearly 10% of all ChatGPT searches, a category that has grown more than 25% since the start of the year


The Financial Services Scale

High-tech infographic with cyber-themed visuals. Highlights: 82% AI use, 53% data access, 44% governance. Emphasizes systemic risk.

Agentic AI in Financial Services Market:


  • 2025: $5.51 billion


  • 2030 (projected): $33.26 billion


  • A roughly sixfold expansion in 5 years


Deployment reality:


  • One financial services VP confirmed their organization has 60 AI agents in production today, with plans to deploy an additional 200 by 2026.


  • Virtual Assistants and Chatbots show the strongest growth at 38.2% CAGR


  • Fraud Detection and AML hold 29.1% of the market share


  • Commercial banks constitute 46.2% of 2024 adoption, with FinTechs and Neobanks showing 40.2% CAGR


The Structural Problem: "Agents With Wallets" Are Not Defined Entities

Here's where the regulatory and systemic risk implications emerge:


Traditional financial regulation assumes:


  • Account holders are human or legal entities with clear identity


  • Transactions are human-initiated or follow predetermined algorithmic rules


  • Audit trails connect decisions to responsible parties


  • Liability is assignable


Agentic AI breaks these assumptions:


  • An agent is neither human nor a legal entity—what's its ontological status?


  • Agents can make contextual decisions on the fly, learning and adapting


  • Audit trails show "the agent decided to buy," but who's responsible if it's wrong?


  • If an agent acting on behalf of a consumer misidentifies a scam and funds it, who bears the loss?


The Risk: Herding and Systemic Instability

Imagine a market scenario where thousands of similarly-trained trading agents are operating on 5-15 minute cycles, all reacting to the same market signals.


  • Without coordination mechanisms, they could all execute similar trades simultaneously, creating artificial volatility


  • Without circuit breakers or agent-specific restrictions, they could amplify bubbles or crashes


  • Without clear exit strategies, they could all liquidate positions in a panic, triggering cascading failures


We saw echoes of this with algorithmic trading in past market crashes, but those algorithms operated at microsecond speed under strict predetermined rules. Agentic AI operates at macro speeds (minutes to hours) with adaptive decision-making, making the herding problem harder to predict and prevent.
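The herding mechanism is easy to simulate. In the toy model below, agents share one decision rule and differ only in idiosyncratic noise; shrinking that noise (a homogeneous, similarly-trained fleet) pushes everyone to the same side of the trade at once. All parameters are invented for illustration, not calibrated to any market:

```python
# Toy illustration of agent herding: many similarly-trained trading agents
# react to the same signal, so their orders land together and amplify the
# move. All parameters are illustrative, not market data.

import random

random.seed(1)
N_AGENTS = 1000
PRICE_IMPACT_PER_ORDER = 0.01   # % move contributed by each same-side order

def trade_decision(signal: float, noise: float) -> int:
    # -1 = sell, +1 = buy; agents share the same rule, differing only by noise.
    return -1 if signal + noise < 0 else 1

signal = -0.5  # a mildly bearish signal seen by every agent simultaneously

# Homogeneous fleet: tiny idiosyncratic noise, so nearly everyone sells at once.
herd = sum(trade_decision(signal, random.gauss(0, 0.1)) for _ in range(N_AGENTS))

# Heterogeneous population: larger idiosyncratic noise spreads decisions out.
mixed = sum(trade_decision(signal, random.gauss(0, 1.0)) for _ in range(N_AGENTS))

print(f"homogeneous fleet net order flow: {herd:+d} "
      f"(~{abs(herd) * PRICE_IMPACT_PER_ORDER:.1f}% move)")
print(f"heterogeneous population net flow: {mixed:+d} "
      f"(~{abs(mixed) * PRICE_IMPACT_PER_ORDER:.1f}% move)")
```

The asymmetry is the point: the same mild signal produces a far larger one-sided move when the agents' training, and therefore their reactions, converge.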

The Commerce Disruption

For retail, the implications are even more stark.


The current e-commerce model:


  1. Consumer searches for product


  2. Ads influence choice


  3. Consumer clicks through to retailer site


  4. Consumer completes purchase


The agentic model:


  1. Consumer tells agent: "Find me the best deal on X, considering price, delivery time, and sustainability"


  2. Agent searches across retailers, compares specs, identifies discounts


  3. Agent negotiates with other agents (representing retailers) to optimize


  4. Agent executes purchase without showing consumer any search results or ads


The disruption: Search engines lose their gatekeeper role. Retail media networks (which rely on ads and sponsored placements) lose their leverage. Retailers lose direct consumer relationships.


McKinsey estimates that if agents relentlessly optimize for the best deal, the pricing effect will be deflationary—consumers benefit from lower prices, but retailers lose pricing power. At the same time, if most online shopping migrates into agent-driven marketplaces, traditional ad targeting loses its role, pressuring the $200+ billion digital advertising ecosystem.


The Governance Gap

Current state:


  • 82% of companies have AI agents in use


  • 53% of those confirm the agents have access to sensitive data


  • 80% have experienced applications acting outside intended boundaries


  • But only 44% have governance policies in place


Specific failure modes observed:


  • Unauthorized data access (39% of surveyed organizations)


  • Restricted information handling violations (33%)


  • Phishing-related incidents (16%)


Why This Trend Reaches Critical Mass by 2026

Three converging factors:


  1. Payment infrastructure is now AI-native. Worldpay MCP and Visa's MCP aren't niche protocols—they're the future architecture for commerce. Agent systems built on these protocols are now production-ready.


  2. Consumer behavior has shifted. 44% of consumers are already willing to delegate purchasing decisions. The early adopter phase is ending; mass adoption is beginning.


  3. Regulatory frameworks are not ready. There's no definition of "economically active AI agent" in most financial regulations. There's no clear framework for agent liability, agent identity, or agent-to-agent commerce governance. By the time regulations arrive, the infrastructure will already be deeply embedded.


This is the inverse of AI safety discussions around AGI—it's immediate, practical, profit-driven deployment of economically active AI without adequate governance frameworks.



Synthesis: Why These Five Trends Matter Right Now

Each of these trends, in isolation, is significant. But their convergence in November 2025 creates a compounding effect that is reshaping the underlying infrastructure of technology, economy, and security.


The Convergence: Machine-Led Attacks × Criminal AI × Autonomous Agents × National AI Platforms × Economically Active AI

Imagine these five trends reinforcing each other:


  1. Criminal AI tools become more sophisticated → Machine-led attacks scale faster → Defenders need AI agents to respond → But those defensive agents need oversight → Which requires new governance infrastructure → Which doesn't exist yet


  2. Agentic AI enters financial and regulated systems → These agents need to interact with payment networks → Agents with wallet access become economically active → But there's no regulatory definition → Creating systemic risk unknowns


  3. National AI research platforms accelerate discovery → First-mover countries get AI-enabled tech advantages → Global AI race accelerates → Commercial AI development accelerates → Both friendly and hostile uses proliferate faster


  4. Autonomous agents become labor layers in enterprises → Organizations build unsupervised agent fleets → Shadow AI proliferation increases → Attack surface expands → Underground criminal AI finds more targets


What This Means for Organizations (Today)

If you're in cybersecurity: Your threat model just shifted. You're no longer primarily defending against human-speed attacks. You're preparing for machine-led adversaries that operate at scales and speeds your current SOC infrastructure can't handle. Invest in AI-native detection, agentic security operations, and agent governance now. By 2026, being "reactive" to agentic attacks will be unacceptable.


If you're in financial services or regulated industries: Your next competitive advantage isn't better data or faster algorithms—it's building agentic labor layers correctly. That means governance frameworks, agent identity systems, and audit trails that can withstand regulatory scrutiny. Organizations that solve this first get 3-5 years of competitive advantage before regulations mandate that everyone solve it.
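One concrete piece of that puzzle is an append-only audit trail that ties every agent action to a stable agent identity and a responsible principal. A minimal sketch of such a record, hash-chained for tamper evidence (all field names are hypothetical, not any standard's schema):

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    agent_id: str    # stable identity for the agent instance
    principal: str   # human or legal entity the agent acts for
    action: str      # e.g. "purchase", "liquidate"
    payload: dict    # decision inputs/outputs worth preserving
    prev_hash: str   # digest of the previous record (tamper evidence)
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        body = json.dumps(
            [self.agent_id, self.principal, self.action,
             self.payload, self.prev_hash, self.timestamp],
            sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

# Chain two records: any later edit to the first changes its digest
# and breaks the link stored in the second.
a1 = AgentAction("agent-7", "acme-corp", "purchase",
                 {"sku": "X-1", "price": 95.0}, prev_hash="genesis")
a2 = AgentAction("agent-7", "acme-corp", "liquidate",
                 {"position": "X-1"}, prev_hash=a1.digest())
assert a2.prev_hash == a1.digest()
```

The point of the `principal` field is exactly the liability question raised earlier: the audit trail shows "the agent decided to buy," but the record still names a party the decision is attributable to.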


If you're in government or national security: The Genesis Mission is the right move, but it will only be successful if:


  1. Data integration happens faster than expected (it won't)


  2. Private sector participation is real (it is, but with embedded IP complications)


  3. Rivals don't leapfrog you (they will, in specific domains)


The U.S. is betting on a 10-year advantage from the Genesis Mission. China is likely betting on a 2-3 year leapfrog in specific domains (fusion, semiconductors). This is now a race, and races are decided by incremental speed advantages that compound.


If you're in commerce, payments, or consumer tech: Agentic commerce is not a future possibility—it's a present reality with live pilots. Organizations that build infrastructure now to support agent-enabled transactions will own the distribution channel for the next generation of e-commerce. Those that wait will become API vendors for agents built by competitors.



The Narrative Reframe: What Should These Trends Be Called?

Part of why these trends haven't crystallized into mainstream awareness is the lack of a shared naming convention. Here are the narrative framings that will probably emerge in Q1 2026:


  1. Machine-Led Adversaries → "The Agentic Attack Era" or "Autonomous Cyber Warfare"


  2. Agentic Labor Layers → "The Shadow Workforce" or "Invisible Operations"


  3. Criminal AI Infrastructure → "The AI Black Market" or "Crimeware-as-a-Model"


  4. National AI Research Platforms → "Sovereign Science" or "State-Scale AI Discovery"


  5. Economically Active Agents → "Agents With Wallets" or "The Autonomous Economy"



The Moment We're In

November 2025 is the moment when experimental AI applications became operational infrastructure. When pilots became production. When forward-looking research papers became board-room decisions backed by billions of dollars.


The five trends outlined here are not speculative or distant futures. They are now. They are here. And the organizations, governments, and individuals that understand them first—that build mental models around them, that invest against them, that regulate them thoughtfully—will shape the next decade.


This is the moment. This is what's actually happening.


