A companion to Insight Belongs to the Machine. Decisions Belong to the Human.
A Friday in mid-January
On 16 January 2026, Arthur Mensch sat down for an interview on the Big Technology Podcast with Alex Kantrowitz. Mensch is the co-founder and CEO of Mistral AI, the French foundation model lab that by then had reached a $14 billion valuation and was on track for a billion-dollar revenue run rate within the year. Mensch is thirty-two, an alumnus of École Polytechnique and the École Normale Supérieure, and a former Google DeepMind researcher who left in 2023 because he believed the closed-source orientation of the major US labs was wrong for both technical and political reasons. The interview covered familiar ground for the first half. Then Kantrowitz asked Mensch about Dario Amodei's repeated public claims — most recently at Davos a few weeks earlier — that AI would soon eliminate large fractions of white-collar entry-level work.
Mensch dismissed the claim directly. He called it overstated marketing.
That moment, in a podcast that most US tech media did not cover that day, captures something the louder discourse misses. The major foundation model labs do not actually agree with each other. They do not share the same architectural assumptions. They do not share the same political priorities. They are not even in agreement about what AI is going to do to the workforce.
A week earlier at Davos, Mensch had said something more consequential. In ten years, the world won't rely on just U.S. or Chinese AI — Mistral, he predicted, would become the third pole. That is not a typical CEO prediction. It is a strategic claim about how the global AI landscape resolves over the next decade, and it is being made by a leader whose customers already include the European Space Agency, Singapore's defense and homeland security agencies, the French government, and a growing list of Francophone African states evaluating sovereign AI alternatives to US hyperscaler dependence.
The piece that follows is an attempt to map what the leaders shaping agentic AI in 2026 are actually saying, where they actually disagree, and what those disagreements reveal about a landscape that the US-centric news cycle systematically flattens. The voices come from foundation model lab CEOs, agentic platform CEOs, open-source orchestration leaders, senior technology officers inside the world's largest regulated enterprises, and political leaders shaping the regulatory and sovereignty environment those operators work within. They are quoted from public statements made between roughly Q4 2025 and the end of April 2026, with attribution to the specific context wherever that context helps the reader weigh the statement.
The piece is organized around seven tensions — seven places where serious leaders publicly disagree, and where the disagreement reveals something about the architectural, commercial, or political choices in play. A global lens, critical of the established players and attentive to the disrupters, runs through each tension where it applies rather than being confined to a single section. The architecture of the piece itself is the message: the conversation about agentic AI in 2026 is not a US conversation; it is a multipolar one, and the voices that get the loudest US press coverage are not necessarily the ones with the most consequential things to say.
I should disclose, as I do in the appendix of the main article: my professional work is concentrated in the Pega ecosystem. I have included Alan Trefler of Pegasystems alongside Matt Calkins of Appian, Jakob Freund of Camunda, Marc Benioff of Salesforce, and Bill McDermott of ServiceNow as voices in the agentic-platform-CEO category, in proportion to the substantive distinctness of their public statements rather than in proportion to my professional proximity to any of them. The reader should weigh the disclosure as they see fit.
Tension One: Where reasoning happens — design time, runtime, or somewhere new
The architectural tension that most directly maps to the main article is also the tension where the agentic-platform CEOs and the foundation-model-lab CEOs most publicly disagree. It also globalizes in interesting ways once the cast expands beyond the US.
The runtime-reasoning camp — the camp that argues the model is now capable enough to reason about the live state of an enterprise and act on it without extensive pre-designed scaffolding — has its loudest US voice in Sam Altman. OpenAI Frontier, launched in early February 2026 and described by Altman as a semantic layer for the enterprise (a unified platform that lets AI agents navigate business software, execute workflows, and make decisions across an organization's entire technology stack), is the architectural expression of this view. The implicit premise is that the model can navigate the enterprise: the historical question of whether we designed the workflow correctly matters less than the question of whether the agent can figure out the right action right now. Dario Amodei's vision for Claude Cowork, launched 12 January 2026, sits in the same architectural territory with a different governance posture: Anthropic positions Claude's runtime reasoning as the value, with safety mechanisms layered around it. Both companies are large enough and capable enough that they can credibly make this bet.
Demis Hassabis, at Davos 2026, was more measured than either. In an Axios interview at the World Economic Forum he described agentic AI in production as hit-or-miss and slow, an honest engineering acknowledgment from a Nobel laureate AI lab leader that the runtime reasoning thesis is not yet ready for the most consequential enterprise tasks. Sundar Pichai's Cloud Next 2026 keynote in April described the engineering shift inside Google more candidly: the conversation has gone from "Can we build an agent?" to "How do we manage thousands of them?" The honest read is that even the labs that publicly champion runtime reasoning are privately wrestling with the operational reality that an agent that figures it out is only useful if its figuring is reliable, auditable, and governable — and that those properties are easier to engineer at design time than they are to retrofit at runtime.
The design-time-reasoning camp has multiple distinct voices, and they are louder in 2026 than they were a year earlier because the operational record of pure-runtime agents has not been as kind as the marketing.
Matt Calkins, opening Appian World 2026, made the cleanest version of the structural argument with an analogy that landed because it was concrete. AI is the breakthrough of a generation, he said, but we can't seem to make it valuable; he cited reports from PwC, MIT, and McKinsey on adoption without impact. His diagnosis: the missing infrastructure. AI struggles with what I call serious work — large regulated companies and mission-critical applications where errors are not acceptable. The light bulb, he reminded the audience, did not change the world by itself; it changed the world when the surrounding infrastructure of wiring and grids and standards was built. AI is in the same position. The next phase of AI maturity will depend on embedding AI directly into the core of how work gets done. And what makes that embedding reliable is process — a deterministic scaffolding of constraints, checks, escalation paths, and human oversight.
Calkins is worth taking seriously beyond his commercial position because his public commentary is unusually candid. In a December 2025 interview on Yahoo Finance's Market Domination Overtime, he predicted that 2026 would see a battle in AI regulation — federal versus state, with administrative threats to withhold broadband funds in play, and another little battle between Republicans who believe in states' rights and Republicans who believe in presidential authority. That is not a normal CEO prediction. It is a substantive read of US regulatory politics that most CEO commentary actively avoids. Calkins is positioning himself, and Appian, as the platform vendor most willing to name the political dynamics as substantive rather than incidental — which they are.
Alan Trefler at Pegasystems makes a sharper version of the same architectural argument, with a longer institutional history behind it. On Pega's Q4 2025 earnings call in early February 2026, Trefler called competitors who pitch thousands of agents driven by individually written prompts delusional. His core technical claim: Our competition rethinks the problem from scratch over and over again. And the slightly frightening thing is, the models don't always come up with the same answers, even in situations and regulated industries where coming up with the same answer is not just important, it is imperative. Trefler's prescription — creative reasoning at design time, governed execution at runtime — is the architectural position the main article develops, articulated in different vocabulary. He illustrates it with an anecdote that has become a recurring theme in his public talks: a chess problem from the Financial Times that ChatGPT solved incorrectly twice, while Stockfish — a domain-specific chess engine — solved it instantly. It was right the first time, because it was the right AI for the right purpose.
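Trefler's repeatability complaint is easy to see in miniature. The sketch below is illustrative only: plain Python, hypothetical names and thresholds, and a random stub standing in for a sampled model call. It contrasts an ungoverned runtime decision, which may not return the same answer twice, with a design-time decision path that was authored and reviewed once and returns the same answer every time.

```python
# Illustrative only: a random stub stands in for a sampled LLM call;
# the actions, fields, and thresholds are hypothetical.
import random

def ungoverned_agent(case: dict) -> str:
    # Same input, potentially different output on every call.
    return random.choice(["approve", "deny", "escalate"])

def designed_decision(case: dict) -> str:
    # Creative reasoning happened at design time; runtime execution is fixed.
    if case["amount"] > 500 or case["jurisdiction"] in {"EU", "UK"}:
        return "escalate"
    return "approve" if case["risk_score"] < 0.3 else "deny"

case = {"amount": 120.0, "jurisdiction": "US", "risk_score": 0.1}
print({ungoverned_agent(case) for _ in range(5)})   # often more than one answer
print({designed_decision(case) for _ in range(5)})  # always exactly one answer
```

In a regulated workflow, the second set containing exactly one element is the property Trefler calls imperative.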
What is genuinely new in 2026 is that the design-time camp now has an open-source voice with serious intellectual weight. Jakob Freund, co-founder and CEO of Camunda, has been articulating a distinct position from Berlin that the US press largely misses. Camunda is the BPMN-and-DMN orchestration platform whose open-source roots and process-centric philosophy have made it the quiet substrate of many large European digital transformations. In February 2026, Camunda shipped 8.9 Alpha 5 with what Freund called native MCP server support — the orchestration cluster now exposes its capabilities via a built-in Model Context Protocol server, meaning any MCP-compliant client (VS Code, GitHub Copilot, Claude Code) can discover and invoke Camunda tools without custom integration. Agent-agnostic, composable, no lock-in, Freund wrote on LinkedIn. The same release added a built-in audit log that natively records every user and client operation across process, identity, and task domains. When a regulator asks, you have the answer. This is what governance at the orchestration layer looks like.
Freund's position globalizes the design-time camp's architectural argument in a way the US proprietary platforms cannot. He frames Camunda's value proposition as blending deterministic orchestration via BPMN with agentic orchestration via agents, so you can implement as much or as little AI as you want within guardrails. The deeper observation, made on Camunda's blog in May 2025 and reiterated through 2026, is that the hardest part of agentic AI in the enterprise is not building an agent — it is making it reliable, auditable, and safe to run at scale in production. BPMN, a public ISO standard rather than a vendor format, gives him a plausible claim to architectural neutrality that no proprietary platform can match.
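A minimal sketch of the blend Freund describes, in plain Python rather than Camunda's actual API or BPMN notation: deterministic steps for predictable behavior, a single agent step whose output is constrained to an allowlist, and an audit entry recorded for every operation. The step names and claim structure are hypothetical.

```python
# Pattern sketch only -- not Camunda's API. Deterministic steps, one bounded
# agent step, and an append-only audit trail for every operation.
import json, time

AUDIT_LOG = []

def audit(step, detail):
    AUDIT_LOG.append({"ts": time.time(), "step": step, "detail": detail})

def validate_input(claim):                      # deterministic step
    assert claim["amount"] > 0
    audit("validate_input", claim)
    return claim

def agent_assess(claim, llm):                   # agentic step, output constrained
    verdict = llm(f"Assess this claim; answer 'approve' or 'review': {claim}")
    verdict = verdict if verdict in {"approve", "review"} else "review"
    audit("agent_assess", {"verdict": verdict})
    return {**claim, "verdict": verdict}

def settle(claim):                              # deterministic step
    outcome = "paid" if claim["verdict"] == "approve" else "queued_for_human"
    audit("settle", {"outcome": outcome})
    return {**claim, "outcome": outcome}

def run_process(claim, llm):
    return settle(agent_assess(validate_input(claim), llm))

result = run_process({"id": "claim-7", "amount": 320.0}, lambda prompt: "approve")
print(json.dumps(AUDIT_LOG, indent=2))          # the record a regulator would ask for
```

The "as much or as little AI as you want" dial is simply how many steps look like agent_assess versus validate_input.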
The blended camp, where the foundation model labs are quietly converging despite their public positioning, has its clearest articulation in Satya Nadella's Looking Ahead to 2026 essay, published 29 December 2025. Nadella framed the engineering challenge as moving from models to systems — we are now entering a phase where we build rich scaffolds that orchestrate multiple models and agents; account for memory and entitlements; enable rich and safe tool use. His framing of AI as cognitive amplifier rather than substitute, and of the engineering problem as real-world impact rather than model capability, is the closest a major US lab CEO has come to the workflow camp's architectural position. Microsoft's Foundry — with over ten thousand customers using more than one model, and an integrated control plane the company frames with the argument that trust but verify doesn't scale without tooling — is the operational expression of the blended view.
What the global lens reveals. The runtime-versus-design-time tension looks different from outside the US. Mistral's enterprise positioning, anchored in Mistral Forge launched at Nvidia GTC on 17 March 2026, is structurally a design-time bet: enterprises train proprietary models on their domain data, deploy them in their own infrastructure, and own the resulting system outright. Mensch's framing during the Forge launch was direct: most enterprise AI projects don't fail because of bad technology; they fail because the AI was never built for your business. The European Space Agency, Singapore's DSO and HTX defense agencies, and the Francophone African governments adopting Mistral are choosing this path explicitly — not because runtime reasoning is unappealing in principle, but because the regulatory and sovereignty calculus in their environments rules out the architectures the US labs are selling.
The Chinese voices in this tension are different again. DeepSeek's open-weights position, which I'll develop at length later, structurally reframes the question: when the model itself is open and inspectable, runtime reasoning is no longer a vendor-controlled black box, which softens one of the design-time camp's strongest objections. The architecture of reasoning at runtime against an open, auditable model is genuinely different from reasoning at runtime against a proprietary, opaque API — and the US discourse rarely distinguishes between them.
Tension Two: Vendor or in-house — and through what sovereign lens
This is the tension where the foundation-model-lab CEOs lose control of the narrative and the senior technology officers inside the world's largest regulated enterprises take it back.
Lori Beer, Global Chief Information Officer at JPMorgan Chase, manages a $19.8 billion annual technology and AI budget. In a Fortune feature published 29 April 2026, Beer's position on agentic AI vendor strategy was unambiguous: one clear certainty when it comes to JPMorgan's agentic AI strategy is that these tools won't run through a third-party vendor. This is going to be critical, because it's the underlying flow of how we do business.
Beer's framing of the underlying engineering problem is the cleanest articulation of agentic governance from the buyer side that any major bank CIO has put on the public record. AI agents will change the way one thinks about work, the tasks to complete that work, how to break those tasks down, the tasks the bank is comfortable automating, the tasks that require human reflection, and then the proper technology ecosystem with the proper security, resiliency, and controls. On agent permissions: In HR, a human has broader license to see JPMorgan employee data than an agent. You don't want them to go outside the bounds of the specific tasks that they can do, because they don't have the same thinking a human does. And the observation that names the structural shift: Agent-to-agent interactions are very different than traditional system-to-system interactions, and so there's a huge uplift we're thinking about today that we need to be there to get to that truly agentic autonomous world.
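Beer's permissions point reduces to a simple intersection rule: an agent acting for a human principal gets only the overlap between that principal's entitlements and the narrow scope of its assigned task. The sketch below uses hypothetical names and permission sets to make the rule concrete.

```python
# Illustrative entitlement sketch (hypothetical names and permissions): an agent
# never inherits its human principal's full license, only the intersection of
# that license with the narrow scope of its assigned task.
HUMAN_PERMISSIONS = {
    "hr_manager_alice": {"read_employee_profile", "read_salary", "edit_salary", "approve_leave"},
}

AGENT_TASK_SCOPES = {
    "leave_approval_agent": {"read_employee_profile", "approve_leave"},
}

def agent_permissions(principal: str, agent: str) -> set:
    return HUMAN_PERMISSIONS.get(principal, set()) & AGENT_TASK_SCOPES.get(agent, set())

def authorize(principal: str, agent: str, action: str) -> bool:
    allowed = action in agent_permissions(principal, agent)
    # Every check is logged so agent-to-agent chains stay auditable.
    print(f"audit: agent={agent} principal={principal} action={action} allowed={allowed}")
    return allowed

authorize("hr_manager_alice", "leave_approval_agent", "approve_leave")   # True
authorize("hr_manager_alice", "leave_approval_agent", "read_salary")     # False: outside task scope
```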
Marco Argenti, Chief Information Officer at Goldman Sachs, sits in an adjacent posture. On Bloomberg's Odd Lots podcast on 30 March 2026, Argenti described the past eighteen months as the firm's AI work moving from proof of concept to product-level utility. His core observation about the engineering shift inside the bank: Goldman Sachs engineers are no longer primarily in the business of writing code. They are increasingly in the business of supervising the machines that write it for them. Argenti's January 2026 What to Expect From AI in 2026 essay, published by Goldman, named two predictions that have aged well: the gigawatt ceiling — that the binding constraint on AI capability through 2026 is data center power, not model architecture — and mega alliances, the prediction that AI competition would consolidate around a small number of very large partnerships with network effects. Both predictions are now visible in the 2026 record.
The structural pattern at the largest US banks. Combined technology budgets approaching $30 billion annually. Both publicly stating that agentic AI for the underlying flow of how we do business will not run through third-party vendors. Both building proprietary frameworks. Neither has named a single foundation-model-vendor agentic platform as primary infrastructure for core workflows. The implication for the platform vendors: the largest regulated buyers in the United States are not the addressable market the platform pitch decks assume.
The European buyer view differs in instructive ways. European enterprises face a sovereignty calculus US enterprises do not. The EU AI Act compliance regime, the sovereignty requirements imposed by the European Central Bank on critical financial infrastructure, and the broader anti-hyperscaler-dependence sentiment that has emerged across European policy circles in 2025-2026 mean that European banks like BBVA, Santander, ING, and Deutsche Bank are making decisions on a different timeline and with different default assumptions than their US counterparts. BBVA was named in the OpenAI Frontier launch as a pilot customer, but the European banking sector has been more cautious overall about adopting US-vendor agentic platforms for core operations. The Mistral-Accenture multi-year strategic collaboration announced 26 February 2026 — Accenture's Mauro Macchi described Mistral as offering world class performance with the complete ownership that Mistral AI's technology offers enterprises — is a structural alternative European enterprises are now able to choose. Accenture itself is a customer of Mistral.
The Asian buyer view differs again. DBS Bank's Piyush Gupta has spoken publicly about Singapore's positioning as a regional AI hub, and DBS has been one of the more aggressive Asian banks in its agentic AI rollout — but on Singapore-government-aligned infrastructure. Mitsubishi UFJ in Japan has taken a more conservative approach, with explicit emphasis on Japanese-language model performance and domestic data residency. Indian banks, advised by the major Indian system integrators (TCS, Infosys, Wipro), are pursuing agentic AI deployments that align with the broader India Stack digital public infrastructure approach — a fundamentally different philosophy from either the US bank in-house approach or the European sovereignty-vendor approach. The Asian patterns are not a single pattern; they vary by national regulatory environment, by domestic AI capability, and by the geopolitical alliance structure each country is navigating.
The deeper observation is that the vendor versus in-house question is not actually a single question. It is at least four: which vendor (US lab, US workflow platform, European sovereign lab, Asian platform), in which sovereign jurisdiction (US-aligned, EU-aligned, Chinese, Indian, sovereign Gulf, sovereign African), under which regulatory regime, and with which long-term geopolitical bet. The CIO who answers all four correctly in 2026 is making a strategic choice, not just a procurement choice. The CIO who answers without realizing the questions are connected is making a default choice.
Tension Three: Are foundation model vendors eating SaaS, or are SaaS vendors eating foundation models
This is the most contested commercial tension in the landscape, and the equity markets have already begun pricing the resolution. The question is whether the resolution will hold.
The disintermediation thesis. When Anthropic launched Claude Cowork on 12 January 2026, the release wiped roughly $285 billion in market value from legal-technology and data companies in a single trading session. Thomson Reuters fell sixteen percent. RELX fell fourteen percent — its steepest single-day decline since 1988. Wolters Kluwer fell thirteen percent. By February, the iShares Expanded Tech-Software Sector ETF had dropped more than twenty percent year-to-date. Salesforce was down roughly thirty percent. Adobe was down twenty-seven percent. ServiceNow took a twenty-three percent hit on the day of Claude Cowork's launch alone.
Sam Altman, speaking at the India AI Impact Summit 2026 in New Delhi later that month, was direct about what was happening: It is totally true that software is now far easier to create than ever before, and I'm sure that will be quite bad for some software companies, but I think a lot of software companies have a value proposition that is quite different. J.P. Morgan Research, on the same selloff, attributed the move to broken logic and described the worst-case investor pricing as software is dead. Analysts framed the underlying shift as Software as a Service to Service as Software — agents severing the traditional link between corporate headcount and software spending.
The SaaS counter-position has multiple distinct voices. Marc Benioff at Salesforce framed Agentforce as what AI was always meant to be, and made a contrarian commercial bet that the financial press has not fully digested. While most major US tech CEOs in late 2025 and early 2026 were warning that agentic AI would absorb entry-level white-collar work, Benioff announced on 25 April 2026 that Salesforce would hire one thousand new graduates and interns to ride the AI exponential: They said AI would kill entry-level jobs. Meanwhile these grads & interns are building it — powering Agentforce & Headless360 at Salesforce. The architectural response to disintermediation is Salesforce Headless 360, announced 16 April 2026: Welcome Salesforce Headless 360: No Browser Required! Our API is the UI. Entire Salesforce & Agentforce & Slack platforms are now exposed as APIs, MCP, & CLI. The bet is that SaaS becomes the API surface foundation model agents call into, and the value accrues to whoever owns the data and the workflow.
Bill McDermott at ServiceNow has taken a structurally similar but rhetorically distinct position. The Q4 2025 earnings call, in late January 2026, framed ServiceNow's strategy as building the AI control tower for business reinvention so enterprises can operate securely in an agentic AI world. The company's expansion into CRM territory, traditionally Salesforce's domain, is the operational expression of the bet that the workflow platform with the broadest enterprise integration footprint becomes the orchestration layer that agentic AI plugs into rather than replaces.
The platform CEOs in the workflow-first camp — Calkins, Trefler, Freund — make a different version of the disintermediation defense. Their argument is not that workflow platforms remain the system of record despite agentic disruption; it is that workflow is itself the missing infrastructure that agentic AI requires to be useful at scale. Trefler's Q1 2026 earnings call statement: Pega's Blueprint AI helps enterprises reimagine their businesses while Pega's powerful workflow engine provides the harness that ensures predictable outcomes. Freund's positioning, stated more concisely on Camunda's website: AI agents are powerful, but without orchestration, they lack the coordination, accountability, and reliability needed for real-world, business-critical processes. The implicit response to the disintermediation thesis: it is not workflow platforms that get disintermediated by agentic AI; it is the agentic AI deployments without workflow infrastructure that fail in regulated production.
The European workflow voice that the US press underweights. SAP, headquartered in Walldorf, Germany, and led by Christian Klein, has been quieter than the US platform CEOs but is making structurally similar bets. SAP's Joule platform and the agentic capabilities being built into S/4HANA are being positioned to European enterprises as the alternative to foundation model vendors who don't understand European data residency, sovereignty, and AI Act compliance requirements. Klein's voice does not feature as prominently in the US tech press as Benioff's or McDermott's, but the European enterprise software market is roughly half of SAP's revenue, and the strategic position he is staking out — workflow platform plus sovereign-cloud option plus integrated AI agents — is the mainland European answer to the question the US platforms are answering with Headless 360 and AI Control Tower.
The deeper observation about the disintermediation thesis. The thesis as currently priced by US public markets is essentially that US foundation model vendors eat US SaaS vendors. Outside the US, the thesis looks different. European enterprises are less invested in the US SaaS sector and less worried about its disruption. Chinese enterprises do not run on US SaaS in the first place — they run on Alibaba, Tencent, ByteDance, and Huawei platforms, where the agentic AI integration is happening through Chinese foundation model vendors (Qwen, DeepSeek, Doubao) that are not subject to the same disintermediation pressure because they are part of integrated technology stacks rather than competing across them. Indian enterprises are still in the early stage of cloud-and-SaaS adoption that the US went through a decade ago — the disintermediation question for them is not will agentic AI eat SaaS, but will SaaS even reach the maturity stage in India before agentic AI replaces it. The thesis the US markets are testing is not the global thesis. The global thesis is more nuanced and the resolution will not look the same in every market.
Tension Four: The lock-in problem, the protocol convergence, and the open-source counter-architecture
A subtler tension running underneath the disintermediation debate is the question of architectural neutrality — whether enterprises are making conscious lock-in decisions or default ones, and whether the open-source and open-protocol counter-architectures are mature enough to make the conscious choice viable.
The neutrality argument from the closed-platform camp. Sundar Pichai's Cloud Next 2026 keynote framed Google's competitive positioning explicitly as anti-lock-in. The pitch named walled garden competitors who own your models, your data, and your agents — a not-very-subtle reference to Microsoft's Azure-OpenAI integration. Google's Universal Commerce Protocol, announced at the National Retail Federation's Big Show in mid-January 2026, was built together with industry leaders Shopify, Etsy, Wayfair, Target and Walmart, and endorsed by 20+ more, and Pichai framed it as compatible with existing industry protocols like Agent2Agent, the Agent Payments Protocol, and Model Context Protocol. The implicit positioning: open protocols prevent foundation-model-vendor lock-in, and Google is in the open-protocol camp.
The lock-in counter-observation from the orchestration side. OpenAI's Frontier supports multi-vendor agents — Fidji Simo, OpenAI's CEO of Applications, said at the Frontier launch: Frontier is really a recognition that we're not going to build everything ourselves. Frontier supports agents from OpenAI, the enterprise itself, and third parties including Google, Microsoft, and Anthropic. The architectural posture is collaborative on the model layer; it is less collaborative on the orchestration layer, where Frontier itself is the destination. Frontier being framed as the semantic layer rather than a semantic layer is a lock-in story even when the underlying model selection is open.
The open-source counter-architecture has matured significantly in late 2025 and 2026. The Model Context Protocol, originally an Anthropic specification, became an open standard supported by every major lab and platform within nine months of its release. Agent2Agent (A2A) followed a similar trajectory. The Agent Payments Protocol, the Universal Commerce Protocol, and various other open agentic protocols are now part of the standard enterprise vocabulary. An enterprise that builds on MCP-compliant tools can swap underlying models more easily than it could a year ago. That is a real change.
But the protocol convergence is not yet sufficient for orchestration neutrality. An enterprise that builds on a vendor's orchestration platform — Frontier, Agentforce, ServiceNow's AI Control Tower, Pega's Agentic Process Fabric, Camunda's orchestration cluster — inherits dependencies that no current protocol abstracts away. The protocol layer covers tool calls and agent-to-agent communication. It does not cover the data graph, the agent registry, the policy engine, the audit ledger, or the orchestration runtime itself. The honest reading of 2026 is that the protocols are sufficient for tool-level interoperability and not yet sufficient for orchestration-level neutrality.
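What tool-level interoperability concretely buys is visible in a few lines. The sketch below assumes the official MCP Python SDK (the `mcp` package) and a hypothetical claims tool: one enterprise function exposed over the protocol, discoverable and callable by any compliant client (Claude Code, an IDE agent, or an in-house orchestrator). Note what is absent: nothing here covers the data graph, the agent registry, the policy engine, the audit ledger, or the orchestration runtime, which is exactly the gap described above.

```python
# A hypothetical claims tool exposed over the Model Context Protocol,
# using the official MCP Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("claims-tools")

@mcp.tool()
def lookup_claim(claim_id: str) -> dict:
    """Return the current state of a claim from the system of record."""
    # Stand-in for a real system-of-record query.
    return {"claim_id": claim_id, "status": "open", "amount": 320.0}

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; compliant clients discover lookup_claim automatically
```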
This is where the open-source position carries genuine weight. Camunda's Freund has been making the argument that BPMN as an ISO-standard process notation, combined with open-source orchestration runtime, gives enterprises a level of architectural neutrality that proprietary platforms cannot match. Embed agents directly within BPMN process models — agents don't replace processes, they enhance them, Camunda's positioning reads. Use deterministic flows for predictable behavior, and delegate to agents when AI-driven reasoning adds value. The Camunda integration with LangChain, AWS AgentCore, and the major MCP-compliant frameworks gives developers access to agent concepts (memory, LLMs, tools, agent loops) without committing to a single vendor's orchestration model. Forrester named Camunda a Strong Performer in its Q3 2025 Digital Process Automation Wave; Gartner placed Camunda as a Visionary in the Business Orchestration and Automation Technologies Magic Quadrant. The recognition matters because it signals analyst acknowledgment that the open-orchestration alternative is mature enough to be evaluated alongside the proprietary platforms, not as a separate category.
The DeepSeek dimension. Beyond protocols and orchestration runtimes, the open-source counter-architecture in 2026 has gained a foundation-model dimension that did not exist in 2024. DeepSeek's release of V3 in December 2024 and R1 in January 2025 produced what Marc Andreessen called AI's Sputnik moment — a Chinese open-weights model that matched OpenAI's reasoning capabilities at roughly one one-hundredth of the cost. The trajectory continued through 2026: DeepSeek V4, released 24 April 2026, is a 1.6-trillion-parameter Mixture-of-Experts model with a one-million-token context window, available under open license, runnable on enterprise infrastructure that supports the hardware requirements. The implication for the lock-in conversation is structural: an enterprise that wants to avoid US foundation model lab dependency now has, for the first time, a credible open-weights frontier-grade alternative. That option did not exist in 2024.
The cultural and intellectual position behind DeepSeek matters as much as the technical one. Liang Wenfeng, DeepSeek's founder, in a January 2026 interview with the Chinese media outlet An Yong (translated and republished by The China Academy), articulated a position that the US press has had difficulty processing: Because we believe the most important thing right now is to participate in global innovation. For years, Chinese companies have been accustomed to leveraging technological innovations developed elsewhere and monetizing them through applications. But this isn't sustainable. This time, our goal isn't quick profits but advancing the technological frontier to drive ecosystem growth. And the deeper claim: We believe that with economic development, China must gradually transition from being a beneficiary to a contributor, rather than continuing to ride on the coattails of others. That is not a defensive statement. It is a strategic claim about China's role in global technology development, made by a former hedge fund manager who has assembled one of the most efficient AI labs in the world.
The intellectual position is a refutation of the closed-source-as-moat thesis: In disruptive technology, closed-source moats are fleeting; the real competitive advantage lies in the team's growth and innovative culture rather than keeping code proprietary. That position is increasingly defensible — Andreessen's Sputnik framing, Alexandr Wang at Meta describing DeepSeek as on par with the best American models — and it is reshaping the global AI conversation in ways the US lab CEOs have incentive to downplay.
Hugging Face's Clem Delangue holds a parallel position from a different geography. Delangue has been, since 2023, the most consistent public voice arguing that open infrastructure is the foundation of trustworthy AI deployment in regulated and sovereign contexts. Hugging Face's role as the central hub for open-weights models — DeepSeek, Mistral, Qwen, Llama, Gemma, and many specialized fine-tunes are all hosted there — has made it the de facto neutral ground of the open AI ecosystem in 2026. Harrison Chase at LangChain has played a similar role for open agentic frameworks; LangGraph as the reference open orchestration framework has matured into something that genuinely competes with proprietary alternatives, and Camunda's integration with LangChain is one of several signals that the open agentic stack is converging into something coherent.
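For readers who have not seen the open agentic stack in code, a minimal LangGraph graph shows the shape: typed state, nodes, and a conditional edge that routes uncertain cases to a human. This is a sketch assuming the `langgraph` package; the routing logic is hypothetical and the model call is stubbed out, so no provider is implied and the node internals can call any model behind any protocol without changing the graph.

```python
# Minimal LangGraph sketch: hypothetical claim-routing logic, stubbed model call.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ClaimState(TypedDict):
    claim_id: str
    verdict: str

def assess(state: ClaimState) -> dict:
    # Stand-in for a model call; swap any provider behind this node.
    return {"verdict": "review" if state["claim_id"].endswith("9") else "approve"}

def route(state: ClaimState) -> str:
    return "human_review" if state["verdict"] == "review" else "settle"

def human_review(state: ClaimState) -> dict:
    return {"verdict": "approved_by_human"}

def settle(state: ClaimState) -> dict:
    return {"verdict": state["verdict"] + ":settled"}

graph = StateGraph(ClaimState)
graph.add_node("assess", assess)
graph.add_node("human_review", human_review)
graph.add_node("settle", settle)
graph.add_edge(START, "assess")
graph.add_conditional_edges("assess", route, {"human_review": "human_review", "settle": "settle"})
graph.add_edge("human_review", END)
graph.add_edge("settle", END)

app = graph.compile()
print(app.invoke({"claim_id": "claim-42", "verdict": ""}))
```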
The open-source position is no longer a fringe position in 2026. It is a credible architectural choice for enterprises that want to avoid lock-in, for sovereign jurisdictions that want to avoid US dependency, for regulated industries that want auditability of model weights, and for any organization that prefers compounding internal capability over compounding vendor dependency. The choice is not free — it requires more engineering investment than the proprietary path — but it is now a real choice. That was not true in 2024.
Tension Five: Build versus buy, with the system integrators as the deciding voice
The traditional build-versus-buy question in enterprise software has a new variant in the agentic era. The question is no longer should we build the application or buy it from a SaaS vendor. It is should we build the agentic orchestration layer in-house, or buy it from a vendor whose commercial incentives may not align with ours. And in 2026, the system integrators advising on that question have become a distinct voice with its own commercial positioning.
The bank-CIO answer for the largest regulated US enterprises — Beer at JPMorgan, Argenti at Goldman — was the same: build the orchestration layer in-house, with the data graph and agent identity as proprietary infrastructure. Below that scale, enterprises are buying. The buy-side decision is increasingly mediated by the system integrators, who are no longer neutral advisors.
Andy Jassy at AWS has positioned AWS Bedrock and AgentCore as the build-your-own-orchestration backbone for the in-house camp. Bedrock provides multi-model access; AgentCore provides managed runtime, memory, session management, tool access, identity, and observability. The pitch is that an enterprise can build proprietary agentic infrastructure on AWS without committing to a single foundation-model-vendor's orchestration platform. The deeper commercial logic: AWS is paid for the infrastructure substrate regardless of which models or which orchestration patterns the customer chooses, which aligns AWS commercial incentives with customer architectural neutrality.
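A sketch of what foundation-model APIs as plug-in components looks like on the build side, assuming boto3 and Bedrock model access are already configured. The model identifier shown is an example; the orchestration, policy, and audit code surrounding this call stays in the enterprise's own repository rather than in a vendor's platform.

```python
# Hedged sketch: Bedrock's Converse API as a swappable component inside
# in-house orchestration. Assumes AWS credentials and model access exist.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_model(prompt: str, model_id: str = "anthropic.claude-3-5-sonnet-20240620-v1:0") -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.0},
    )
    return response["output"]["message"]["content"][0]["text"]

# Routing, approval thresholds, and audit live in the caller's own code,
# so model_id can change without re-platforming the workflow.
```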
The Indian system integrators have emerged as a distinct voice. Infosys, TCS, Wipro, HCLTech, and Cognizant collectively employ over a million technology professionals globally. In 2026, they are advising a meaningful fraction of the Fortune 500 on agentic AI deployment, and they are doing it from a position that the major US consultancies cannot match: deeper India-based delivery capacity, lower cost structures, and increasingly their own platform investments. Infosys's Topaz platform is being equipped with Gemini Enterprise across more than 100,000 Infosys developers globally, per Google's Cloud Next 2026 announcements. HCLTech is launching a Gemini Enterprise Business Unit. TCS and Wipro have parallel arrangements. The Indian SIs are simultaneously delivery partners to the US labs and platforms, infrastructure builders themselves, and increasingly the buyers' chief operational advisors. Their voice in 2026 is louder than the US discourse acknowledges.
The data-platform play. Snowflake's $200 million multi-year partnership with Anthropic, signed December 2025, makes Claude available through Snowflake's platform with the data staying in Snowflake's governed environment. Databricks's pitch — model registry, agent governance, lineage tracking, integrated MLOps — is structurally adjacent to what Credo AI offers as a horizontal AI governance platform but lives inside the Databricks data and compute environment. The data platforms, like the hyperscalers, are positioning themselves as the substrate underneath whichever orchestration layer the customer chooses, and they are aligned commercially with neutrality.
The honest reading of 2026. For an enterprise above some threshold of scale and regulatory exposure, the build-versus-buy question now resolves toward building the agentic orchestration layer in-house on cloud-substrate infrastructure, with foundation-model APIs as plug-in components rather than orchestration commitments. For an enterprise below that threshold, it resolves toward buying from the vendor whose commercial incentives most closely align with the enterprise's. The threshold sits somewhere around the regulated mid-market: a regional bank with $50-200 billion in assets, a healthcare system with twenty-plus hospitals, a Fortune 500 manufacturer with material data residency obligations. Above that threshold, building dominates. Below it, buying dominates. The system integrators advising the buy-side decisions are increasingly platform-aligned, and the enterprises that do not interrogate the alignment are accepting it as a default.
Tension Six: Sovereignty, regulation, and the political divide
The architectural tensions above are all technical or commercial. There is a political tension running underneath the entire 2026 agentic AI conversation that most CEO commentary avoids, and that this companion piece will surface honestly.
The Anthropic-Pentagon schism is the cleanest contemporary example. In February 2026, Dario Amodei refused the United States Department of Defense's request to remove contractual restrictions prohibiting the use of Claude for mass domestic surveillance and fully autonomous weapons. The DoD subsequently designated Anthropic a supply-chain risk and the Trump administration ordered US agencies to stop using Claude. Multiple organizations filed amicus briefs supporting Anthropic; the Pentagon denied interest in the contested use cases but insisted on unrestricted access. The dispute remains active.
The substantive disagreement under the procedural one is genuine. One view, articulated in Amodei's January 2026 essay The Adolescence of Technology, holds that AI labs have an affirmative responsibility to refuse use cases the lab judges to be high-risk, even when the customer is the federal government. The opposing view, held by significant portions of the foreign policy and defense community, is that domestic AI labs declining to support US government use cases creates a strategic asymmetry that benefits authoritarian competitors who face no such constraints. Amodei's entente strategy framing — a coalition of democratic nations using advanced AI systems in military applications to achieve a decisive advantage — implicitly accepts some military applications while drawing a line at others. The line itself is contested.
Sam Altman's posture on the political question has been more accommodating to government customers. OpenAI has expanded its Pentagon work, and Altman has spoken publicly about US-China AI competition in terms that frame government partnership as essential. The substantive disagreement with Anthropic has commercial consequences: OpenAI is now more politically aligned with the current administration than Anthropic is, and that alignment is showing up in federal contracts and in regulatory posture.
The deeper structural observation underneath the Anthropic-DoD dispute, and underneath the broader political-divide tensions developed throughout this section, is that AI sovereignty in 2026 is fundamentally a chokepoint question. Grounded in the chokepoint and high-risk dependency dynamics I have developed at length in Dirty Wisdom, the AI capital and capability landscape is, in many ways, a chokepoint story working itself out at speed. Nvidia's GPU supply is a chokepoint. TSMC's advanced node manufacturing is a chokepoint. The US export controls on advanced chips are deliberate chokepoint creation. The hyperscaler concentration is a chokepoint. The US AI lab concentration in OpenAI, Anthropic, and Google is a chokepoint that DeepSeek's open-weights release was, in part, a deliberate Chinese response to. The Anthropic-Pentagon schism is itself a chokepoint dispute — the federal government attempting to remove use-case restrictions on a model it has structurally come to depend on, and the lab refusing because the dependency runs both ways. Every sovereign capital response that follows in this section can be read as deliberate chokepoint diversification or alternative chokepoint construction, depending on which jurisdiction is making the move.
Mark Carney's Davos 2026 speech introduced a register the US discourse rarely operates in. The Canadian Prime Minister, a former Bank of Canada and Bank of England governor, made what may be the most consequential political statement on AI sovereignty by a G7 leader in 2026: We're co-operating with like-minded democracies to ensure that we won't ultimately be forced to choose between hegemons and hyperscalers. The framing carries weight because Carney is not a typical politician on technology issues. He has articulated a positive vision for what digital sovereignty means in a middle-power context — Canada has what the world wants. We are an energy superpower, and our compute capacity tracks electricity. Data centres already consume about 1.5% of worldwide electricity, growing at 12% annually with AI as the primary driver. The Canada Strong Fund, announced 27 April 2026 with an initial $25 billion contribution, is the operational expression of the vision. Whether the strategy works is contested — domestic critics including the Council of Canadian Innovators have argued that the Carney government's focus on infrastructure is decoupled from the founder-friendly conditions that would let Canadian AI startups actually scale — but the political articulation of digital sovereignty as a middle-power strategic concern is genuinely new in the global discourse.
Carney's deeper observation, made multiple times across early 2026: that the race for foundational large-scale models is essentially lost outside the US and China — Only two countries — the United States and China — can muster the hundred billion dollars required to develop foundational AI models competitive with GPT-5 or its successors — and that the question for middle powers is not how to win that race but how to capture value from AI inference and deployment without ceding sovereignty to either hegemonic bloc. That is a structurally different framing from the US discourse, which treats the foundational model race as the central question. For Carney, the central question is what comes after the foundational race is settled.
The European political voice runs along similar lines but with sharper sovereignty emphasis. Emmanuel Macron's positioning of France as the European AI champion — anchored to Mistral and to the broader European compute infrastructure investment — explicitly frames European AI strategy as a third pole between US and Chinese hegemony. The European Union AI Act, in force through 2026, is the concrete regulatory expression of the philosophy. European enterprises and governments operating under the AI Act face compliance requirements that US foundation-model vendors must accommodate — and the Mistral pitch, refined throughout 2026, is that Mistral was built for those requirements from inception while US vendors are retrofitting them.
The African Union position is the political voice the US tech press most consistently misses. The AU's Continental AI Strategy, adopted in 2024 and now in its first implementation phase (2025-2026), positions AI as a strategic asset pivotal to achieving the aspirations of Agenda 2063 and the Sustainable Development Goals. Paul Kagame, who led the AU institutional reform process from 2016 through early 2024, has been Rwanda's most articulate voice on AI as a development priority — Rwanda hosted the Global AI Summit on Africa in early 2025. The AU-Google MoU signed 17 February 2026 is a direct attempt to advance African sovereign AI capacity, with explicit emphasis on local infrastructure and data residency. The deeper observation made repeatedly across African policy commentary in 2025-2026 is the concern about digital colonialism — that global foundation models trained on data extracted from African populations, deployed via foreign infrastructure, with no continental ownership of the resulting capability, repeat historical extraction patterns. The struggle for data sovereignty, as one analytical piece from late 2025 framed it, risks repeating historical patterns of extraction, this time through digital colonialism. The framing is rare in US discourse and substantive in African policy circles.
The South American voice is emerging more slowly but is genuine. Brazil's Lula government has pursued AI regulation that draws structurally on the EU AI Act, with an emphasis on social rights and worker protection. Mercado Libre, the largest Latin American e-commerce platform, has been one of the more aggressive Latin American adopters of agentic AI in customer service and operations, and its public positioning on AI deployment has been pragmatic — neither anti-foundation-model-vendor nor sovereign-AI-maximalist, but focused on operational gain. The fuller South American AI policy conversation is still developing through 2026, and the most substantive voices have not yet emerged into the global English-language discourse.
The labor question is also political. The prevailing tech CEO narrative through late 2025 and early 2026 was that agentic AI will absorb significant portions of entry-level white-collar work. Amodei's Davos 2026 statement that some Anthropic engineers had stopped writing code was widely covered. Mensch publicly disagreed: overstated marketing. Benioff put substantial commercial weight behind a contrarian position with the April 2026 hire of one thousand new graduates. The labor question has stakes that go beyond any single firm's hiring strategy. It will be re-litigated through 2026 and 2027 in ways that the technical conversation has not yet absorbed.
The energy and compute ceiling. Argenti's gigawatt ceiling prediction is now broadly shared across the infrastructure CEOs. Alphabet's reported $185 billion 2026 capital expenditure plan is more than double its 2025 total. The major hyperscalers are racing for compute capacity. The political question underneath the infrastructure question is which alliance an enterprise binds itself to — and how reversible the decision is when the alliance configuration changes, as it inevitably will.
Tension Seven: The sovereign-versus-capitalist governance question
The integrative tension that pulls the architectural and political threads together is the question of which governance model — state-directed, capitalist-competitive, or rights-protective — produces the AI systems that serve their populations best. The honest answer is that no single model has yet demonstrated unambiguous superiority, and the trade-offs each model accepts are visible to anyone willing to look at them clearly.
The Chinese state-directed model has produced, in DeepSeek, an outcome that the capitalist-incentive structure could not have produced. There was no commercial reason for any US foundation model lab to release a frontier-grade model under open-weights license; doing so would have eroded the moat the lab depended on for valuation. DeepSeek's release was made possible by a different incentive structure — Liang Wenfeng's stated goal of advancing global frontier capability rather than capturing commercial returns, combined with the political environment that rewards Chinese AI labs for contributing to national technological leadership. The result was a structural reset for the global AI market that benefits non-US enterprises trying to avoid US lab dependency. The Chinese model also produced, in DeepSeek's training efficiency, capabilities at a fraction of the compute cost the US labs were spending. Liang's January 2026 interview observation that China's models likely require twice the compute power to match top global models due to structural and training dynamics gaps is the operating assumption that drove DeepSeek's emphasis on algorithmic efficiency over brute-force scaling.
The trade-off the Chinese model accepts is constrained individual-firm experimentation. State direction can compel data sharing, standardize protocols, allocate compute strategically, and overrule short-term profit motives in service of long-term capability building. It cannot, as easily, produce what Salesforce just did: a contrarian commercial move against the prevailing technology narrative, made on a single CEO's bet. The Chinese ecosystem produces strategic coherence at the national level and slower individual-firm iteration. It also produces censorship and surveillance applications that the same state direction makes possible, which is the trade-off the human-rights critique of the Chinese model focuses on.
The US capitalist-competitive model produces faster individual-firm innovation, more diversity of approach, more willingness to fail publicly, and the kind of contrarian moves — Benioff hiring a thousand graduates against the prevailing narrative, Anthropic refusing the Pentagon, OpenAI Frontier launching despite the disintermediation risk to the SaaS sector — that strategic state-direction would smother. The model also produces strategic incoherence at the national level: Microsoft's commercial logic, Google's commercial logic, OpenAI's commercial logic, and Anthropic's commercial logic do not add up to a single coherent national AI strategy, and the federal government's posture toward the labs is contested between the administration, the Pentagon, and the regulatory agencies.
The trade-off the US model accepts is that strategic alignment at the national level happens, when it happens, through coordination among private actors that have no obligation to pursue alignment. The Anthropic-DoD schism is the structural symptom: a major US AI lab is publicly defying a Pentagon contracting demand because the lab's commercial and ethical incentives diverge from the executive branch's strategic incentives. In the Chinese system, that schism would not occur because the lab would not have the autonomy to refuse. In the European system, it would not occur because the regulatory framework would have already constrained both sides. In the US system, it occurs because the system is built to allow it — which is a feature in some readings and a bug in others.
The European rights-protective model is the third path, and its trade-offs are different again. European AI strategy under the AI Act and the broader sovereignty agenda is optimized for legitimacy — for the public's confidence that AI deployments will respect rights, due process, and democratic accountability — at the cost of speed. European enterprises adopting AI in 2026 are operating under compliance constraints that US enterprises do not face. The sovereign-AI alternatives Mistral and Aleph Alpha provide are designed for those constraints, which is part of why they are succeeding in European public-sector and regulated-industry deployments. The trade-off is that the European model has not produced a frontier foundation model competitive with the US labs or DeepSeek, and Carney's we cannot win the foundational model race observation applies to Europe as well as Canada. Whether the European choice to optimize for legitimacy at the cost of frontier capability turns out to be wise depends on whether the foundation model race itself remains the central determinant of AI value, or whether — as Carney argues — value migrates to inference, deployment, and sovereign infrastructure.
The African and Latin American models are still emerging. The AU Continental AI Strategy is in its first implementation phase. Brazilian AI regulation is still being shaped. The substantive question for the Global South in 2026 is whether the African and Latin American jurisdictions will be subjects of AI capability built elsewhere or whether they will develop the capability themselves. Kagame's Rwanda has explicitly chosen the second path; the AU Strategy aspires to it continent-wide; Mistral's expansion into Francophone Africa, including the planned R&D center in Rabat and the partnerships with African universities, represents one operational answer that operates outside both US and Chinese hegemonies.
What no single governance model has solved. The pace of AI evolution in 2026 is faster than any of the governance models — state-directed, capitalist-competitive, or rights-protective — is reliably tracking. The Chinese model can coordinate strategically but it cannot stop a Liang Wenfeng from making decisions that disrupt the global market. The US model can produce contrarian innovation but it cannot align its labs around a coherent strategic posture. The European model can optimize for legitimacy but it cannot win the foundational model race. None of the three has yet produced a clear answer to the question of how a society absorbs AI capability that is now compounding faster than institutions can adapt to it.
The honest reading is that the global AI conversation in 2026 is shaped by leaders who have, for their own reasons, decided to act with urgency despite that uncertainty. Some are right. Some will turn out to be wrong. The diversity of approaches is itself a kind of insurance against any single governance model being catastrophically mistaken — which is an argument for, rather than against, the multipolar landscape that has emerged.
What the senior buyers are actually doing
The CEO and political-leader statements get the headlines. The senior technology officers inside the buyer organizations are making the deployment decisions, and their public statements, when they make them, are worth listening to with more weight than the vendor-CEO statements. They are the people running production agentic AI in regulated environments.
Lori Beer at JPMorgan and Marco Argenti at Goldman Sachs, whose statements I quoted earlier, are the clearest US bank examples. The structural pattern — build the orchestration in-house, treat foundation-model APIs as commodity components, focus governance investment on identity, audit lineage, and model risk — is now the default for the largest US regulated buyers.
Andrew Reiskind, Chief Data Officer at Mastercard, has been featured by Credo AI as an example of a major financial institution embedding AI governance directly into the data and product development lifecycle. Mastercard's posture on agentic commerce — the company was an explicit endorser at Pichai's UCP launch — places it in the position of co-defining the protocol layer rather than being a passive consumer of it. The structural pattern at the major payment networks is that they are running agentic AI on top of protocols they helped define, in infrastructure they own, with governance frameworks they built themselves.
The European bank CTOs are operating in a more constrained environment. ING, BBVA, Santander, and Deutsche Bank are deploying agentic AI in 2026 but with sharper sovereignty calculations than US peers. BBVA was named as an OpenAI Frontier launch customer; ING has been one of the more aggressive adopters of Microsoft Azure AI in European banking; Santander's Ana Botín has spoken about AI as a productivity lever while emphasizing the regulatory constraints; Deutsche Bank has been more cautious overall. The European bank pattern in 2026 is not yet won't run through a third-party vendor in the US bank sense, but it is increasingly won't run through a US-only third-party vendor.
The Asian bank CTOs have moved earlier than their European counterparts in some respects. Singapore's DBS Bank, building on the AI programme Piyush Gupta championed as CEO, has been one of the most aggressive Asian banks in agentic AI rollout. Mitsubishi UFJ in Japan has emphasized Japanese-language model performance and domestic data residency. Indian banks are pursuing deployment patterns aligned with the India Stack digital public infrastructure approach. The Asian patterns are not converging into a single pattern — they vary by national regulatory environment, by domestic AI capability, and by the geopolitical alliance structure each country is navigating — but they are collectively running ahead of European deployment in operational scale and behind US deployment in budget.
The structural observation across all the buyer voices. The largest regulated buyers in 2026 are spending tens of billions of dollars on AI and they are spending most of it on infrastructure, in-house engineering, and foundation-model API access — not on third-party agentic orchestration platforms. The vendor CEO statements about the agentic enterprise are aspirational from the buyers' perspective. The buyers are building the agentic enterprise themselves, with the vendors as suppliers of components, not as orchestration owners. This is the most consequential observation in the global discourse, and it is also the one the public CEO discourse most actively obscures.
Think globally, act with urgency, but with strategy
The architecture argument the main companion article advances — cognition belongs to the agent, coordination belongs to the workflow, statistical prediction sits between them, each with its own governance regime — is consistent with what the most credible operators worldwide are publicly saying in 2026. Calkins's light bulb without infrastructure, Trefler's Predictable AI, Freund's blend deterministic and dynamic orchestration with guardrails, Beer's proper technology ecosystem with proper security, resiliency, and controls, Argenti's emphasis on audit trails, hallucination mitigation, and data lineage, Mensch's AI was never built for your business, Liang Wenfeng's participate in global innovation rather than ride on the coattails of others, the AU's technological sovereignty — these are different vocabularies expressing structurally compatible positions. The architecture stands.
What the global lens reveals, and the US-centric lens misses, is that the question of who is building this does not have the answer the loudest voices in the US press present. The foundation model labs are building capability. The agentic platform CEOs are building orchestration. The bank CIOs are building integration. The European sovereign labs are building alternatives. The Chinese open-source labs are building the architectural challenge. The system integrators are building delivery capacity. The political leaders are building the regulatory and sovereignty environment within which everyone else operates. None of them is the whole story. All of them are the whole story.
The pace at which AI capability is evolving in 2026 is faster than any single governance model — Chinese state-directed, US capitalist-competitive, European rights-protective, African development-aligned, Indian digital-public-infrastructure — is reliably tracking. The diversity of approaches is itself a hedge against any single model being catastrophically wrong. A reader who is genuinely trying to make architectural and strategic decisions in this environment is well served by listening across the diversity rather than within a single narrative.
The disposition that the global lens recommends, finally, is the one this companion's title points to. Think globally — because the conversation is multipolar and the most consequential voices are not always the loudest US ones. Act with urgency — because the pace of capability evolution is faster than the deliberative cadences most institutions are built for. But with strategy — because the architectural and sovereignty choices made in 2026 will define operational shape for years, and a default choice made under time pressure compounds into a strategic dependency that is difficult to reverse.
The reader can decide which voices to listen to most carefully. The disagreements are real. The architecture has stakes. The next twenty-four months will produce a clearer answer than the current discourse provides — and the leaders who are wrong, or whose strategic incentives override their architectural judgment, will be visible by 2028. I will be reading the same statements, and quoting from them in subsequent pieces, as the picture continues to develop.
— Pumulo Sikaneta
This companion piece supplements Insight Belongs to the Machine. Decisions Belong to the Human.
Sources include public earnings calls, conference keynotes, podcast interviews, blog posts, government press releases, and policy publications between Q4 2025 and the end of April 2026. Specific sources referenced include but are not limited to: OpenAI's Frontier launch coverage and Sam Altman's India AI Impact Summit 2026 remarks; Dario Amodei's January 2026 essay The Adolescence of Technology and Davos 2026 commentary; Anthropic's response to the US Department of Defense in February 2026; Satya Nadella's Looking Ahead to 2026 post and subsequent earnings calls; Sundar Pichai's Cloud Next 2026 keynote and NRF 2026 remarks; Demis Hassabis's Davos 2026 commentary; Marc Benioff's Dreamforce 2025, Q3 FY26 earnings call, April 2026 hiring announcement, and Headless 360 launch commentary; Bill McDermott's ServiceNow Q4 2025 earnings call; Alan Trefler's PegaWorld 2025 keynote and Q4 2025 earnings call; Matt Calkins's Appian World 2026 keynote and December 2025 Yahoo Finance predictions interview; Jakob Freund's Camunda blog posts and LinkedIn commentary on Camunda 8.9; Lori Beer's Fortune feature on JPMorgan's 2026 AI strategy; Marco Argenti's Goldman Sachs What to Expect From AI in 2026 essay and Bloomberg Odd Lots podcast appearance; Arthur Mensch's Davos 2026 remarks, the Big Technology Podcast interview of 16 January 2026, and the Mistral Forge launch coverage from Nvidia GTC; Liang Wenfeng's January 2026 interview with An Yong (translated and republished by The China Academy) and DeepSeek's V4 launch coverage from 24 April 2026; Mark Carney's Davos 2026 speech, the Canada Strong sovereign wealth fund announcement of 27 April 2026, and the Canadian Sovereign AI Compute Strategy; the African Union Continental AI Strategy and the AU-Google MoU of 17 February 2026; the Accenture-Mistral strategic collaboration announcement of 26 February 2026.
The author's professional work is concentrated in the Pega ecosystem; this disclosure also appears in the appendix of the main companion article. The architectural patterns described here apply across the workflow and BPM platform category, and the agentic platform CEOs are treated as parallel voices throughout.