
2025 Was the Dress Rehearsal. 2026 Is Opening Night.

Stefan's 2026 Brief


Let’s get this out of the way: a lot of what happened in enterprise AI this year didn't work.


Not because the technology wasn't impressive - it is. But because organizations confused "deploying a chatbot" with "transforming the business." They spent billions giving individual workers marginally faster ways to draft emails and summarize documents, then wondered why the P&L didn't move.


This isn't a failure story, though. It's a teachable moment - sort of like when my daughter painted the couch: loved the inspiration to change the color, wrong method of execution. And the lesson is simple: AI as a side project is worthless. AI woven into how the business actually operates is transformative (which is why we built HumanX, in case you forgot…).


2025 was the dress rehearsal. 2026 is when we find out who was paying attention.



The Death of the Pilot

The "pilot purgatory" phenomenon deserves a proper autopsy. Why did so many initiatives stall?


It wasn't the models. GPT-4, Claude, Gemini - they're genuinely capable. The failures were organizational. Generic AI tools couldn't navigate the specific pricing logic of a B2B distributor or the approval hierarchy of a pharmaceutical company. They could write a sales email, but not your sales email. And don't even get me started on the inherently non-deterministic nature of GenAI - leaders need to trust that an AI will give the right answer 50 times in a row, not hallucinate a first-class seat assignment when you protest loudly enough.
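
To put a number on that worry, here's a toy repeatability check. `ask_model` is a hypothetical stand-in for whatever model client you use (hard-coded here so the sketch runs), not a real API:

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    # Stand-in for a real model call (e.g., your provider's chat API
    # pinned to temperature=0). Hard-coded so the sketch runs end to end.
    return "Seat 14C"

def consistency_check(prompt: str, runs: int = 50) -> float:
    """Fraction of runs that produced the most common answer."""
    answers = Counter(ask_model(prompt) for _ in range(runs))
    _, modal_count = answers.most_common(1)[0]
    return modal_count / runs

# 1.0 means the same answer 50 times in a row; anything less is the
# gap that makes leaders nervous about autonomous deployment.
print(consistency_check("What seat is the passenger assigned?"))
```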


Meanwhile, data lived in silos. Pilots launched without access to the ERP, the CRM, or the supply chain system - without the "ground truth" that makes AI useful rather than hallucinatory. And when teams tried to move from advisory bots ("here's a suggestion") to action-taking agents ("I'll handle this"), legal and compliance slammed the brakes. Here's what I heard from friends who lead major enterprises: no framework existed to audit autonomous decisions, so autonomy never happened.
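
For what it's worth, the missing audit framework doesn't have to be exotic. Here's a minimal sketch of what one could look like - the names (`Action`, `AuditLog`) are illustrative, not any real product:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class Action:
    agent: str            # which agent acted
    tool: str             # which system it touched (ERP, CRM, ...)
    inputs: dict          # what it saw
    output: str           # what it did or proposed
    approved_by: str | None = None   # human sign-off, if any
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditLog:
    def __init__(self, path: str = "agent_audit.jsonl"):
        self.path = path

    def record(self, action: Action) -> None:
        # Append-only JSONL: cheap to write, easy for compliance to replay.
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(action)) + "\n")

log = AuditLog()
log.record(Action(agent="invoice-bot", tool="ERP",
                  inputs={"invoice_id": "INV-1042"},
                  output="matched to PO-881, flagged 2% variance"))
```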


The result? Expensive demos that never became operational tools.



From Copilots to Agents

I'm advocating a vocabulary shift from "copilot" to "agent," and it isn't marketing. I want people to see it as a fundamental change in what we're asking AI to do.


A copilot waits for you to ask it something. An agent gets assigned a goal and figures out how to achieve it: breaking the problem into subtasks, reasoning through obstacles, executing across multiple systems. The human shifts from driver to supervisor and validator.
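
To make the distinction concrete, here's a rough sketch of that loop in Python - `plan` and `execute` are hypothetical stand-ins for a real LLM planner and real tool integrations:

```python
def plan(goal: str) -> list[str]:
    # A real agent would have an LLM decompose the goal; hard-coded here.
    return ["pull open invoices from the ERP", "match each invoice to a PO",
            "post the clean matches", "queue mismatches for a human"]

def execute(subtask: str) -> dict:
    # Stand-in for tool calls into the ERP/CRM; always "succeeds" here.
    return {"subtask": subtask, "done": True}

def run_agent(goal: str) -> None:
    """The agent owns the goal end to end; the human supervises the output."""
    for step in plan(goal):
        result = execute(step)
        print("agent:", result["subtask"], "->", "done" if result["done"] else "stuck")

run_agent("reconcile this week's invoices")
```

A copilot would wait for you to type each of those steps as a prompt; the agent runs the whole list and reports back.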


Gartner predicts agentic systems will be embedded in 33% of enterprise software by 2028, up from less than 1% in 2024.1 For those of you who aren’t math majors, that's not incremental growth…that's a rewiring of how enterprise software works (and no, we’re totally not ready for it).


The practical implication: instead of AI helping someone process claims faster, AI processes the claims. Instead of AI suggesting how to reconcile invoices, AI reconciles them. The human reviews exceptions. This is where the ROI finally materializes—not in making individuals 10% more efficient, but in automating entire workflows at 80-90% autonomy.
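
That exception-review pattern is simple enough to sketch. The confidence score and the 0.85 threshold below are illustrative assumptions, not a recommendation:

```python
import random

def score_claim(claim: dict) -> float:
    # Stand-in for a real model's confidence score; random for the sketch.
    return random.random()

def triage(claims: list[dict], threshold: float = 0.85) -> tuple[list, list]:
    """Auto-process high-confidence claims, escalate the rest."""
    auto, review = [], []
    for claim in claims:
        (auto if score_claim(claim) >= threshold else review).append(claim)
    return auto, review

# With a real model the goal is 80-90% of volume landing in `auto`;
# humans see only what falls into `review`.
auto, review = triage([{"id": i} for i in range(100)])
print(f"auto-processed: {len(auto)}, escalated for human review: {len(review)}")
```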



Governance Became Real

Remember when AI ethics was a conference panel topic at HumanX? I do, but as we talked about in Vegas (and it didn't stay in Vegas), it's now a fully empowered compliance officer with a clipboard and a can of whoop-ass.


The EU AI Act moved into phased enforcement in 2025, with the "high-risk" obligations - anything touching employment, credit scoring, education, critical infrastructure - landing in 2026: quality management systems, technical documentation, and human oversight protocols. The fines for non-compliance run up to €35 million or 7% of global revenue.2 If that didn't make you sit up, I'm not sure what will.


And the copyright wars got bloody. The GEMA v. OpenAI ruling in Germany found that training on copyrighted material - and, importantly, the model's ability to reproduce it - constitutes infringement. The court rejected the "text and data mining" exception that many assumed would provide cover.3 Anthropic's $1.5 billion settlement over training data signals that past indiscretions can be paid for retroactively.


The strategic response is what Gartner calls "geopatriation" - fragmenting AI architectures to comply with regional legal regimes.4 One model instance for the EU, trained on GDPR-cleared data. Another for North America. Another for Asia-Pacific. The dream of a single "global AI brain" is dead. What replaces it is messier, more expensive, and legally necessary. I honestly can't quite get my head around how this will even work in practice with GenAI, where determining the provenance of outputted data is super hard - sure glad I started HumanX so someone can tell me.
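
In practice, the plumbing might start as simply as a routing table. The regions and endpoints below are made up for illustration:

```python
# One deployment per legal regime, picked by the user's region.
REGIONAL_DEPLOYMENTS = {
    "EU":   {"endpoint": "https://eu.models.example.com",   "training_data": "gdpr_cleared_v3"},
    "NA":   {"endpoint": "https://na.models.example.com",   "training_data": "na_licensed_v2"},
    "APAC": {"endpoint": "https://apac.models.example.com", "training_data": "apac_licensed_v1"},
}

def route(user_region: str) -> dict:
    """Pick the model instance whose data lineage is legal for this region."""
    deployment = REGIONAL_DEPLOYMENTS.get(user_region)
    if deployment is None:
        # Fail closed: an unmapped region gets no model, not the wrong one.
        raise ValueError(f"no compliant deployment for region {user_region!r}")
    return deployment

print(route("EU")["endpoint"])
```

The routing is the easy part; keeping three training corpora legally separate is where the money goes.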


The Silicon Ceiling

Here is my “keep you up at night” prediction: AI is about to make inequality inside organizations much worse.  To be clear, this isn't just an ethical problem. It's an operational one. The insights and efficiency gains don't come only from headquarters.  And yet…


BCG research shows GenAI usage at 75% among leaders and managers but only 51% among frontline workers.6 The gap isn't solely about capability (we all know executives who are dullards and frontline workers who are stars) - it's mostly about access and training. Executives get expensive AI tools and the support to use them. Frontline workers get nothing, or get tools nobody taught them to use.


The result is a two-tiered workforce. The "AI-empowered" become exponentially more productive. The "AI-deprived" see their relative value plummet. I unfortunately know a number of leaders who would be OK with that. I say it's absolute foolishness. Hoarding AI in the hands of a few caps the rate and quality of innovation a company can achieve; broad access and real training lift both. If you're not actively democratizing AI access (like deploying simple, mobile-first tools designed for frontline tasks), you're building a workforce that's "productive" at the top and stagnant everywhere else.



Reskilling, Not Replacing

The World Economic Forum estimates 170 million jobs will be created by 2030, while 92 million will be displaced.5 Net positive, but the churn is brutal and traditional education can't keep pace. Not to mention this isn't a clean rip-and-replace: the jobs leaving and the jobs arriving won't line up in time, place, or skill set, and that mismatch will be incredibly painful - and most of our leaders aren't acknowledging it.


The response taking shape is "skills-first" hiring. Degree requirements are disappearing from job descriptions. What matters is whether you can do the work, not where you learned to do it. Internal talent marketplaces use AI to inventory existing workforce skills and match them to emerging needs. Instead of firing the data entry clerk and hiring a data auditor, the system identifies adjacent skills and prescribes a learning path.
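
At its simplest, that matching logic is a set difference: inventory what someone has, diff it against what the target role needs, and the gap is the learning path. The role taxonomy below is a toy example, not a real skills framework:

```python
# Toy skills taxonomy: role -> skills it requires.
ROLES = {
    "data auditor": {"sql", "data quality rules", "sampling", "reporting"},
}

def learning_path(current_skills: set[str], target_role: str) -> set[str]:
    """Skills the person still needs before automating their current job."""
    return ROLES[target_role] - current_skills

clerk_skills = {"data entry", "sql", "reporting"}  # adjacent skills already overlap
print(sorted(learning_path(clerk_skills, "data auditor")))
# -> ['data quality rules', 'sampling']
```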


The organizations getting this right are training people for the next job before automating the current one. Preemptive reskilling beats reactive layoffs across every vector: morale, retention, and the practical reality that institutional knowledge is hard to replace.


A paradox worth noting: as AI handles more cognitive tasks, the ability to think without AI is becoming valuable again. Gartner predicts 50% of organizations will require candidates to pass assessments without AI assistance.4 The question isn't just "can you use the tools?" It's "do you understand what the tools are doing?"



What Actually Matters in 2026

Strip away the hype, and here's what's left:


Vertical over horizontal. Generic chatbots had their moment. What works now is deep, function-specific AI—claims processing, invoice reconciliation, supply chain optimization—where the AI can handle most of the volume and humans handle the exceptions.


Governance as competitive advantage. The companies that built robust data lineage and compliance infrastructure can deploy agents where their competitors can't. Being "legally clean" is a moat.


The human question isn't going away. Every efficiency gain from AI creates a decision about what happens to the people who used to do that work. The organizations that accept this truth and actively work to stay ahead of the changes will outperform those that don't.


Hot Take: The CAIO role is temporary (sorry!). As AI becomes inseparable from how companies operate, the need for a dedicated "AI leader" diminishes. It's heading the same direction as "Head of Electricity": important during the transition, irrelevant once the transition is complete.


These are the conversations worth having. Not "will AI change everything?" (yes, obviously) but "how do we make it work without breaking the organization, the workforce, or the law?"


That's what we're building HumanX around: practical answers to hard questions, from people who've actually implemented this stuff. If 2025 was the year of learning what doesn't work, 2026 is the year of proving what does.



1 Deloitte, "Unlocking Exponential Value with AI Agent Orchestration," 2025
2 EU AI Act, Article 99 (Penalties)
3 Regional Court of Munich, GEMA v. OpenAI, November 2025
4 Gartner, "Top Strategic Predictions for 2026," 2025
5 World Economic Forum, "The Future of Jobs Report 2025," 2025
6 BCG, "AI at Work 2025: Momentum Builds, but Gaps Remain," 2025