Convergence is Not the Hard Part

Why the future of AI depends on the social infrastructure we don't have.

I spent a morning listening to Amy Webb deliver her 2026 Convergence Outlook at SXSW. Her session draws people who line up the night before, year after year, and I have always enjoyed watching her onstage.

Webb has retired the word "trend." She's replaced it with "convergence" - the moment when multiple forces collide at the system level, redistribute power, and become difficult to reverse. She mapped ten of them for 2026, grouped under the three she chose to spotlight: Human Augmentation, Unlimited Labor, and Emotional Outsourcing. The scenarios she painted for 2031 were appropriately unsettling. End-stage capitalism in one corner. A contribution credit economy in the other. Choose your fighter.

It's good work. Five forces - technology, economics, geopolitics, demographics, climate - each with their own core shifts and key uncertainties. Webb's team has mapped the operating environment in which all of this convergence happens, or doesn't.

But here's what my brain kept doing while I listened. It kept going back to the 13 points.

Financial anxiety alone can suppress IQ by 13 points. Not lack of education. Not poor nutrition, though that matters too. The cognitive load of worrying about money - the rent check, the co-pay, the grocery math - takes up bandwidth that would otherwise be used for planning, problem-solving, abstract thought. Thirteen points is the difference between a normal day and a fog. And it's happening right now, at scale, in a country that's simultaneously spending $500 billion on AI data centers.

Webb's report documents a workforce hollowing from both ends - junior workers quitting unpurposeful roles, mid-career women forced out by caregiving costs. It describes a stuck-in-place economy where only 11% of Americans moved in 2024. It names the post-reality era. All of this is correct. All of this is well-sourced.

What the report doesn't say - what no futures report quite says - is that these aren't conditions waiting to be solved by convergence. They are the substrate convergence lands on. And that substrate is already fractured.

So when Webb talks about Emotional Outsourcing - AI companions, therapeutic chatbots, systems designed to process and respond to human feeling - I don't hear a technology convergence. I hear a tell. We are building machines to hold the emotional labor because the social systems that were supposed to hold it are gone or were never there. The lonely teenager talking to an AI boyfriend in 2026 is not a technology adoption story. It's a social infrastructure failure wearing a tech costume.

Complex adaptive systems theory tells us that emergence can't be predicted, only recognized. You can't engineer emergence. You can only create conditions where useful patterns are more likely to survive. And you can't do it from the top of a $500 billion compute stack.

Webb called for creative destruction and agency. She quoted Schumpeter. She mentioned she's a long-distance endurance athlete. I wrote that down. Not as trivia - because it's the whole point. Webb has time to train. She has the financial safety to think in decades. That container shapes the guidance. The container matters. It always does.

I agree with the ask. But I keep wondering: active participants using what? The 13-point-IQ-drop parent doesn't have bandwidth for creative destruction. The communities spiraling downward while resource-rich areas compound their advantages aren't going to be shaped by whether AI inference chips run on photonic processors or neuromorphic silicon. They're going to be shaped by whether anyone shows up.

The real infrastructure problem isn't compute or chips. It's that the knowledge of how complex systems work - emergence, feedback loops, non-linear causation, the difference between complicated and complex - is concentrated in a tiny number of people. And the decisions about how to deploy convergent technologies are being made without that knowledge, by people who think in linear cause-and-effect and quarterly returns.

The work isn't prediction. It's perseverance. It's building systems that make it easier for the parent with the 13-point cognitive tax to access the same quality of care, information, and decision-support as the investor reading Webb's report at $10,000 a seat. That's not a technology problem. It's a recognition problem - who we see, what we count, and whether we're willing to build for the ground conditions that already exist instead of the convergence we hope is coming.

So what does this look like inside a corporation? Dave Snowden's Cynefin framework offers a starting posture: probe, sense, respond. Stop writing five-year AI roadmaps as if convergence is a complicated engineering problem with a knowable solution. It isn't. Instead, launch small, parallel, safe-to-fail experiments - test AI deployments in the actual conditions your workers and customers inhabit, not the conditions your strategy deck assumes. Map where your real influence ends and where you're just performing control; most companies will find the boundary is a lot closer than the org chart suggests. And replace your engagement surveys and dashboards with micro-narratives - actual stories from the people your systems touch - because stories surface emerging patterns before metrics do, and they surface them in time to act. The corporation that learns to sit with complexity instead of flattening it into a quarterly plan is the one that might build something worth converging toward.
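The probe-sense-respond posture can even be sketched in code. The following is a toy model, not anything from Snowden or Webb: the probe names, budgets, and thresholds are invented for illustration, and "signals" stands in for whatever micro-narratives and field observations tell you about each experiment. The point it makes is structural: many small capped-cost probes run in parallel, and the response is to amplify, dampen, or keep watching - not to pick one winner up front.

```python
from dataclasses import dataclass, field

# Illustrative sketch of safe-to-fail probes in the Cynefin "complex" domain.
# Names and thresholds are arbitrary placeholders, not a real methodology.

@dataclass
class Probe:
    name: str
    budget: float  # small, capped cost: failure must be survivable
    signals: list = field(default_factory=list)  # observations in -1.0 .. 1.0

    def sense(self) -> float:
        # Average the narrative signals gathered so far; 0.0 if none yet.
        return sum(self.signals) / len(self.signals) if self.signals else 0.0

def respond(probes, amplify_above=0.3, dampen_below=-0.3):
    """Sort probes into amplify / dampen / keep-watching buckets."""
    decisions = {}
    for p in probes:
        s = p.sense()
        if s >= amplify_above:
            decisions[p.name] = "amplify"   # pattern worth growing
        elif s <= dampen_below:
            decisions[p.name] = "dampen"    # shut down cheaply, keep the lesson
        else:
            decisions[p.name] = "watch"     # not enough signal yet
    return decisions

# Usage: three parallel probes, each with field signals attached.
probes = [
    Probe("ai-triage-pilot", budget=5_000, signals=[0.6, 0.4, 0.5]),
    Probe("chatbot-onboarding", budget=5_000, signals=[-0.5, -0.6]),
    Probe("narrative-capture", budget=2_000, signals=[0.1]),
]
print(respond(probes))
```

Notice what the loop does not contain: a five-year plan or a predicted outcome. The portfolio is cheap enough that being wrong about any one probe is information, not catastrophe.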

The work is in the gap.


Beth Rudden

I've spent 25 years building AI systems and training 10,000+ data scientists across 175 countries. Then I watched someone I love get lost in a system drowning in information nobody could trace, verify, or explain. That's when I started building AI that shows its work. Bast AI is pioneering certainty in AI. We ground every answer in ontologies so humans control what AI knows, how it reasons, and where every conclusion came from. 30+ patents and counting. My degrees are in archaeology and anthropology - turns out that's perfect training for understanding how knowledge systems work and how they fail. I build AI that makes human reasoning better, not redundant.