Beyond the AI Hype: Making Generative AI Actionable for Enterprises


Generative AI is dominating headlines, boardroom discussions, and innovation budgets. From marketing copy to code generation, it's being hailed as the most revolutionary technology since the internet. But while enterprise leaders are captivated by the promise of artificial intelligence, few are seeing real returns on their investments.

In fact, most generative AI initiatives fail. Not because the technology isn’t powerful, but because enterprises aren’t prepared to make it actionable.

After years of advising Fortune 500 companies on information architecture and AI adoption, I’ve seen firsthand why so many projects stall out. They begin with hype and ambition, but without structure, governance, or clear value. To make generative AI truly transformative, we need to move past the experimentation phase and start treating these efforts like any other strategic initiative: with defined use cases, measurable outcomes, and enterprise-ready data.

Here’s how.

Why Generative AI Fails in the Enterprise

The common belief is that large language models (LLMs) alone can deliver competitive advantage. But here's a blunt truth: LLMs offer efficiency, not differentiation. Everyone has access to the same foundation models. If you're using them to do what everyone else is doing (generating similar content, automating customer service, summarizing documents), you're not gaining an edge. You're just speeding up standardization.

Competitive advantage comes from your proprietary knowledge, your workflows, your data, and your people, none of which the LLM inherently understands.

So how do you bridge that gap? By integrating your knowledge into the AI through retrieval-augmented generation (RAG). But even RAG isn't a silver bullet. If your content is messy, inconsistent, outdated, or poorly structured, the AI won't be able to retrieve or reason over it effectively.
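To make the RAG idea concrete, here is a deliberately minimal sketch of the retrieval half of the loop: score your own content against a question, then build a prompt grounded in what was retrieved. The corpus, the word-overlap scorer, and the prompt template are all toy assumptions; a real system would use embeddings and a vector store.

```python
# Minimal RAG sketch: retrieve the most relevant chunks of proprietary
# content, then ground the model's prompt in them.
# Corpus, scorer, and prompt template are illustrative assumptions.

def score(query: str, chunk: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k chunks by overlap score."""
    return sorted(corpus, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Constrain the model to retrieved context instead of its own priors."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Vector 7700 troubleshooting: reset the controller before calibration.",
    "Warranty claims must be filed within 90 days of purchase.",
    "Vector 7700 installation requires a grounded 240V circuit.",
]
print(build_prompt("How do I troubleshoot the Vector 7700?", corpus))
```

The point of the sketch is the failure mode the article describes: if the corpus entries are messy or undifferentiated, no scoring function can surface the right one.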

As one client told us after a failed project: “We used an AI vendor with all the buzzwords—machine learning, retrieval, GenAI—and they got no usable results.” Why? They didn’t define use cases. They had no content model. No metrics. No information architecture. They didn’t even know what “good” looked like.

The Foundation: Structured Content and Information Architecture

Generative AI doesn't fix your past data sins. It just exposes them.

A successful AI system depends on structured, curated, and well-tagged content. We saw this clearly in a recent project with field technicians accessing thousands of pages of manuals. The documents, many over 300 pages, were inconsistently formatted, filled with unstructured tables and diagrams, and lacked metadata.

Even when users asked good questions, the system couldn’t reliably retrieve relevant answers because the content wasn’t designed for retrieval. Question answering depends on componentized content, structured so each part can answer a specific query. Without this, even the most advanced AI will flounder.
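One way to picture componentized content: instead of one 300-page PDF, each answerable unit becomes a small record with metadata that retrieval can act on. The field names, tag values, and example text below are hypothetical.

```python
# Componentized content sketch: each chunk answers one kind of question
# and carries metadata that retrieval can filter on.
# Field names, tags, and body text are illustrative assumptions.

components = [
    {
        "id": "vector7700-troubleshoot-01",
        "product": "Vector 7700",
        "task": "troubleshooting",
        "audience": "field technician",
        "body": "If the unit fails self-test, power-cycle and re-run calibration.",
    },
    {
        "id": "vector7700-install-01",
        "product": "Vector 7700",
        "task": "installation",
        "audience": "field technician",
        "body": "Mount the unit on a grounded rail before wiring.",
    },
]

def lookup(product: str, task: str) -> list[str]:
    """Answer a specific question type by filtering on metadata, not raw text."""
    return [c["body"] for c in components
            if c["product"] == product and c["task"] == task]

print(lookup("Vector 7700", "troubleshooting"))
```

A monolithic manual offers nothing for that filter to grip; componentization is what turns "search the PDF" into "answer the question."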

This is why we coined the term “information architecture-directed RAG.” Retrieval isn’t just about pointing AI at a knowledge base. It’s about designing that knowledge base to answer real, nuanced questions in specific business contexts.

Bad Questions, Bad Answers: Why Query Design Matters

Another lesson from the field: most people don’t know how to ask good questions.

Technicians searching for “Vector 7700” (the name of a model) might expect troubleshooting steps. But the query is ambiguous. It’s like walking into Home Depot and saying, “tools.” Without context, AI can’t disambiguate. That’s where faceted search, user cues, and metadata enrichment come in.

We also monitor outcomes in three categories:

  • Good question, good answer

  • Good question, bad answer

  • Bad question, bad answer

Sometimes you get lucky: bad question, good answer. But that’s rare. You need feedback loops, both from users and the system, to improve performance. And you must design systems to handle poor queries gracefully, using AI to infer intent and provide suggestions.
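A feedback loop over those outcome categories can start as simply as logging each interaction and tallying where failures concentrate. In practice the good/bad judgments would come from user ratings or evaluators; here they are boolean flags, an illustrative assumption.

```python
# Sketch of outcome tracking across the question/answer quality grid.
# Judgments are hard-coded booleans here; real systems would collect
# user feedback or evaluator scores (an illustrative assumption).

from collections import Counter

def categorize(good_question: bool, good_answer: bool) -> str:
    q = "good question" if good_question else "bad question"
    a = "good answer" if good_answer else "bad answer"
    return f"{q}, {a}"

# Hypothetical interaction log: (question quality, answer quality)
log = [
    (True, True), (True, False), (False, False), (True, True), (False, True),
]
tally = Counter(categorize(q, a) for q, a in log)

# Concentrated "good question, bad answer" failures point at retrieval or
# content gaps; high "bad question" volume points at query-design helpers.
for outcome, count in tally.most_common():
    print(outcome, count)
```

Even this crude tally tells you where to invest: in the content (answers failing) or in the query experience (questions failing).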

Context is King… and It's Multi-Dimensional

One of the biggest technical challenges is managing context. Generative AI systems operate in a vector space, where every document, label, and query is transformed into a multi-dimensional embedding. But when we enrich those embeddings with metadata (user role, task, location), we rapidly expand the context the system has to manage.

Think of a GPS. It operates in three dimensions. But if you add “restaurant,” “Italian,” “three stars,” and “under $30,” you’re adding dimensions—intentional dimensionality. That’s what metadata does for your knowledge corpus. It makes AI more precise by narrowing the space to the most relevant vectors.
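The narrowing effect of metadata can be sketched as a filter applied before similarity search: restrict candidates by facet, then rank what remains by cosine similarity. The two-dimensional vectors and facet values below are hand-made toys, not real embeddings.

```python
# Metadata-filtered vector search sketch: facets prune the candidate set
# before cosine similarity ranks what remains.
# Vectors and facet values are toy assumptions, not real embeddings.

import math

def cosine(a, b):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

docs = [
    {"vec": (0.9, 0.1), "role": "technician", "text": "Reset procedure"},
    {"vec": (0.8, 0.3), "role": "sales",      "text": "Pricing sheet"},
    {"vec": (0.2, 0.9), "role": "technician", "text": "Warranty policy"},
]

def search(query_vec, role):
    """Filter on the metadata facet first, then rank by similarity."""
    candidates = [d for d in docs if d["role"] == role]
    return max(candidates, key=lambda d: cosine(query_vec, d["vec"]))["text"]

print(search((1.0, 0.0), role="technician"))
```

The facet filter is doing the "intentional dimensionality" work: the sales document never competes, no matter how similar its vector is.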

But many systems, like Microsoft Copilot, aren’t designed to handle enriched vector embeddings. They lack the ability to fully leverage the correct knowledge architecture. This technical limitation is one reason AI projects that "look good on paper" still fail in practice.

From Proof of Concept to Proof of Value

Too many organizations are stuck in the proof-of-concept (PoC) phase. These projects are often unmeasurable, unconstrained, and unscalable. Instead, we advocate for proof of value (PoV). That means:

  • Using real data, not lovingly curated samples.

  • Defining measurable outcomes.

  • Building with deployment in mind.

  • Starting with business strategy, not technology.

When you start with PoV, you're not just testing whether something "works"; you're testing whether it delivers value at scale. That requires upstream thinking: What are the enterprise outcomes we care about? What processes support them? What information do those processes depend on?

From there, you can identify information leverage points—areas where AI can have the biggest downstream impact. Maybe it’s proposal generation, where a bottleneck costs you millions. Maybe it’s portfolio analysis in R&D. Whatever it is, start with the business, not the bot.

How to Spot Real AI Partners (and Avoid Pretenders)

Many vendors pitch themselves as AI-first but fail to deliver. When evaluating partners, listen for the right language:

  • Information architecture

  • Knowledge models

  • Use cases and user journeys

  • Content models and metadata

  • Governance and workflow

  • Customer and employee data models

If they can’t talk about content curation, tagging, and retrieval architecture, they’re not ready to help you scale.

Making AI Work: People, Process, and Pragmatism

AI is not "auto-magic." It's software: powerful software, yes, but still bound by the same rules of business logic and content quality.

You can’t automate what you don’t understand. That’s why we start with process analysis, user needs, and actual business constraints. We involve subject matter experts but remove unnecessary burden by using AI to suggest content models, derive tags, and infer use cases.

With the right architecture, what used to take a million dollars and 12 months can now be done in 3 months at a fraction of the cost. But you still need governance, feedback loops, and metrics. Otherwise, you’re just chasing the latest shiny object.

The Way Forward

AI transformation isn’t about chasing the latest model—Gemini, Claude, Copilot. Those are implementation details. The real question is: What problem are you solving? What data supports that? What processes are you enabling?

Once you’ve answered those, you can make informed technology choices. But until then, AI is just another tool looking for a job.

We’ve seen this movie before. During the dot-com boom, everyone needed a .com. Today, everyone needs AI. But what enterprises really need is value, and that only comes when AI is grounded in the fundamentals: good content, good structure, good use cases.

The hype will fade. The hard work will remain. But for those who invest wisely, the returns can be extraordinary.


Seth Earley, Founder, CEO, Earley Information Science

Seth Earley
Founder, CEO, Earley Information Science
seth@earley.com

Seth is an expert with 20+ years’ experience in knowledge strategy, data and information architecture, search-based applications, and information findability solutions. Seth has worked with a diverse roster of Fortune 1000 companies, helping them to achieve higher levels of operating performance by making information more findable, usable, and valuable through integrated enterprise architectures supporting analytics, e-commerce, and customer experience applications. His work centers on information architecture (IA), and he coined the industry catchphrase “There’s No AI without IA.” Seth Earley is a sought-after speaker, writer, and influencer. His writing has appeared in IT Professional Magazine from the IEEE, where, as former editor, he wrote a regular column on data analytics and information access issues and trends. He has also contributed to the Harvard Business Review, CMSWire, and the Journal of Applied Marketing Analytics, and he co-authored “Practical Knowledge Management” from IBM Press. Seth is author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster and More Profitable. Seth was named to Thinkers360’s top 50 global thought leaders and influencers on artificial intelligence for 2022 and as a top thought leader for 2023. His current research is in knowledge management and large language models (LLMs).