Four Lessons on AI Agents

Last week, several articles helped bring the agentic AI picture into focus. Real-world experience from J&J, LangChain, and BCG gives us early insight into what makes AI agents succeed.

AI Agent Lesson #1: AI Value is Not Evenly Distributed

After about a year of learning from AI projects, J&J is pivoting its AI strategy from a broad approach to a narrower focus. The company had been casting a wide net, with over 900 AI projects underway. Their CIO said the refocusing “allocates resources only to the highest-value generative AI use cases, while it cuts projects that are redundant or simply not working, or where a technology other than GenAI works better.” The findings? The Pareto Principle (which says that roughly 80% of outcomes come from 20% of the contributors) applies…and in this case it was even more pronounced, with 80% of the value coming from just 10%-15% of the use cases.

AI Agent Lesson #2: Successful Agents need Good Context (RAG)

Confused about agents? So is everyone else. But clarity is starting to emerge from those building agents and agentic systems. If you’re interested in the current thinking, this excellent post by LangChain compares OpenAI’s guide and Anthropic’s blog post on the subject and integrates learnings from LangChain (a platform for building with LLMs, including agents). The main takeaways:

  • The most difficult challenge is giving agents the proper context at each step of the process. Most often, this means getting RAG right, but it can also mean handoffs and instructions (prompts or tool descriptions, for example).
  • The article favors the term “agentic systems,” which encapsulates LLMs being used across a spectrum, from workflows (where LLMs follow predefined steps) to agents (which figure out what to do on their own).
  • Not everything needs an agent. If you can get the job done with something simpler, don’t use an agent.
  • It’s still early, and a lot is changing.
  • Anthropic’s Model Context Protocol (MCP) is emerging as the standard for how LLMs access information and tools. MCP allows LLMs to move beyond the “chat” paradigm and become AI applications that can access information and use tools (for instance, to perform a search or control a web browser).
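To make the MCP bullet concrete, here is a hypothetical, heavily simplified sketch of the idea the protocol standardizes: tools are advertised to the model as structured metadata, the model emits a structured tool-call request, and the host executes it and returns the result. The names (`TOOLS`, `list_tools`, `call_tool`) are illustrative only; real MCP is a JSON-RPC protocol between a client and tool servers.

```python
import json

# Hypothetical registry of tools, each with a description the model sees
# and a handler the host runs. Real MCP servers expose these over JSON-RPC.
TOOLS = {
    "search": {
        "description": "Search the web for a query.",
        "handler": lambda args: f"results for {args['query']!r}",
    },
}

def list_tools():
    """What a client advertises to the LLM: tool names plus descriptions."""
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

def call_tool(request_json: str) -> str:
    """Dispatch a tool-call request of the kind a model would emit."""
    req = json.loads(request_json)
    tool = TOOLS[req["name"]]
    return tool["handler"](req["arguments"])

print(call_tool('{"name": "search", "arguments": {"query": "MCP spec"}}'))
```

The key design point is the separation: the model only ever sees descriptions and emits structured requests; the host owns execution, which is what lets one protocol serve many models and many tools.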

AI Agent Lesson #3: Not Everything Needs an Agent

A presentation on Agentic Evolution and MCP from Boston Consulting Group laid out similar thoughts, with an excellent visual of the progression of generative AI.

From it we see that the industry is beginning to move from pre-defined workflows to self-directed agents, that not everything needs an agent, and that we’re still very early in AI agent deployments.

I really liked their framing of the spectrum from workflows (where LLMs follow predefined steps) to agents (which figure out what to do on their own). Although it misses some nuance, it lays out good starting criteria for when each is the better fit:

  • Workflows are good for process consistency, whereas Agents are good for process flexibility.
  • Workflows are good for situations that rely heavily on domain intelligence, whereas Agents are good for situations that benefit from general intelligence.
  • Workflows are more predictable, whereas Agents are less predictable. Although it’s not the same thing, predictability is closely related to reliability.
  • Workflows are less adaptable to new situations, whereas Agents are more adaptable to new inputs.
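The distinction in the criteria above can be sketched in a few lines of code. This is a toy illustration, not any vendor's API: `fake_llm` stands in for a real model call, and both functions solve the same two-step task (clean a document, then summarize it). The workflow hard-codes the step order; the agent lets the model choose the next action in a loop.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a model call: picks the next action from the prompt."""
    if prompt.startswith("summary"):
        return "done"
    if "unprocessed" in prompt:
        return "clean"
    if "cleaned" in prompt:
        return "summarize"
    return "done"

def workflow(doc: str) -> str:
    """Workflow: the steps are fixed in code; the LLM fills in each step."""
    cleaned = f"cleaned({doc})"      # step 1, always runs
    return f"summary({cleaned})"     # step 2, always runs

def agent(doc: str) -> str:
    """Agent: the LLM decides which action to take next, in a loop."""
    state = f"unprocessed({doc})"
    for _ in range(5):               # cap iterations for predictability
        action = fake_llm(state)
        if action == "clean":
            state = state.replace("unprocessed", "cleaned")
        elif action == "summarize":
            state = f"summary({state})"
        else:
            break
    return state
```

On this toy task both produce the same result, which is the point: the workflow is more predictable because its control flow is written by a developer, while the agent trades predictability for the ability to handle inputs the developer never anticipated.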

AI Agent Lesson #4: Don’t Worry About Compute

From the better, faster, cheaper department: news out of China suggests Huawei has created a new chip that may threaten NVIDIA’s dominance in the GPU market. Huawei claims the chip matches or exceeds the performance of NVIDIA’s flagship H100, using all-Chinese technology. Whether or not these claims prove true, we can expect faster compute at lower prices as the hardware wars continue.


My take on why it matters, particularly for generative AI in the workplace


These developments show that, regardless of your definition of AI Agent, it’s clear they’re going to be a big part of future workplaces. Some conclusions:

Agents are bringing real value to business, and they’re starting to move from theory to practice. After a year of experimentation, J&J didn’t cancel their AI efforts; they refocused them on the ones that will make the biggest difference to the business.

“Why does the LLM mess up? Two reasons: (a) the model is not good enough, (b) the wrong (or incomplete) context is being passed to the model. From our experience, it is very frequently the second use case.” – LangChain

As shown by this quote, for working agentic systems, context is critical. In fact, I think we’re almost to the point where we can say context is everything. Sure, you need the right model too…but the models are quite capable now. Good instructions and good knowledge are required for successful agentic systems, and in most cases knowledge will come from some kind of RAG. Getting that RAG right will be the key to successful agents.
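A minimal sketch of the retrieval step being discussed, under simplifying assumptions: stored chunks are scored by crude keyword overlap with the question, and the best ones are packed into the prompt as context. The document set and function names are invented for illustration; production RAG systems typically use embedding similarity rather than word overlap.

```python
# Toy document store; in practice these would be chunks of company documents.
DOCS = [
    "Agents choose their own next action at each step.",
    "Workflows follow predefined steps written by a developer.",
    "MCP standardizes how models access tools and data.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the question; return the top k."""
    q = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble the context block the model actually sees."""
    context = "\n".join(retrieve(question, DOCS))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What steps do workflows follow?"))
```

Whatever the scoring method, the failure mode the LangChain quote describes lives in `retrieve`: if the wrong chunks win, the model answers from the wrong (or incomplete) context no matter how capable it is.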

“We learned [that abstractions make it hard to control the LLM] the hard way…two years ago…an agent class that took in a model, prompt, and tools…didn’t provide enough control back then, and it doesn’t now.” – LangChain

Agents won’t work for everything; specifying workflows will often be more effective and in fact necessary for reliable AI workers. Not everything will need an agent. We saw this with J&J’s efforts, as well as guidance from LangChain and BCG.

We often hear that models are getting better and cheaper; the same is true of the hardware they run on. My take? LLMs are a newer technology, so we are discovering new techniques quickly; GPUs have been around a while, so much of the low-hanging fruit has been picked. But there are still huge gains to be made in both.

Companies on the leading edge of deploying generative AI (outside of the early wins like coding and customer service) are starting to see real value, and as a market we are starting to be able to see more clearly what the “agentic AI” future could bring – and as importantly, what it will take to get there.

Copyright (c) 2025 | All Rights Reserved.

