GenAI’s Impact: more like the Internet or the Metaverse?

Google continues to march forward, Anthropic adds safety features, and an x.AI bug makes private conversations public. But after GPT-5 failed to bring huge gains (c’mon, is anyone REALLY surprised?), many are starting to ask whether the AI bubble is bursting. Even some folks at MIT say 95% of AI projects bring no value.

I don’t think so. Read on to see why.

Google Moves Forward

Three announcements from Google this week about Gemini.

Gemini Gets More Agentic

Google now lets Gemini make restaurant reservations for you, handling involved situations (number of people, days, times, locations, kind of restaurant, special requests) so that you just tell it what you want, and it’ll find options that meet your criteria. Select the option that you want and approve it, and it will make the phone call and confirm the reservation.

It looks quite useful, and they plan to expand it to other service appointments soon and to personalize it to you. It would be nice to let it schedule your next appointment, taking your preferences and work schedule into account, without you having to do any back-and-forth.

This is an interesting milestone in bringing agentic AI to the average consumer, an indication that agentic AI is marching forward for everyone just as companies are hoping to deploy agentic AI in the enterprise.

Gemini’s Energy Consumption

Google released information about Gemini’s energy use. Serving an answer to the median prompt to Gemini consumes 0.24 watt-hours, equivalent to watching TV for about nine seconds. This is significantly lower than previous estimates. Some things to consider:

  • Of course the numbers are lower than we thought a year ago, because efficiency keeps improving. Google says that over the past 12 months, Gemini’s energy consumption has dropped 33x (and its carbon footprint 44x).
  • But that’s just the energy for inference. It doesn’t include pre-training, and training runs for genAI models have grown massively in time, cost, and energy consumption. So it captures only part of the total energy use.
  • It doesn’t include image or video generation, which of course takes much more energy than text conversations.
  • It’s likely that Gemini is the most efficient of the popular models, in large part because it is also the fastest (faster models generally use less energy).
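
As a sanity check on that TV comparison, here’s the back-of-envelope arithmetic (a sketch; the ~100 W TV power draw is my assumption, not a figure from Google):

    # Sanity check of Google's "9 seconds of TV" comparison.
    # Assumption: a typical TV draws roughly 100 W (my number, not Google's).
    prompt_energy_wh = 0.24        # median Gemini prompt, per Google
    tv_power_w = 100               # assumed TV power draw

    tv_seconds = prompt_energy_wh / tv_power_w * 3600   # Wh -> watt-seconds
    print(f"{prompt_energy_wh} Wh ~= {tv_seconds:.1f} s of TV at {tv_power_w} W")
    # prints: 0.24 Wh ~= 8.6 s of TV at 100 W, consistent with Google's ~9 seconds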

Gemini for the Government

After OpenAI made ChatGPT available to US Government agencies for $1, and Anthropic followed suit with Claude, Google didn’t want to be left out! They decided to give Gemini to the feds for a mere $0.47. I’ll let you draw your own conclusions about market share, influence, and getting people or agencies hooked on an AI model today, with the expectation of bigger benefits down the road.

More Safety from Anthropic

Anthropic continues its efforts to emphasize safety. In an early step to curb potential problems for users, they gave Claude the ability to end conversations “in rare, extreme cases of persistently harmful or abusive user interactions.” The goal is to prevent the models from continuing or contributing to conversations that are headed to bad places, but not “in cases where users might be at imminent risk of harming themselves or others.” For now, this is available only on their Opus models, not yet on Sonnet.

Less Safety from x.AI

Have you shared any chats with Grok? Almost all of the models give you the ability to share a chat with someone else, so they can see your interactions and even continue the conversation on their own. It turns out that over 300,000 conversations with Grok that were shared…were shared a bit more widely than expected. They are publicly searchable by anyone! Some conversations included private and personal information, and some violated x.AI’s terms of use by discussing off-limits subjects.

I hope it wasn’t a conversation you shared. Even once x.AI removes them from the web, they will remain available online until the search engines drop them from their indexes.

A similar thing happened a little while back, when some of OpenAI’s shared conversations were also searchable. They claimed it was a “short-lived experiment” in which users didn’t realize they were sharing with the entire web, and removed the feature.
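
As an aside, the standard way for a site to keep pages like shared chats out of search results is to serve a noindex signal. Here’s a minimal sketch using Flask, with a hypothetical /share route (I’m not claiming this is how x.AI’s service is built):

    # Sketch: serve a shared-chat page with a noindex header so crawlers
    # never add it to their indexes. The /share route is hypothetical.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/share/<chat_id>")
    def share(chat_id):
        resp = make_response(f"shared chat {chat_id}")
        # X-Robots-Tag: noindex tells search engines not to index this page.
        resp.headers["X-Robots-Tag"] = "noindex"
        return resp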

Be careful out there, and share responsibly!

Is the Bubble Bursting?

Because GPT-5 didn’t live up to all the hype, some are concerned that Gary Marcus’ question from 2022 has been answered, and AI is hitting a wall. If we are in an AI bubble, is this enough to burst it? I think it depends on what you mean by a bubble. If you’re looking at a possible financial bubble, then maybe. Given the wild valuations of the Magnificent 7, the huge sums of money invested in GPUs, and the fact that the top 10 companies (by valuation) account for almost all of the gains in the stock market:

[Chart: the top 10 companies’ share of stock market gains. Source: x.com]

And as a result of their ballooning valuations, they represent a much larger portion of the S&P 500 than we’ve seen in a while:

[Chart: the largest companies’ share of the S&P 500. Source: Bloomberg]

Then yes, I think we are in a bubble. The question is how big it is and when it might pop, but I’m no expert in those kinds of predictions (if I were, I would be retired by now). If we’re talking about a generative AI bubble, though, I don’t think so. There is tremendous benefit to be gained from the generative AI we already have, and we’re still in the early days of figuring out how to deploy it properly and carefully. I agree with Andrej Karpathy:

When I see things like “oh, 2025 is the year of agents!!” I get very concerned and I kind of feel like, this is the decade of agents. And this is going to [take] quite some time. We need humans in the loop. We need to do this carefully.
– Andrej Karpathy

Another factor that’s got people talking about a bubble is a report from MIT that claims 95% of generative AI pilots fail, in that they don’t produce much value. The report is getting a lot of attention, and has some important takeaways:

  • 40% of companies report deploying generative AI
  • But only 5% of integrated AI pilots are driving value
  • Primarily because generative AI tools don’t retain feedback, adapt to context, or improve over time
  • But solutions purchased from vendors are twice as successful as internal builds

My take on why it matters, particularly for generative AI in the workplace:


I’m going to focus on this MIT report. There is a lot of good material in it, but I believe the conclusions should be taken with a grain of salt. The methodology isn’t fully disclosed, it has sample bias based on participation, and the sample size was fairly small. The report was generated from “52 structured interviews across enterprise stakeholders [at AI conferences], systematic analysis of 300+ public AI initiatives and announcements, and surveys with 153 leaders.”

Their points are valid: most generative AI in the enterprise so far doesn’t adapt fully to the unique and varying needs of the workplace. But I don’t believe that the best solution is adding memory and self-improvement. These technologies are still largely experimental in the consumer space, and even more so in the enterprise.

To me, this statement from the report is telling:

The same users who integrate [chatbots] into personal workflows describe them as unreliable when encountered within enterprise systems…Chatbots succeed because they’re easy to try and flexible, but fail in critical workflows due to lack of memory and customization.

Employees are using ChatGPT and Claude and Gemini…for basic, general tasks (summarize, compose an email, analyze a report). But for enterprise applications, these systems need context: knowledge from company systems. That’s where RAG (retrieval-augmented generation) comes in. RAG isn’t easy, but when done right, generative AI goes from a general-purpose tool with limited value to an informed, high-powered agent that can make a real impact.
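
To make that concrete, here’s a toy sketch of the RAG pattern: retrieve the most relevant internal documents for a question, then hand them to the model as grounding context. The bag-of-words “embedding” and the sample documents are stand-ins I made up; a real system would use an embedding model, a vector store, and an actual LLM call:

    # Toy RAG sketch: retrieve relevant company documents, then build a
    # grounded prompt. Swap the bag-of-words scoring for real embeddings.
    from collections import Counter
    import math

    def embed(text):
        return Counter(text.lower().split())   # toy "embedding"

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query, docs, k=2):
        q = embed(query)
        return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    docs = [
        "Expense reports over $500 require VP approval.",
        "The cafeteria is open from 8am to 3pm.",
        "Travel must be booked through the corporate portal.",
    ]
    question = "Who approves large expense reports?"
    context = "\n".join(retrieve(question, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # `prompt` now goes to whatever chat model you use; its answer is grounded
    # in company knowledge rather than the model's general training data.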

So if the GenAI divide is real, giving these systems enterprise knowledge is the bridge across the divide.

The key is to build systems that solve specific problems using the knowledge of the organization. Feedback and learning make them even better, but that’s a bonus, not the foundation.

So I’m not convinced this is evidence of a bubble. It’s evidence that we’re at the beginning. We’re just starting to learn how to properly deploy generative AI internally and extract real value. The hype says “it’s happening now,” but the reality is we’re at the start of a long journey to agentic AI.

Even if the specifics are up for debate, the report is directionally valuable. So I’ll close with what I think are the report’s most valuable conclusions. As is often the case, the reality does not align with the hype and many of the headlines:

Five Myths About GenAI in the Enterprise

1. AI Will Replace Most Jobs in the Next Few Years → Research found limited layoffs from GenAI, and only in industries that are already affected significantly by AI. There is no consensus among executives as to hiring levels over the next 3-5 years.

2. Generative AI is Transforming Business → Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated into workflows at scale, and 7 of 9 sectors show no real structural change.

3. Enterprises are slow to adopt new tech → Enterprises are extremely eager to adopt AI, and 90% have seriously explored buying an AI solution.

4. The biggest things holding back AI are model quality, legal, data, and risk concerns → What’s really holding it back is that most AI tools don’t learn and don’t integrate well into workflows.

5. The best enterprises are building their own tools → Internal builds fail twice as often.

Five Myths from MIT’s The GenAI Divide: State of AI in Business 2025
