Insights Into How We’re Using AI

ChatGPT and Claude usage statistics, AI gets even better at math competitions, and AI Summaries are reshaping the web…especially for news sites.

ChatGPT and Claude Usage Reports

OpenAI released a report outlining how ChatGPT is used, and Anthropic updated its Economic Index with a more detailed look at how Claude is being used. The reports offer quite an interesting look at usage trends, although they aren't an apples-to-apples comparison: OpenAI only reported usage for the consumer version of ChatGPT (personal accounts only), whereas Anthropic's report covers all use.

For the first time, Anthropic included stats on the use of Claude’s API (i.e., when a computer sends a prompt to Claude rather than a human typing one in), giving insight into how businesses are using generative AI. For the purposes of this blog, this is by far the most important data (a minimal sketch of an API call follows the list below). Key observations:

  • Coding is the most common use (accounting for about half of the traffic)
  • Greater use for automated tasks (rather than conversations)
  • Weak price sensitivity (companies are willing to pay for the high-cost capabilities)
  • Effectiveness is limited by weak context
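To make the API distinction concrete, here is a minimal sketch of programmatic use via Anthropic’s Python SDK. The model ID and prompt are placeholders; a real integration would add error handling, retries, and logging.

```python
# Minimal sketch: a program (not a person) sends a prompt to Claude through the API.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=512,
    messages=[
        {"role": "user",
         "content": "Classify this support ticket as bug, billing, or feature request: ..."},
    ],
)

print(response.content[0].text)  # the model's reply as plain text
```

Calls like this, issued automatically and at scale, are what the API data is measuring.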

THAT’s the most important takeaway: company use tends to involve much larger prompts, evidence that businesses are sending internal data to Claude (i.e., “providing context”) so that it can work with that information. Anthropic recognizes that the model’s ability to respond well is limited without the right internal information. More on this below, after Does It Matter?

The other key insight is that it’s still early; enterprise adoption varies significantly by sector and so far, not surprisingly, it’s concentrated in tech:

“early enterprise use of Claude is likewise unevenly distributed across the economy and primarily deployed for tasks typical of Information sector occupations.”

Here are the other most interesting takeaways:

  • Claude is primarily used for work (and is still dominated by coding, 36% of total use), whereas ChatGPT is used for more conversational tasks, such as practical advice (27%) and writing.
  • Notably, ChatGPT is used less for work than a year ago (from 47% to 27%); it’s unclear how much of this is because other usage has grown vs. how much work usage has dropped (or migrated to enterprise accounts, which aren’t included).
  • OpenAI offered a useful 3-bucket segmentation for patterns of ChatGPT use: Asking (49%), Doing (40%), and Expressing (11%). Expressing includes “reflection, exploration, and play.”
  • Anthropic looked at usage of Claude by geography and found that usage per capita is correlated with income. The U.S. and India have the most total usage, but adjusted for population, Israel is #1, followed by Singapore; the U.S. is the sixth-largest user. China, of course, is nowhere to be found.

I would not have guessed Estonia, Malta, or Cyprus to be among the largest users!

AI Wins Math Competition, redux

Remember the last math competition where Google and OpenAI both claimed a gold medal? Google competed directly, while OpenAI ran its attempt offline with separate judges. But for the 2025 ICPC World Finals, both companies entered their AI solvers (not the publicly available models) into the competition itself.

AI is Hurting the Economics of Journalism

For years, many internet sites have relied on Google’s search: you have a question about something, you Google it, you click on the link, and you go to the site. This was especially true for news sites. Now, Google often gives you an AI-generated summary, and you don’t need to go to the source. So you don’t click.

What’s the impact? Here’s a look at traffic to US news sites in July, showing the drop from last year to this year. AI Summaries are definitely having an impact.


My take on why it matters, particularly for generative AI in the workplace


RAG is Key to Business Value

Reporting API use is very valuable because it’s the first real insight we have into business integration of generative AI. Sure, the use of conversational chatbots is interesting, but the money is going to be in the specialized applications: AI assistants, AI agents, and agentic AI (which means “two or more AI agents working together”). All of those applications access generative AI through the API.

The usage information is interesting, but I think it’s too early to extrapolate and draw trendlines. It’s going to be very different a year from now.

What is clear is that it’s hard to give models the right contextual information. The models are very capable. But their value for business use depends on whether or not they can be given proper context: accurate, focused, relevant, and complete business knowledge.

“This implies that for some firms costly data modernization and organizational investments to elicit contextual information may be a bottleneck for AI adoption…deploying AI for complex tasks might be constrained more by access to information than on underlying model capabilities.” – Anthropic Economic Index report

That’s exactly what I’ve been saying for a while: good performance in the enterprise requires good retrieval; you have to find the right internal knowledge and feed it to the model. Naïve RAG isn’t good enough; a sophisticated RAG system and pipeline is essential to capitalize on your in-house knowledge. The importance of this will only grow as enterprises try to accomplish more sophisticated things with generative AI.
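To ground the point, here is a minimal sketch of the naïve RAG baseline I’m contrasting against: embed the internal documents, retrieve the chunks closest to the query, and paste them into the prompt. The toy bag-of-words “embedding” is a stand-in for a real embedding model and vector store; a production pipeline adds chunking, reranking, metadata filtering, and evaluation.

```python
# Naïve RAG sketch: retrieve the most similar internal documents for a query,
# then build a prompt that includes them as context.
# The toy embedding below is a placeholder for a learned embedding model + vector database.
from collections import Counter
import math

DOCS = [
    "Refund policy: enterprise customers may cancel within 30 days.",
    "The Q3 roadmap prioritizes the reporting dashboard and SSO support.",
    "On-call rotation: platform team covers weekends in October.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is our refund policy for enterprise customers?"))
# The resulting prompt would then be sent to the model via an API call like the one shown earlier.
```

The hard work, and the business value, lives upstream of the API call: finding, filtering, and maintaining the right internal knowledge to put in that context.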

AI and Math Competitions

The success of OpenAI and Google’s models in the math competition is interesting, and appears to mark another point in history where, for a certain task, the best computers are essentially as good as or better than the best humans. What makes this different from Deep Blue beating Garry Kasparov or Watson beating Ken Jennings is that this time:

  • it’s not a narrow game with well-defined rules (i.e., chess or Jeopardy)
  • it’s not a machine that was built specifically to do that one thing.

That’s a big deal. This is not victory in a specific domain. This is victory for a wide range of situations and subjects (yeah, it’s math…but math can be used for many more things than moves on a chess board).

It’s not likely to affect you directly. These math problems are well beyond the reach of 99.9% of the population.

It’s not clear that this raises the floor. Generative AI is still subject to hallucinations and fails to answer some of the simplest requests.

But this shows that the research-grade models continue to improve in their capabilities, even without the classic approach of scaling up training and compute. This means that the public models will continue to improve as well. We are nowhere near the finish line.


