Those of you who follow this blog know that I think Anthropic is the only company taking the right approach to AI, with a focus on safety. They are also generous about publishing research that most other companies don't share. For instance, they have given early insight into how LLMs might “think” and have discussed how models develop unwanted behaviors and what might be done about it (here are some examples).
Last week they published an excellent report about how LLMs are being used by employees at their own company. That’s my sole topic today.
What’s the big deal?
This report is prescient. And the impact is huge.
12 months ago, [employees] used Claude in 28% of their daily work and got a +20% productivity boost…
now, they use Claude in 59% of their work and achieve +50% productivity gains.
I could be wrong. This report mostly looks at how AI has helped coding. And Anthropic (true to form) was careful to point out that their study may not apply everywhere, because their employees are on the cutting edge of AI, the survey had selection bias, and employees at an AI company may be motivated to overstate AI's capabilities.
But I don’t think these affect transferability much, if at all. This report is a look into the future of work at most companies. For some, that future is already here (or can be, if they deploy AI as extensively as Anthropic). For most, it might be a year or two away.
This is the best picture I’ve seen about how AI can be used right now.

My take on why it matters, particularly for generative AI in the workplace
This is how AI adoption will happen…everywhere.
You should read the full report. It is a preview of how your company will adopt AI.
We know that AI is going to bring tremendous value, but that it struggles at basic tasks. We know it can answer questions that only a PhD can understand, yet we can’t trust it because it hallucinates. So how do we use a tool that is so powerful and simultaneously so imperfect?
Anthropic has figured out how.
Claude is a constant collaborator but using it generally involves active supervision and validation, especially in high-stakes work—versus handing off tasks requiring no verification at all.
– How AI Is Transforming Work at Anthropic
The answer is they use it for tasks that are:
- Outside the user’s expertise and low in complexity
- Easily verifiable
- Well-defined or self-contained
- Not critical (i.e., a mistake isn’t costly)
- Repetitive or boring
- Faster to prompt than execute
- Basic questions and learning
Find Your Fit
There’s your 7-item checklist. Find the work in your organization that fits the above categories, give it to AI, and your employees will instantly be contributing more. They just might be happier too. Even though Anthropic is focused on coding, that’s just the first and easiest use case. The principles apply to any work. And the models are constantly improving (this report is based on Claude 4…and Claude 4.5 is already much more capable!). Let’s look at the seven items closely:
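To make the triage concrete, here is a toy sketch of the checklist as a scoring helper. The trait names and the "three or more matches" threshold are my own illustrative assumptions, not anything from Anthropic's report:

```python
# Toy triage helper for the seven-item checklist above.
# The trait names and the threshold of 3 are illustrative assumptions,
# not taken from Anthropic's report.

CHECKLIST = [
    "outside_expertise_low_complexity",
    "easily_verifiable",
    "well_defined_or_self_contained",
    "not_critical",
    "repetitive_or_boring",
    "faster_to_prompt_than_execute",
    "basic_question_or_learning",
]

def good_ai_candidate(task_traits: set, threshold: int = 3) -> bool:
    """Return True if a task matches enough checklist items to hand to AI."""
    matches = sum(1 for item in CHECKLIST if item in task_traits)
    return matches >= threshold

# Example: summarizing meeting notes is verifiable, low-stakes, and boring.
traits = {"easily_verifiable", "not_critical", "repetitive_or_boring"}
print(good_ai_candidate(traits))  # True
```

The point isn't the scoring itself but the habit: evaluate each task against the list before delegating it, rather than reaching for AI (or avoiding it) by default.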
1. Outside the user’s expertise and low in complexity
AI can help with a lot of stuff you don’t know. It gives you the ability to tackle problems that are outside of your expertise, because as long as they are basic tasks, it can give you the knowledge you need and guide you through the key steps.
I can very capably work on [things] where previously I would’ve been scared to touch stuff I’m less of an expert on.
2. Easily verifiable
If it’s easy to verify, then let AI do it. It will probably do it faster, and you can easily check whether it got it right. The workload and the difficulty are almost irrelevant if the result can be checked with little effort.
It’s absolutely amazing for everything where validation effort isn’t large in comparison to creation effort.
3. Well-defined or self-contained
AI is much more reliable on narrow, focused, and constrained problems. The more general you are, the more likely it will be wrong, either because it hallucinates, it gets off topic (maybe you weren’t clear enough in your prompt), or because it simply doesn’t appreciate the nuance of your specific request. The smaller the scope, the higher the accuracy.
4. Not critical
If the task is such that being close is good enough, let AI do it. A great example is summarization – information is always lost in a summary, so if the summary is 90% the same as what a very good human summarizer would create, that’s good enough. Another perfect example is prototyping. Prototypes don’t have to be perfect, but they take time to create. Let AI do the creating.
People consistently said they didn’t use Claude for tasks involving high-level or strategic thinking, or for design decisions that require organizational context or “taste.”
5. Repetitive or boring
The more frequently a task has to be performed, the more it’s worth investing in automating it. The return is great, even if you have to iterate a few times to get it working. The same goes for things that aren’t fun – if boring tasks can be delegated to AI, that gives you more time to do the interesting stuff.
The more excited I am to do the task, the more likely I am to not use Claude.
6. Faster to prompt than execute
Some tasks will always be faster to do yourself. But if the manual work required to do it takes more time than creating a prompt and a few turns of conversation, give it to the AI. Let it do the heavy lifting, and you can guide it. If it gets it 90% of the way there, you can polish the last 10%.
[For] a task that I anticipate will take me less than 10 minutes… I’m probably not going to bother using Claude.
7. Basic questions and learning
If you have a general question, AI can answer it. If you have a specific question about anything in the public domain, AI can probably answer it. It’s patient and non-judgmental, no matter how many questions you ask.
I ask way more questions [now] in general, but like 80-90% of them go to Claude.
The Changing Workplace
These dynamics are changing the workplace, and of course no change is without side effects.
Questions that used to go to colleagues now go to Claude. That accelerates learning and reduces dependencies on the team, but with fewer social interactions, some employees yearn for more human contact, and some complain of fewer opportunities for mentorship.
Employees are able to broaden their skillsets and capabilities by leveraging AI. However, some are concerned that using AI is causing other skills, particularly deeper and focused skills, to atrophy.
…effectively using Claude requires supervision, and supervising Claude requires the very coding skills that may atrophy from AI overuse.
Your Homework
Read the report. Consider how you should be using AI at work. Do the same for your colleagues and your team. Then, figure out how to apply the seven principles more broadly. This is how AI is going to change work, and you should get started now.