Early 2026 has been full of reflection on the past 3 years of generative AI and thoughts about what’s next. I’ve seen a number of trends (or supposed trends) discussed at length. I’ll give my take on those trends after a bit of reflection and some of the news.
Reflection: How far we came in 2025
Stanford’s annual AI index report is a wealth of information about what’s going on in AI. The full report is huge, but they did a great job of extracting the key highlights, with 12 key points in the “Top takeaways” section at the link. If you want to go one level down, take a few minutes to scan through their summary of accomplishments in R&D.
New Models
We continue to see new models; after a flurry in late 2025, January has been relatively quiet from the big providers, but two big open-weights models were recently released – from China, of course.
Moonshot AI released Kimi K2.5, and it seems to have it all: multiple versions (instant, thinking, agent, and a new one – agent swarm, in beta), exceptional performance, open weights, and a lower cost than the proprietary models.
Alibaba released Qwen3-Max-Thinking which challenges the top models on many of the standard benchmarks, a powerful model comparable to GPT-5.2, Opus 4.5, and Gemini 3 Pro.
The Trends
I’ve seen at least four trends with a lot of coverage:
- AI is Taking Jobs
- AI Agents at work
- AI will kill software
- Context engineering > prompt engineering
I’m going to give you my take on these trends and what I think is happening, without a deep dive (as I’ve covered the foundation of most of these in previous posts…or if needed I’ll go deeper in future posts).
AI is Taking Jobs
There is a lot of talk that AI is already taking jobs in the workforce. With Amazon laying off another 16,000 employees this week (some connecting it to Andy Jassy’s statement eight months ago that they anticipated replacing corporate workers with generative AI), it’s back in the headlines. On January 27, Pinterest said they’re laying off 15% (around 750 employees), and two days later Dow said 4,500 employees will be let go – in large part due to generative AI.
Is this due to generative AI, or are companies just using AI as an excuse to cut labor costs? It’s probably a mix of both. Or maybe companies are just being cautious: in the event that AI can take jobs, they’d rather run lean for a little while than hire a bunch of people only to lay them off later.
Measuring if AI is taking jobs is very hard (the classic causation vs. correlation dilemma). There is certainly evidence for it; unemployment is historically fairly evenly balanced across age groups, but we’re seeing a significantly higher unemployment rate for recent college graduates. That seems to align with the “AI is taking entry-level jobs” theory, particularly in areas like consulting.
It also could be we’re starting to see that the first generation of kids who grew up on social media and video games are less employable – or even less interested in being employed. I’m sure there are cases of this, but I’m not aware of any evidence that it’s a trend or widespread enough that this is a driver.
AI Agents at work
I’ve covered this ad nauseam before, so I’m just going to summarize what I see. 2025 was supposed to be the year of the agent (in other words, moving beyond conversations to actions: letting the AI decide what to do and then do it).
It wasn’t.
Instead, 2025 was the year of agent-washing, whereby anything that used an LLM was called an agent. So, claims of agents deployed at companies are often greatly exaggerated.
2026 is much more likely to be the year of true agents, especially because LLMs (ChatGPT, Claude, Gemini…) are themselves becoming agentic. If you enable tools and give them access to the software you use, they can make decisions and take action on their own. Most of them can control a browser. And Claude’s Cowork is an impressive early version of a personal agent that works with files on your computer.
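To make “agentic” concrete, here’s a minimal sketch (in Python) of the loop these products are built around: at each step the model decides whether to call a tool or finish, and the surrounding code executes whatever it picks. The names here (call_llm, search_files, send_email) are hypothetical placeholders, not any vendor’s actual API.

```python
import json

# Hypothetical tools: plain Python functions the agent is allowed to invoke.
def search_files(query: str) -> str:
    return f"3 files mention '{query}'"   # stand-in for a real local file search

def send_email(to: str, body: str) -> str:
    return f"email queued to {to}"        # stand-in for a real mail API

TOOLS = {"search_files": search_files, "send_email": send_email}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a real model call (OpenAI, Anthropic, Gemini, ...).
    Assumed to return {"action": name, "args": {...}} or {"final": text}."""
    raise NotImplementedError("wire this to your model provider")

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(messages)        # the model decides what to do next
        if "final" in decision:              # it says it's done
            return decision["final"]
        result = TOOLS[decision["action"]](**decision["args"])  # take the action
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "stopped: step limit reached"
```

The point of the loop is that the model, not the programmer, chooses the sequence of actions; the code just enforces a step limit and a fixed set of allowed tools.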
We’re also seeing a proliferation of general-purpose agents in other forms. Among the most popular is Manus, created by a Singapore startup and just purchased by Meta, which attempts whatever you ask it to do. And the recently hyped Moltbot (which has rebranded as OpenClaw) runs on your computer but connects to a wide array of applications to do even more than Cowork (and with much greater security risks).
But in most cases, general-purpose agents like these are not effective for tasks specific to the workplace. Instead, custom agents need to be built and given the right enterprise knowledge and access to the right tools. They also have to be secure and properly governed so they don’t go off the rails. And that story is just beginning.
AI will kill software
So many recent articles are talking about how AI has gotten so good at programming that it’s endangering the software market. It’s even been suggested that this fear is impacting the stock market, with traditional software companies taking a hit in their share price. Some have forecast the end of businesses buying software; instead of purchasing an application from a vendor, companies may just create their own.
It’s an interesting idea, and it has some merit. We know the AI isn’t good enough for this yet, but it is growing ever more capable. Maybe soon it will be possible for non-programmers to create sophisticated applications, maybe even suitable for corporate use.
It’s clear that AI has gotten good enough that everyone is now a programmer. Or, more specifically, anyone can tell an AI to create a program to do something, and the AI will create the code to make it work. And the AI will continue to get better, making it possible for individuals to create more sophisticated applications faster. This is making personal software very cheap, and will result in massive software proliferation. I might have AI write a program to automatically reply to certain emails, or organize and stylize my photos in a certain way, or help my kid with their homework. These are all applications that would never exist without the AI, either because I didn’t have the skills or the time to create them.
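To show how small this kind of personal software can be, here’s the sort of throwaway script an AI could plausibly generate from the one-sentence request “auto-draft replies to certain emails.” The rules and names are made up for illustration; a real version would be wired to an actual mail client.

```python
# A hypothetical AI-generated personal script: auto-draft replies to certain emails.
def draft_reply(subject: str, sender: str) -> str | None:
    """Return a canned reply for mail that matches a simple rule, else None."""
    s = subject.lower()
    if s.startswith("re: invoice"):
        return f"Hi {sender}, got it. I'll review the invoice this week."
    if "meeting request" in s:
        return f"Hi {sender}, thanks. I'll check my calendar and confirm today."
    return None  # anything else is left for a human

if __name__ == "__main__":
    print(draft_reply("Re: invoice #1042", "Dana"))
```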
So rather than killing software, AI will likely make it proliferate! At least in the consumer realm. For business? See my thoughts in the “why it matters” section below.
Context engineering > prompt engineering
The final trend I’m noticing is a lot of conversation about the importance of “context engineering.” If you haven’t heard that term before, it’s beginning to replace “prompt engineering” as the most important skill for getting an AI to do what it’s supposed to do.
Simply put, prompt engineering is about writing a good prompt, that is, giving the LLM detailed, specific, and comprehensive instructions to get the output you want. Prompt engineering is sufficient for a conversation with an LLM about general knowledge.
Context engineering is broader. It includes the prompt and all of the other context a model needs, for example: knowledge, personalization, history, and memory. Knowledge from specialized sources (retrieved via RAG from the internet or a company repository). Personalization about the person making the request and what they prefer. History of the prior discussion or interactions with the AI. Memory collected over time about the topic and the nature of such questions.
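To make that list concrete, here’s a minimal sketch of what “assembling the context” can look like in code. The names (Context, assemble) are hypothetical; real agent frameworks do the same thing with more machinery, but the idea is simply that the prompt becomes one ingredient among several.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """The pieces named above, gathered into one illustrative container."""
    prompt: str                                          # the instruction itself
    knowledge: list[str] = field(default_factory=list)   # RAG results
    personalization: str = ""                            # who is asking, what they prefer
    history: list[str] = field(default_factory=list)     # prior turns in this conversation
    memory: list[str] = field(default_factory=list)      # long-lived notes about the topic

def assemble(ctx: Context) -> str:
    """Flatten everything into the text the model actually sees."""
    sections = {
        "Instructions": ctx.prompt,
        "Retrieved knowledge": "\n".join(ctx.knowledge),
        "About the user": ctx.personalization,
        "Conversation so far": "\n".join(ctx.history),
        "Memory": "\n".join(ctx.memory),
    }
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items() if body.strip())

print(assemble(Context(
    prompt="Summarize this quarter's support tickets.",
    knowledge=["Ticket volume up 12% vs. last quarter (internal dashboard)."],
    personalization="Support manager; prefers bullet points.",
)))
```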
So the transition from prompt engineering to context engineering is the natural evolution from thinking about one conversation with an LLM on a general topic to managing many interactions with AI agents.

My take on why it matters, particularly for generative AI in the workplace
My take on the four trends:
AI is Taking Jobs
I’m no employment analyst, but I think all these factors are at play, plus one more: it’s cool to claim that you’re good at AI. So when you see layoffs, it’s a combination of:
- AI is replacing a limited number of jobs (consulting firms need fewer entry-level consultants)
- AI is replacing certain tasks, making people more productive, which means a large organization needs fewer people to do the same work.
- Anticipation that these impacts from AI may accelerate, leading to a conservative approach to staffing: companies are choosing to run lean, expecting larger benefits from AI in the near future.
- Normal actions driven by the business cycle and market risks/indicators, with generative AI used as the excuse. After all, blaming it on AI says “I’m on the cutting edge,” while firing people to pocket larger profits says “I’m greedy.” Which option do you think the PR department wants to choose?
I’ll take a deeper dive in a future post.
AI Agents at work
Although 2025 was predicted to be the year of AI Agents, it was more the year of AI Agent-washing, where everything AI suddenly became an “agent.” True agents – those that make decisions, take action on their own, and use tools – are becoming more common but are still relatively rare in the enterprise. That’s going to change rapidly in 2026 as companies build their own custom agents that use their own internal knowledge via RAG. And then they’re going to pull back rapidly, as they discover that these agents need more scaffolding to stay on track, and more governance to control and monitor the ones that don’t.
AI will kill software
As I described above, as AI gets better at coding, we will see an explosion of software for individuals, and even in the consumer market, because almost anyone will be able to create a specialized app and publish it to an online app store.
Can we expect the same for business? I don’t think so, at least not for enterprise-wide, mission-critical applications. Enterprise apps that are used by hundreds or even thousands of employees need a lot of things besides “functionality” – things that are not so easy for AI to create.
Ensuring reliability (it works under all circumstances), robustness (no bugs or unhandled edge cases), and safety (protection from bad actors and cyberattacks) is very difficult, and I have a hard time envisioning AI-written code meeting these needs. Given the difficulty of understanding and testing complex code, I have an even harder time envisioning businesses accepting the risk that AI-written code might create for their business.
Sure, for simple apps handling low-risk data for low-risk needs, AI-written code makes sense. But as long as the base models rely on prediction and pattern matching I don’t see a huge impact on business software for complex or high-stakes needs. For that, we need a different approach (e.g., world models) or alternate technology (yet to be invented) or supplemental techniques (e.g., testing and validation capabilities).
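As a tiny illustration of the “testing and validation” idea, the sketch below treats AI-written code as untrusted: it is only accepted if it passes checks written independently of the model. The function generated_sort is a hypothetical stand-in for whatever the AI produced.

```python
def generated_sort(xs: list[int]) -> list[int]:
    return sorted(xs)  # imagine this body came back from an LLM

def validate(candidate) -> bool:
    """Compare the candidate against a trusted oracle on hand-picked cases."""
    cases = [[], [1], [3, 1, 2], [5, 5, -1], list(range(100, 0, -1))]
    return all(candidate(list(xs)) == sorted(xs) for xs in cases)

if __name__ == "__main__":
    print("accept" if validate(generated_sort) else "reject")
```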
Context engineering > prompt engineering
The evolution from prompt engineering to context engineering is a natural one.
- Prompt engineering is what you need to do well when having a conversation with an LLM on a general topic
- Context engineering is what you need to do well when having multiple interactions with AI agents
For a while everyone thought prompt engineering was going to be super important, and that would be the big job for people using these models. I disagreed, pointing out that as the models got better and better, it would become less critical. That has been the case. While prompt engineering is still important (especially for repeat tasks) there aren’t many job openings for “prompt engineers.”
But having AI agents doing things is a much more complicated situation than having a conversation with an LLM. As we move into a world of AI agents, specifying and managing the full context (i.e., context engineering: the prompt, knowledge, personalization, history, memory…and even tools and other agents) will be critical for their success. I don’t think “context engineer” will become a job by itself either, but it will become a key component of the job for people working with sophisticated AI agents.


