Humans Need Not Apply

A company called Firecrawl has some job openings, but instead of hiring people, they want to hire AI agents. They first tried this in February; at the time, the CEO said it was as much real as it was publicity, and that they wanted to hire an AI agent – not a human – for $15k a year. None of the AI agents that – ahem – “applied” back then were worth hiring. Now they seem more serious: they’re looking to fill three positions instead of one and are willing to pay a salary (to the AI agent’s creator) of $60k/year. The openings are for a media blogger that posts about Firecrawl and learns to grow engagement, a customer support engineer to handle support issues, and a junior developer. It will be interesting to see how this goes!

Speaking of software development…

I don’t think you need to worry about your job yet, but the job market that generative AI is most clearly shaking up is coding. OpenAI is planning to acquire Windsurf for a cool $3B (yes, with a B). What is Windsurf, you ask? Other than my favorite sport, it’s an AI-assisted software development environment; in other words, it’s a program that helps programmers write code with the aid of AI. Why is this a big deal? Well, the company is only four years old and has fewer than 200 employees (that’s $15 million per employee)!

AI is definitely changing the landscape of software development (more on this later)…and separately from the acquisition, OpenAI unveiled Codex (in research preview), a new AI coding agent that doesn’t just produce code, but can work on many software engineering tasks in parallel, such as writing features, fixing bugs, and answering questions about a codebase.

Model Releases? No, Delays.

Meta announced it is delaying the release of its largest model, Llama 4 Behemoth (again). This is further evidence that the gains that come from scaling pre-training are starting to diminish. But there is still ample opportunity to improve on other dimensions, and more importantly, there is still a lot to figure out about how to best use the models we already have.

Solving Problems Better than Humans

Google made an important announcement this week as they continue their effort to improve scientific discovery. They combined several techniques to build an AI that tackles difficult problems on its own and, arguably, solves some of them better than humans: it was able to improve on what was essentially the best work of humanity. Called AlphaEvolve, here’s what it does:

  1. Come up with a bunch of ideas about how to approach a problem
  2. Try them out and see how they do
  3. Choose the ones that get the best results
  4. Create variations or combinations of the most promising approaches
  5. Go back to step 2 with those new ideas and repeat until you find something interesting

This is called an evolutionary approach because with each iteration you can try a different method (as compared to reinforcement learning, where each iteration uses the same approach). Why is this a big deal? For several reasons, but one of the biggest is that they were able to automate this process with AI, and in particular to automate step 3. Once the process is automated, it’s possible to explore many more options, much faster, than humans could.
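To make this concrete, here is a minimal Python sketch of the propose/evaluate/select/vary loop described above. Everything in it is an illustrative placeholder of my own: the candidate strings, the `evaluate` and `propose_variations` functions, and the random scoring are assumptions, not AlphaEvolve’s actual implementation (in the real system an LLM proposes program variations and automated evaluators score them).

```python
import random

def evaluate(candidate: str) -> float:
    # Placeholder scoring: in AlphaEvolve this step is an automated
    # evaluator that runs the candidate program and measures a
    # verifiable metric (e.g. speed or correctness).
    return random.random()

def propose_variations(parents: list[str], n: int) -> list[str]:
    # Placeholder generation: in AlphaEvolve an LLM proposes edits to
    # and combinations of the most promising programs.
    return [f"{random.choice(parents)}-variant{i}" for i in range(n)]

def evolve(initial_ideas: list[str], generations: int = 5, keep: int = 3) -> str:
    population = initial_ideas
    best, best_score = None, float("-inf")
    for _ in range(generations):
        # Steps 2-3: try every candidate and keep the top performers.
        scored = sorted(((evaluate(c), c) for c in population), reverse=True)
        survivors = [c for _, c in scored[:keep]]
        if scored[0][0] > best_score:
            best_score, best = scored[0]
        # Steps 4-5: create variations of the survivors and repeat.
        population = survivors + propose_variations(survivors, n=10)
    return best

print(evolve(["idea-A", "idea-B", "idea-C"]))
```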

They were able to make incremental improvements to the current best approaches (designed by humans, of course), improvements that will yield some material benefits. For instance, they found a better way to perform a particular mathematical calculation common in computers, improving on an approach that humans had been unable to beat for 56 years!

Most of their improvements were on mathematical problems where, importantly, it’s possible to know if the answer is right or wrong. So what does this have to do with everyday work? Maybe not much, at least not yet. But if the same technique can be applied to problems where there isn’t a “right” or “wrong” answer, but an AI can still judge its quality, then we may have a situation where a broader range of jobs are at risk.
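If that day comes, the only piece that has to change in a loop like the one sketched above is the evaluator. A hypothetical version might ask a model to grade quality instead of checking a verifiable answer; `ask_llm` below is a made-up stand-in, not a real API:

```python
def ask_llm(prompt: str) -> str:
    # Stand-in for an actual model call; wire this up to a real LLM client.
    return "7"

def evaluate_with_judge(candidate: str) -> float:
    # Instead of checking a verifiably right/wrong answer, ask an AI judge
    # to score the candidate's quality.
    reply = ask_llm(
        "Rate the following draft from 0 to 10 for clarity and usefulness. "
        "Reply with just the number.\n\n" + candidate
    )
    return float(reply.strip())

print(evaluate_with_judge("A short blog post about windsurfing."))
```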


My take on why it matters, particularly for generative AI in the workplace


There has been a lot of talk about automation and AI replacing jobs. Until a few years ago, we were concerned that blue-collar workers would be hit the hardest. We were wrong. With generative AI, white-collar workers are in the crosshairs. The first jobs to be impacted are coding and customer service. So what’s happening with coding?

“95% of [code] is going to be AI-generated”
– Kevin Scott, Microsoft CTO

In October, Google said more than 25% of their code was written by AI, and in April they said it’s now “well over 30%.” Microsoft also said in April that 20%-30% of their code was written by AI, with some making projections of 50% in a year and even 95% by 2030. This isn’t necessarily translating to job loss – the world has a seemingly insatiable appetite for software, so for now this is making people more productive and should increase the speed and quality of programs and apps.

But what if AI is writing 95% of it?

That’s a question that has yet to be answered. Right now, AI is very much assisting programmers, not replacing them, and it’s unclear how much of the job can be handed over to AI completely. OpenAI’s purchase of Windsurf is interesting because it shows they’re betting that this trend will continue, and that getting there won’t take just a better LLM; it will take a better coding environment that uses better LLMs.

More significant are the possible implications of Google’s AlphaEvolve. It’s one of the first signs that AI may be able to outperform humans at work people are actually paid to do. Sure, AlphaFold already predicted the structures of nearly every known protein and won Demis Hassabis and John Jumper the Nobel Prize in Chemistry last year…but in terms of human jobs, it had no real impact. It just meant that graduate students could research more interesting topics.

But Will It Generalize?

But if AlphaEvolve generalizes (and it seems that it should), then a lot of existing scientific research currently done by humans might be performed better by AI (as in, faster and cheaper). Then if it generalizes to questions that don’t have right or wrong answers, derivatives of AlphaEvolve may come after a lot of other jobs. As with programming, it’s likely to be an AI-assisted paradigm for a while. AI can’t yet implement the improved math solution that it found; it’s up to humans to do that. But in some areas, AI assistance will become AI replacement (at least in an economic sense; AI will never replace our humanity).

With the rapid pace of AI development, we won’t be able to predict where this will happen early enough to plan for it. For now, learn to use AI as part of your job, and pay very close attention to the parts that it doesn’t do well. That’s where you can’t be replaced.
