Phase Shift AI, Tech, Consciousness, Ethics, etc.

Ego's fight with AI

Mark* is a maintainer of a popular open-source software project. On a cold February morning, black coffee in hand, he settled at his desk to review the latest contributions from the community. The second one caught his eye. The code was clean, well-structured, and ready to merge. But something about the contributor’s profile made him pause. They were not human.

We’ve been talking about AI for decades, watching sci-fi movies since HAL 9000 and asking ourselves if we would live long enough to see its arrival. And here it is. If I had to draw parallels, the AI Revolution is more akin to the Industrial Revolution than to the Internet and smartphones. It is not only changing how we access information; it is touching every aspect of our lives, reshaping them at their core.


Mark rejected the request.

Can we blame him, though? Most of us would have done the same.

Past revolutions weren’t easy. Change has never been easy: people have lost their jobs, the way society functions has been redefined, and sometimes it has taken generations for humanity to adapt. Yet there seems to be something different this time, a kind of unease we can’t quite put into words.

Those revolutions threatened livelihoods. This one threatens something deeper, the very thing we’ve relied on for 300,000 years to stay at the top of the food chain:

Our intelligence.


I am not talking about some future AGI (Artificial General Intelligence) or superintelligence here, no sci-fi. What we have as of today, although still in its infancy, is threatening to our egos, forcing us to be sceptical and ask, “what’s going on here?” This is true for most of us who are technically inclined, even when we somewhat understand how LLMs work.

Somewhat?

We engineered aeroplanes and fully understand aerodynamics. LLMs are more like breeding a dog for specific traits. You control the inputs (data, architecture, training), you get a result that mostly does what you want, but the internal process that produces the behaviour is organic and messy. You shape it more than you design it. It is this black-box phenomenon that triggers our defences.


The human ego doesn’t like this competition at all. And it fights:

Denial - It’s just hype.

Is it, though? Sure, there is a bubble, and it is going to burst, but the solid core within it is quite close to the surface compared to, say, the dot-com bubble. This month, Twitter co-founder Jack Dorsey announced that his technology firm Block was laying off close to half its workforce because AI “fundamentally changes what it means to build and run a company.” That’s 4,000 people let go out of 10,000.

The deniers strongly believe it will pass if they wait it out. Five years on from the first usable model, we don’t just get more of the same. Every version, every minor version, brings a leap we weren’t expecting.

Trivialisation - Just autocomplete on steroids.

Yes, at its core, an LLM calculates the next token. And a brain is just neurons firing. Both descriptions are accurate, and both miss the point entirely. Neither of them accounts for the reasoning that emerges.
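
The “calculates the next token” loop itself is simple enough to sketch. Here is a toy illustration (the probability table is hand-written and entirely invented for this example, standing in for the billions of learned weights in a real model; the loop is the same autoregressive idea, nothing more):

```python
# Toy "language model": a hand-written table of next-token probabilities.
# A real LLM learns these relationships from data; this table is made up
# purely for illustration.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.2, "ran": 0.8},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(token, max_tokens=5):
    """Repeatedly pick the most likely next token (greedy decoding)."""
    out = [token]
    for _ in range(max_tokens):
        options = NEXT.get(out[-1])
        if not options:
            break
        out.append(max(options, key=options.get))
    return " ".join(out)

print(generate("the"))  # → "the cat sat down"
```

Everything interesting happens inside the probabilities, not the loop. Describing the system by the loop alone is exactly the trivialisation trap.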

You hear a seasoned software engineer say, “I had a look; a bit hit and miss, good on some use cases.” That was indeed the case a year ago: AI was just a helper within the IDE, patiently watching us code and offering to autocomplete, or at most write the next function for us. A year on, AI can autonomously develop a complete software suite, error-free, on the first go.

Exceptionalism - OK, it’s good, but it’ll never replace what humans do.

At the beginning of this month, in February 2026, $285 billion was wiped off the valuation of legal firms when the markets realised that Claude Cowork could draft legal documents successfully. The concerns quickly spread to software development firms, pushing the losses to near $1 trillion. The implication was clear: if AI could rebuild what already existed, faster and cheaper, what exactly were those firms going to sell?


These responses are natural. They are human. But each of them has a shelf life. At some point, the ego has to stop fighting and start adapting.

OK. What’s the plan?

Stay Curious

Instead of asking “will this replace me?” we can ask “what can this do that it couldn’t last month?” We can try it on a problem we find difficult: feed it the plan on the back of an envelope, or any context that only we understand, and see what it does with it. After a recent workshop, we took a picture of the whiteboard covered in Post-it notes: a single picture, with small, barely legible handwriting. Five minutes later, we had a clean summary and a list of actions, instead of a picture left in a Slack channel to rot.

The ones pulling ahead today aren’t AI enthusiasts. They’re domain experts who stayed curious.

Understand the Curve

Humans are bad at exponential thinking. In early 2020, most of us heard “a few hundred cases in Wuhan” and carried on with our lives. Two weeks later, supermarket shelves were bare. We couldn’t feel the doubling. The same blind spot applies to AI. We evaluate it based on what it could do the last time we used it, and assume progress is gradual. It isn’t. A scenario that fell flat six months ago is likely a trivial ask today. A project scoped at twelve months today could be completed in a fraction of the time and cost if we delayed the start by six months.
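
The blind spot is easier to feel with numbers. A minimal sketch, assuming a purely hypothetical six-month doubling in capability (the rate is invented for illustration, not a measured figure):

```python
# Illustration only: assume capability doubles every six-month period.
def relative_capability(months, doubling_months=6):
    """Capability relative to today's baseline after `months` months."""
    return 2 ** (months // doubling_months)

# The gap compounds: waiting doesn't feel linear, because it isn't.
for months in (0, 6, 12, 18):
    print(f"after {months:2d} months: {relative_capability(months)}x baseline")
```

Linear intuition says eighteen months of progress is three times six months of progress; the doubling says it is eight times the baseline. That mismatch is the whole point.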

Just when I am about to get my head around how the new reasoning capabilities can help, a major breakthrough in coding happens. We are either at the inflection point now, or very close to it. We’re approaching the point where AI develops AI. When that happens, the pace we’re struggling with now will look slow. If we’re not already running, we won’t get the chance to sprint.

Look Past the Noise

Another day, another blog post explaining in detail how a dealer’s AI bot sold a brand-new car for $1. Written in 2026, about an incident from 2023. Remember the doubling. Someone finds the one thing the AI got wrong and writes a post about how badly it performs overall, completely ignoring the good. In the summer of 2025, I was struggling with AI getting stuck in loops while troubleshooting software issues. Six months later, it is no longer a problem. The flaw today’s post complains about will probably be gone by the time the next one is published.

As technology professionals, we are capable of filtering the noise and focusing on what works.


A month later, on a warmer spring morning, Mark pours another black coffee. The request is still closed. The code hasn’t changed. But everything around it has. Here comes another request from an AI author. Mark has a decision to make. So do we.

* a fictional character