Type: Article -> Category: AI Philosophy

Corporate executives reviewing AI data contrasted with a traditional library, symbolising human responsibility in technological change.

AI Doesn’t Pull the Trigger

Technology doesn’t decide the future. Human behaviour does.

Last Updated: 23rd February 2026

Author: Nick Smith, with the help of ChatGPT


For months now, the internet has been flooded with videos predicting “30 Jobs AI Will Erase.”

Librarians. Accountants. Mathematicians. Writers.

The lists are dramatic. The thumbnails are urgent. The tone is final, as if a machine somewhere had already made the decision.

But AI doesn’t pull the trigger.

People do.

And if jobs disappear, it will not be because a machine woke up and chose to remove them. It will be because someone in a boardroom, somewhere, decided that replacing people with systems was more profitable, more efficient, or more convenient.

Technology does not make moral decisions. Humans do.


The Convenience of Blame

There is something strangely comforting about blaming technology.

When layoffs are announced under the banner of “AI efficiency,” responsibility shifts. It sounds inevitable. Unstoppable. Almost natural.

But AI is a tool.

It does not demand redundancies. It does not issue termination letters. It does not set corporate strategy.

When a company says, “AI has made these roles obsolete,” what they often mean is, “We can now reduce payroll.”

That may be economically rational. It may even be strategically necessary in some cases.

But it is still a human decision.

Framing it as technological destiny removes accountability. And that is where the danger begins.


The Librarian Test

One video I watched confidently declared that librarians will soon be redundant.

After all, everything is online.

But any civilisation that closes its libraries because “Google exists” is a short-term civilisation.

Digital knowledge requires:

  • Electricity
  • Infrastructure
  • Connectivity
  • Corporate platforms
  • Functional hardware

A physical book requires only light and literacy.

Libraries are not outdated buildings filled with paper. They are decentralised vaults of knowledge. They are resilience made visible.

If a major disaster occurred and power grids failed, you would not be looking up how to purify water on a search engine.

You would be looking for a book.

Progress that removes resilience is not progress. It is optimisation without foresight.


The Album and the Hard Drive

Recently, in my role as a carer, I was sitting with a client who was showing me photo albums from their life.

The albums dated back to 1948.

The photographs were still clear. The pages intact. The memories instantly accessible. All that was required to view them was the ability to open the cover and a little light to see.

Meanwhile, I have an external hard drive containing thousands of photos and documents. It failed in under ten years.

Yes, I have backups. Yes, with specialist tools the data might be recoverable.

But look at the complexity involved.

To open the album:

  • Hands.
  • Light.
  • Eyes.

To recover the drive:

  • Compatible hardware.
  • Power.
  • Technical knowledge.
  • Possibly expensive recovery equipment.
  • Time.

We call this advancement.

But from a resilience perspective, it is fragility.


Civilisation Used to Write in Stone

The oldest recorded texts were carved into stone and clay.

They have survived:

  • Floods
  • Wars
  • Regime changes
  • Collapsing empires
  • Thousands of years of environmental exposure

They require no software update. No server maintenance. No subscription plan.

As we digitise everything, we gain speed, scale and convenience. But we also introduce layers of dependency.

“The cloud” sounds abstract and permanent.

It is not.

It is data centres.
It is power grids.
It is undersea cables.
It is corporate ownership.
It is geopolitical stability.

When your entire civilisation exists in the cloud and the power goes out, what remains?

As we become more advanced, we also become more vulnerable.

That is the paradox rarely discussed.


Hallucination, Mistakes and the Myth of Perfection

One of the more profound observations about AI is that as it becomes more sophisticated, it also “hallucinates.” It gets things wrong.

But so do humans.

We just call it making mistakes.

There is something deeply revealing in that parallel. True intelligence, human or artificial, is not perfect.

Perfection implies stasis. And in a living system, stasis is death. Change requires imperfection.

The problem is not that AI makes errors.

The problem is scale.

If one human accountant makes a mistake, it affects a client.

If an AI system makes a mistake and is blindly trusted, that error can replicate across thousands of records instantly.

The solution is not to remove accountants.

It is to ensure accountants evolve into oversight roles: verifying, interpreting and challenging AI outputs.

AI does not remove the need for expertise.

It increases the need for it.

Blind delegation is not efficiency. It is abdication.


Where Are the Productivity Gains?

Despite enormous hype, several studies have struggled to show dramatic productivity gains from AI adoption.

That could mean:

  • The technology is still too new.
  • Workflows have not yet adapted.
  • Businesses are using it superficially.
  • Or that productivity itself is misunderstood.

Electricity did not instantly increase factory output when it was introduced. Processes had to be redesigned around it.

AI may follow a similar path.

Or perhaps value creation is more human than we admit, rooted in judgement, trust and lived experience rather than raw information processing.

What is clear is that the doomsday narrative oversimplifies a complex transition.

Jobs do not simply vanish. They morph.

Some will go. Others will emerge.

That has been true of every major technological shift in history.


Optimisation vs Resilience

Nature builds redundancy into survival.

Two lungs.
Two kidneys.
Two hemispheres of the brain.

Redundancy looks inefficient, until it saves your life.

Modern systems, however, optimise for cost reduction.

Libraries are expensive.
Human oversight is expensive.
Manual processes are expensive.
Physical archives are expensive.

But they are also buffers.

When we remove redundancy in the name of efficiency, we increase fragility.

And fragile systems break suddenly.


The Real Risk

The risk is not that AI becomes intelligent.

The risk is that humans become complacent.

The risk is believing that convenience equals wisdom.
That efficiency equals progress.
That automation equals inevitability.

AI does not pull the trigger.

People do.

AI will disrupt the workplace. It already is.

But if society fractures, it will not be because machines thought too much.

It will be because humans stopped thinking enough.

Technology amplifies intention.

If we build resilience alongside intelligence, AI becomes a powerful tool.

If we sacrifice resilience for short-term gain, then the damage will not be artificial.

It will be entirely human.



