<!--
.. title: "AI" is a bad tool
.. slug: bad-tool
.. date: 2026-05-05 04:02:08 UTC+02:00
.. tags: computing, ai
.. category: computing
.. link:
.. description:
.. type: text
-->

The tools of an artisan have a specific form to perform a specific function.
The same applies to the best tools in programming.
This is why [the Unix philosophy](https://en.wikipedia.org/wiki/Unix_philosophy) favors composition of tools:

*In 1978, Doug McIlroy documented a set of principles encapsulating the "characteristic style" that had emerged among Unix system users and developers:*

*1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".*

*(...)*

...and then Unix makes it easy to combine tools by piping the output of one tool into the input of the next.

This approach maximizes reuse of code and of tools across the most diverse circumstances.
With a little "glue code", you can combine several tools, each of which does one job well.
It would be much harder to combine tools that were not developed with that purpose in mind.
So Unix tools are designed to work together with other tools.
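As a concrete illustration of that composition, here is a version of the classic word-frequency pipeline (the input text is just a placeholder):

```shell
# Each program does one thing well; pipes glue them together
# into a word-frequency counter.
printf 'The cat the dog the bird\n' |
  tr -cs '[:alpha:]' '\n' |    # split into one word per line
  tr '[:upper:]' '[:lower:]' | # normalize case
  sort |                       # group identical words together
  uniq -c |                    # count each group
  sort -rn                     # most frequent words first
```

None of these tools knows anything about word frequencies; the job emerges from the composition.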

But right now, the entire software industry is gripped by FOMO over "AI".
Everyone thinks "AI is inevitably happening, let's be ready".
That statement is false, but, unfortunately, realizing how false it is requires knowing a lot of facts, which have been the focus of previous blog posts.
"AI" is not currently getting better and cannot get better unless a revolution happens in the architecture.

(Since AI does not exist, LLMs are better described as stochastic parrots.
They do not think, they just do pattern matching on their training data.)

A parrot, as a tool, is the opposite of a Unix tool, yet it is sold as a replacement for Unix tools:

- It appears to do everything, not one thing.
- It does everything badly, not one thing well.
- It's unable to do many things, but you don't know that until you try them, because the assumption is that it is magic and understands what you want.
- Its behavior is somewhat random, not deterministic.
- Its UI is a text prompt, which is, at a minimum, open to interpretation each time it runs and, probably, misunderstood in different ways each time.
- It insists on doing everything itself (writing the entire code, thinking up the whole architecture, providing the complete solution too early, making hundreds of assumptions about the actual problem the user is trying to solve), yet it is unable to respect all the constraints you set in your prompts.
- FOMO (as well as the eagerness to fire all employees) causes companies to insist on and push the use of parrots for ~80% of problems, but parrots are only appropriate for ~2% of problems.
- The "AI" companies promise eventually the parrot will do everything, therefore replacing the need for Unix tools that can be easily combined.
- Inasmuch as its UI is only text, it turns you into a foreman or "slave owner" of sorts. You are no longer doing the thing with your tools; you are just yelling for "someone else" to do the thing. In other words, your role is no longer operation, it is direction.

In short:

- People who do not use parrots daily... assume the hype is true, that AGI is just around the corner, and believe people will become directors of the AI, which will be the one using the tools.
- People who do use parrots daily... cope with their spectacular incompetence by treating the parrot as just another tool – but it is a bad tool, for all the reasons above.

The first group expects parrots to do more and more of the work; the second has the parrot doing about 2% of the work until something radical changes.
