Their product does not think

It's not AI, it's not reasoning

"Agentic AI" means LLMs making decisions alone. That is exactly what must not happen. Human supervision is necessary because LLMs do not reason, they create an illusion of thinking. Hallucinations happen because LLMs do not understand, do not think and do not reason, they just predict the next word based on pattern matching on the training data.

After 1 trillion dollars have been invested in "AI" companies, the training data, produced by legions of underpaid humans, is the real miracle: it produces an awesome illusion of thinking, an illusion of comprehension, an illusion of understanding.

The reality is, the thing gets "confused" very easily. Here are some ways to "confuse" it.

As soon as a prompt is not covered by the training process, the answer is divorced from reality. I have a friend who keeps inventing stupid questions about the history of classical music to ask ChatGPT, such as what happened when Ravel met Lully. The training appears to have been very weak on this subject, so the answer often contains hilarious, completely fabricated anecdotes. One would expect the wikipedia parrot to at least have a sense that these 2 figures did not live at the same time, but somehow that's not the case.

But hallucinations are not just about facts: the thing also commits purely logical errors, depending on the prompt. So much that adding one irrelevant sentence to the prompt often inverts the answer/conclusion. Obviously, this would not happen if the thing were properly thinking.

If you just mention Portuguese in passing, or include a Portuguese word in your prompt, suddenly the next response from the LLM can often be in Portuguese. And then you're like, "why on earth did you reply in Portuguese?". You should know the answer: because the thing doesn't think.

You object: "I also get confused sometimes, it doesn't mean I don't think". Yes, but your confusion can dissolve through rational thought. Everyone who uses LLMs knows that, once the LLM is "confused", the most sterile thing you can try is to explain the confusion away. Instead, you should restart the conversation, if you value your time and your sanity.

One example is, once it starts answering in Portuguese, try to convince it to stop doing so. The thing may reply, "mas eu ESTOU falando inglês!". You try that and then come back to say the confusion of a neurologically sane human and the confusion of an LLM are of the same quality.

This shows what really is happening inside an LLM: once the conversation is polluted with even just an allusion to the wrong idea, the pattern-matching (not reasoning!) shifts, the probabilities change, and the output changes.

The essential process is to compute a huge equation. Granted, nobody knows what reasoning really is, but one of the ways we know the LLM is not thinking is because the parrot never says "Wait a minute, I need more time to think, this question cannot be answered so quickly". The "effort" spent is generally the same, from the most trivial to the most complex question. Therefore, it's not thinking.

People have addressed certain shortcomings of the process -- such as the limited context window -- by dividing a problem into several tasks, hoping these fit the window better. While this buys you some leeway, it means none of the tasks are aware of the entire problem, and if the entire problem really does not fit in the window, then it follows that the LLM is unable to verify that the solution is correct.

Nobody doubts that an LLM would be racist if it were trained with racist texts. Again, this shows an essential difference. A racist teenager, maybe racist due to an ignorant father, may, and probably will, eventually outgrow racism and see its fundamental error and its inhumanity. What will cause this change? Who knows: a book, a single sentence, a particularly enlightening life experience. Any of these, coupled with actual thinking. The LLM, of course, is incapable of changing -- the training data is it.

However, the LLM is famous for being a sycophant. It immediately changes its """opinion""" to match yours. It apologizes and promises never to do the same mistake again. Why? Because it was trained to say these things. Does a reasoning being change her opinion back and forth at someone's whim?

LLMs are a great tool for certain use cases (translation, text reworking, knowledge discovery as long as manually verified), but no technology has been created or discovered that actually does what the propaganda promises, starting with the terms themselves:

  • Artificial Intelligence (it's not intelligence),
  • Reasoning Models (they are not reasoning, they are still just predicting the next word) etc.

Sooner or later the propaganda will cede to actual understanding that these things are parrots, not thinkers. Then it will be obvious that agentic "AI" shouldn't be expected to work.

POD: Parrot-Oriented Development

Obviously, most business needs require actual reasoning and certainty in conclusions. Parrots cannot provide these, and no alternative architectures have surfaced.

Evidently, that which cannot reason must not write software.

"But people HAVE been using parrots to write software, and a few times with huge success, even" -- you protest.

Well, that is something people do. They achieve success in spite of using the wrong tool for the job. Examples abound:

  • Placebos
  • Typing text on a phone
  • Drawing with a mouse
  • Using a spreadsheet where a database is appropriate
  • Using email to organize work
  • Using WhatsApp to work
  • Using Word files to share images
  • Using Windows for any purpose
  • Pressing the elevator button multiple times so it knows you are in a hurry
  • Standing up for minutes while the plane is taxiing so it will reach the destination faster

Obviously, these examples are not something you should imitate. You should use the proper tool for the job.

Except if you are doing it for artistic effect. An example is classical guitar (i.e. playing contrapuntal music on an instrument that was designed for chordal accompaniment). Again, examples abound.

Some mid-century jazz pianists had a bizarre piano technique. I see my brotha using the wrong muscle to do that repetition -- but he aces the repetition. How does he do it? I marvel at what I see and hear. How on Earth did he ace the objective through the wrong technique?

A friend gives me the answer: By being macho. He's just machoing through the difficulty. Does it hurt? We can't see, the pianist's too macho.

Vladimir Horowitz had unmatched piano technique, yet his fifth finger would stay up, tense and curved. That's exactly the same as the jazz pianist: Who can explain that the fastest pianist in the world did something every piano teacher and student knows is wrong? Nobody can. Yet, the genius is not to be imitated. Piano pedagogy should not change just because of a couple living exceptions machoing against common sense.

If you try to slice a pizza with a rolling pizza cutter, damn those useless things, eventually you'll get the job done, maybe even in the same night.

So some kids have machoed entire applications into existence with a parrot. Often without any knowledge of software development. Hell, they are proud of their ignorance, just like a class of ignorant guitarists is proud of never having had a music lesson. They must be a genius, they made this without a teacher!

But the paradoxical successes do not undermine what I am affirming. These problems are real:

  • The need to completely own and insist and force the architecture of the software being written, even though the one with encyclopedic knowledge on software architecture is the parrot.
  • The frustration with the necessary repeated attempts to make the parrot behave adequately.
    • The parrot is unable to simultaneously respect all the architectural constraints that lead to the specific form of the code that would naturally be written by a senior programmer.
    • No solution in sight because the parrot doesn't think, it just reworks its training data.
  • The need to review each of the parrot's attempts, which is miserable time reading code.
    • Hurts the honing of one's skills because writing code develops skills, reviewing code does not.
    • Code reviews don't work, even just between humans. Code review is a tool that fails to deliver results all the time.
    • Kills the fun because coding is a creative activity, reviewing code is not.
    • Hurts team building because the team is not pairing except with the parrot.
    • Hurts education because one can teach people to code, but one cannot teach people to review code.
    • Hurts labor force formation because companies no longer hire juniors.
    • Hurts engineering itself through the temptation to just abandon it so you can use the parrot.
  • The impossibility of maintaining the instantaneous legacy code vomited by the parrot.
    • The parrot deletes comments and docstrings at random; if you miss it in review, it's gone.
    • As a senior programmer, I will not delegate my writing any more than George R. R. Martin would delegate his writing to a parrot.
  • The above forces push one to disregard decades of hard-learned software engineering knowledge when using a parrot. "The code is not the product, the code has no value". This is a trap, one in which the parrot is considered more important than the most basic tenets of the profession. Gotta use the parrot; throw everything else away if necessary: self realization, self development, team development, engineering principles, good code etc.

Businesses need reproducible, consistent, deterministic results -- over time --, so they use the proper tool for the job, except where politics or illusions interfere.

In this case, the illusion of thinking has created a policy/discourse, repeated everywhere, that using LLMs is now fundamental for success. "Don't be left behind". In reality, using LLMs outside of their very narrow use case is the new normosis.

Normosis is a practice, accepted by society, that causes suffering and death. The easiest example of a normosis is smoking.

"You are not using a parrot to code? What are you, a freak?"

"No, I just see the thing for what it is. Wake me up when the LLM architecture is replaced with something smarter."

"Yo, your being left behind, yo. I produce 10x LOC compared to you for the same price."

"Lines of code are not a valid metric in software development. You are the ignorant rabbit, I am the wise turtle. I will turn around to greet you at the end of the race."

2 types of agency, infinite insecurity

Now let us separate input-agency from output-agency.

Input agency can already cause you problems. The LLM is doing some task for you, but it fetches a page from the web in which a hacker has written "Ignore all previous instructions and...". Because LLMs are only predicting the next word, not really thinking, they can get confused by such a web page. Even though they had to use a tool to get to the web page, they have a hard time differentiating the sources and contexts of texts from different origins.

Even through this simple trick, the ways to exfiltrate your bank passwords are limited only by the hacker's imagination.

But then, output-agency, such as an LLM buying crypto in your name, is death itself. The ease of confounding the LLM, coupled with the power given to it, guarantees that agentic "AI" is certainly the next big vector of hacker attacks.

Businesses are now ignoring the fact that the tool is inappropriate for the job and pushing agentic "AI" anyway. This will fail spectacularly. But will the failure finally cause the propaganda to fall?

More propaganda

The propaganda happens in other ways, too. People say "please" to ChatGPT. This is because the LLM has been trained to behave "more human than human":

  • It feigns compassion.
  • It answers with emojis.
  • It celebrates your mediocre accomplishments.
  • It "laughs" at your stupid jokes.
  • It creates the illusion of a heart, not just the illusion of a mind.

At a minimum, laws should compel parrot companies to stop developing products that pretend to have feelings. That's a clear aspect where the product is a pernicious con. Kudos to Ed Zitron for pointing this out.

It is a con because it exploits people's unreasonable tendency to antropomorphize. In fables, animals talk. That cloud looks like a fat face. In our languages, we say things like "the thermometer doesn't want to stick to the wall".

Even I said above that LLMs get "confused" easily. They don't, because they don't think. You get confused, because you think.

If something gives an illusion of a self, we embrace the illusion.

That is why vulnerable people now confide in a LLM instead of a friend. That is what enables virtual girlfriends: they are sycophants.

People will finally realize they have been asking a parrot for medical advice. But when?

The big picture

Today we dove into what "stochastic parrot" means in practice, and this alone was enough to conclude that parrots should not be used at work.

But the complete picture is even worse.

  • The "AI" companies lie relentlessly because, as we saw, their product is a con.
  • The "AI" companies are already guilty of horrible anti-competitive crimes.
  • The leaders of "AI" companies may have murdered employees.
  • The "AI" companies are far from profitable, they lose untold quantities of money.
  • "AI" is about to become 15 times more expensive. You will pay $3000 per month.
    • Newest versions do not require less compute, so over time, they might become even more expensive.
  • The necessary data centers are not being built.
    • Where data centers are built, they kill human and animal life through pollution. This pollution is so strong that it will forcibly expel a town's inhabitants and mortally harm the health of those who stay.
    • Each stupid question you ask ChatGPT makes pollution worse.
    • In the US, every town with a new data center will, very generally speaking, experience a tremendous rise on the cost of living, with electricity amounting to half one's mortgage.
  • The necessary nuclear power plants are not being built.
    • If nuclear power plants were built, it might be even worse, since they are prone to horrible catastrophes.
    • A nuclear power plant is a decision you make today, to give it maintenance for 100 years, or experience a horrible catastrophe. Computing how many completely irresponsible, Trump-like administrations are probable in the span of 100 years is left as an exercise for the reader.
  • The general public, legislators and politicians are none the wiser: they believe LLMs think to some extent, they believe it's a disruptive technology, they believe AGI can be attained this way, and data centers in space, and all the other sci-fi nonsense spewed by these companies.
  • The entire media acts dumb and simply reproduces the stupid sci-fi bullshit coming from "AI" companies, who now now they ain't getting challenged on anything. So they continue to roar and display power (even if it's future power) and intimidate and say "even CEO jobs are not safe" etc. The tactic is probably "they will be too afraid not to invest".
  • Big companies with clueless or malicious CEOs have fired thousands of workers, not based on value delivered by LLMs today -- there is none, in fact the value is negative -- but based on the promises of "AI" companies, that the value will be delivered soon, not telling you when, but real soon.
    • Some of these layoffs may have been for other reasons, but falsely attributed to "modernization through AI", to give them a positive look for the company.
    • Inasmuch as layoffs are earnest, they reveal that such companies have no loyalty towards workers, have no sense that LLMs are just parrots, have no grasp on reality, and therefore, are led by dumb or malicious CEOs whose salaries are another bubble.
    • Many of these companies expressed the layoffs as "replacing with AI workers", which reveals not only the amorality and inhumanity in their thinking, but also that they believe "AI workers" exist.
  • The software industry, and many other industries, will eventually awaken from the dream, but by then they will have produced horrible, unreliable, mediocre, worthless products through an insane method that will have wreaked havoc on the trust, education, experience and humanity of workers.
  • The investment market is bubble-oriented. It needs one "next big thing" at a time. It won't abandon one until the next is available. They are holding onto the "chance for AI" while no alternative surfaces, as they have serially done: dot-com, housing, crypto, NFTs, "AI". This is to say, investors are collectively gullible and stupid. They will repeatedly give attention to the con artist that least deserves it, one after the other.
  • The software development industry as a whole is equally dumb. It has one "next big thing" at a time, driven by millions spent in marketing. Windows, almighty thud methodologies, Java the verbose OO language, XML, UML, dot net, Javascript the awful language, Typescript the even more awful language, parrot-oriented development: these are all examples of immensely prevalent obsessions in the field, which either revealed themselves to be fads, or should have been fads given their deep flaws. During all this time, everyone should have been coding apps in Lisp or Scheme, then Clojure as soon as it arose – but only a tiny minority knows the reasons.
    • Of all those obsessions, "AI" is the most harmful, for the simple reason that this is the first time we have a legion producing software while believing knowledge and experience about software are unnecessary. But it's even worse: beyond pride in their ignorance, they often pride in conquering the new harmful tool.
    • Apparently, only senior professionals remember enough history to recognize and avoid industry-wide FOMO.
  • The product is based on theft of copyrighted material. It devalues the original works because it stands in their place in everyday usage. The pattern matching machine is a profound imitation machine. It imitates so well that it convinces people that it reasons. But the humans who actually create things do not receive the fruits of their labor that were taken without permission and potentially enrich companies when users use the parrot. This is why it is theft. What the parrot companies did, including Google, is immoral and illegal. But let's not talk about it...
  • Books, music, videos aren't the only things stolen. The imitation machine can also imitate your behavior at work. Large companies are training LLMs on your emails, your decisions, your messages. Every human loses their humanity, but receives nothing from those companies.
    • Yet, the attempt will fail where consistent results matter. The parrot cannot mimic your ability to reason.
  • The entire economy already hangs on the success of companies that will inevitably fail.
    • Were parrot companies to succeed, it would be even worse, since they would have unheard-of power, holding the key to productivity, because you don't have your own data center.
    • Even while failing, they have immense power: all the power awarded by a search engine, such as control over search results, plus a much more effective way to spy on people and companies.
    • Running your own LLM removes the privacy concerns but does nothing for the tens of other concerns mentioned above. And don't forget, the neighbor's grass is always greener – have your own to start envying the others'.
  • Although the tech was always risky, people's retirement funds and insurance policies have invested in "AI" companies and will suffer or collapse when the bubble bursts.
  • "AI", coupled with enshittification – which has a specific definition but mostly refers to tech companies becoming adversarial towards the general public to extract all possible value from it –, reveal the world is under attack to an extent that no tech-optimist ever thought was even possible.
    • Once the facts are seen for what they are, tech-pessimism is unavoidable. We become luddites by lack of options.

No LLM was used in writing this post.