On the use of harmful tools

…So you are to manage some software creation. Then you must understand you cannot simply employ the same techniques used to manage other domains. Software creation is strange – it holds many counterintuitive truths. You will fail unless you learn these weird realities.

This series contains these posts:

  1. How to hire a development team
  2. Counterintuitive facts of software development
  3. No estimates
  4. On the use of harmful tools

Summary

Big Tech is incentivized to oversell AI’s capabilities. They're marketing it as "productivity" while gutting skilled labor and consolidating power. It's a profit-maximizing maneuver masquerading as innovation.

AI-generated code is not magic. It's pattern-matching, not reasoning. It lacks context-awareness, long-term responsibility, and often produces brittle or misleading output.

Today, Big Tech companies are peddling AI tools before they are ready for prime time. For now, AIs are hallucination machines that cannot be trusted to make decisions without supervision – yet this is exactly how we are already using them.

Big Tech makes insanely exaggerated claims, such as a 3x productivity gain just from using an AI tool. Seduced by these claims, managers accept losses in every aspect of maintaining a codebase – and then hype their own fictitious feats with AI.

But real software development is a craft. It requires learning, culture, understanding, reasoning, practice, intuition, refactoring, communication... Some of these the AI can do badly; some it cannot do at all. Not yet.

Businesses try real hard not to see that software development is an art. If they see it, they pretend to forget it. However, art has its own requirements, and one forgets these at one's own peril.

Inserting AI as the code writer yields no real increase in productivity – and worse, it requires fundamental changes in the software development process. I will show that some of these changes are, in fact, impossible for humans – though not everyone can see it.

The industry is buying the hype way too early. Developers, managers, even companies are swallowing the narrative before the technology has earned it.

So my thesis is:

Pushing programmers and writers to use smart tools such as AI is harmful and violates Agile principles.

Though it might seem to be what everyone is doing, it's the opposite of what you should be doing.

After I wrote this post, I found a Simon Sinek interview in which he says today we are obsessed with output and losing track of humanity in AI adoption.

A word about intelligence

What is intelligence?

Intelligence is establishing true relationships between things previously considered separate and unrelated. Intelligent people show you how things relate to one another. A void gets replaced by a model.

Contemporary normoses – pathologies so widespread we mistake them for normality – are always due to a lack of respect for the necessary relationships between systems. An obvious example is the lack of relationships between the economy and the environment, even though doing what the environment "demands" of us would be beneficial to all people and all generations.

Why am I talking about intelligence? Well, for the following argument, I need an intelligent reader – one that doesn't have difficulty evaluating new or infrequently made relationships.

I also need a patient reader. I know this post is too long. But fuck you, reader. Read it in 2 sittings or something. To establish relationships between things I need the space.

The argument

If you don't let your students use AI, why do you let your workers use AI?

In education, AI usage is avoided for 2 symmetrical reasons:

  • Middle school and high school students must not use AI because they must learn to write by practicing writing.
  • College-level students must not use AI because the output is not college-level.

The same reasons apply to your employees:

  • Junior devs are still learning to write software, so they must practice the damn craft of writing damn software – not have someone else do it for them.
  • Senior devs write software much better than the AI, so there isn't much gain for them.
    • But also, writing software is a craft, exactly like writing prose is a craft. A craft must be exercised, or you lose it. So a senior dev must continue to write software – or her knowledge, skill and stamina will atrophy.

Now in more detail

The plagiarism problem has always existed; it is not new. And there was never a different solution: plagiarism is not tolerated, ever.

You know a student must learn to write her own dissertation, because that is an essential life skill. You forbid AI for the student – when writing her English assignment – so she will actually practice the skill. If that is the case, why are you doing the exact opposite when it comes to people who work for you?

Every professional writer in the world tells us writing is a muscle that must be exercised every day. They force themselves to practice their craft every morning, and they know they get worse when they don't.

If a high school student must practice her writing, why do you think your worker doesn't need to do the same?

Am I talking about writers or coders? Yes.

This applies to the writing of prose and of computer programs, both. There is no difference. A programmer also knows they are worse after a vacation. It takes a few days of work to get back up to speed.

Agile is about developing the people more than the process. Improving the process is worthwhile; but we understand that developing the workers has even more value.

Agile recognized that building a great team is the single most important thing the business needs to do. Notice I didn't say hiring a great team, I said building it.

If you are not making an effort to provide the necessary environment and motivate the team to improve, then your organization is not truly Agile.

Therefore, using certain tools by default is against Agile, inasmuch as these tools hurt team building.

Now I will give examples and establish further relations with other matters.

Example 1

To hire a writer, I want to measure her grammar skills. So I ask her to correct the grammar in some text I've written. But she uses AI to do the correction. Now I cannot evaluate her knowledge, it is mixed with the AI's knowledge. Here the AI's knowledge is noise, because it is essentially free for me – I don't need to hire the writer to get AI-level results.

You certainly see the only way to evaluate the writer is to forbid usage of AI.

But then, after she is hired, why would I push her to use AI, if that means exercising her muscles less, and therefore getting worse? Do I want my worker to atrophy???

The HR department, as a system, typically complains that the job market lacks qualified workers and develops means to find and hire them. But after the expert is finally hired, her manager now expects her to be reviewing AI slop, instead of using her expert skills to write?

Can't anyone see this will result in her losing her expertise over 2 years? Or is she now supposed to study and practice her craft in her copious free time?

The artisan exists before the artist. Without an artisan you don't get an artist. You can't have a craftsman that doesn't practice her craft.

The HR department cannot find qualified workers, but hired workers must become progressively less qualified. Wonderful.

How can companies act in such a schizophrenic way? Because businesses lack intelligence, which is the ability to establish relationships between systems that appear to be separate.

Planning the obsolescence of writing itself

But even worse, businesses have been firing writers, since they figure writing is now a job for an AI. Can they really not see the ways in which AI slop lacks creativity and personality? Can they not feel the sterile sameness in everything the AI writes?

But it's even worse: Nadella is peddling a product that liberates illiterate executives from the task of reading. The AI does all the reading for him, and "summarizes". This person believes in building a world in which I carefully write an email and he doesn't even read it!

What a dystopian world, in which human communication will be mediated not just by distance and screens, but also by a summarizing idiot who misunderstands every other sentence!

Nadella peddling AI usage for managers

Nadella might discover he is a replicant himself...

And you thought I was extreme when I wrote "fuck you, reader", didn't you? Well, what about Nadella's "fuck all writers"?

"Never read anything if you value your time!" – That's his advice, and if you follow it, you are one fucked up reader!

If we remember that, in the Gold Rush, the ones who got really rich were those selling tools, Nadella is doing right by his company. But he is also doing wrong by humanity. That's the threat of a new normose – which, as we saw before, is a failure of intelligence.

This is not prejudice

I have no prejudice or hate against the tools themselves. For instance, look at a sane example:

Even after the computer finally beat man at chess, man continued to play chess. Man did not stop memorizing hundreds of openings, did not stop practicing the endgame, and did not stop playing chess, since it is a wonderful sport for the mind.

The computer changed the game forever, yes. Man now tries to learn from the computer, too. But giving up the skill and letting the computer have the chess game for itself, that's what we are incapable of doing. In fact, chess is now more popular than ever.

It's not about winning, anyway. It's about the Olympic spirit.

Get better than you are now – for you are a man.

AI currently sucks hard at creative endeavors. It is okay at reformulating existing language, improving the vocabulary used, and maintaining coherence (as long as it understands the content). But it's awful at creating literature, stories, jokes, poetry, music, art etc. But suppose AI does become good – would man give up the arts?

Of course not. Not in a million years.

Be a man. Get awesome.

Example 2 (mild)

Woody Zuill tells a story. A couple of programmers on the team were delivering code in which the architecture left something to be desired. Instead of having the code corrected, he tolerated it while giving the team ample time and incentive to study these matters. Within a couple of weeks, the devs had acquired the skills and were themselves correcting those mistakes. No feelings were hurt, the team was enthusiastic (due to learning), and Man did what (s)he is supposed to do: learn and improve.

Zuill temporarily sacrificed product quality in order to develop a happier and more capable team. That was the wise choice. Is this what most IT companies are currently doing???

Or are they saying you shall be happy vibe-coding? Are you going to be happier in your relationship to a human team, or by becoming a cyborg?

Example 3 (the kicker)

I will leave you with this example by Dave Farley, which I invite you to watch in his own words. But I will paraphrase his tale here if you prefer to read:

Dave gets hired a second time by the same company to rescue a problematic software project. Dave realizes a certain dev doesn't know how to code – the guy even avoids for loops in Java because he doesn't know how for loops work. The dev is an impostor. Dave tells the company, this guy shouldn't be working here, he is not good enough. The company answers, what do you mean, his code review skills are legendary in the company. In the end, it is discovered that the guy simply used an IBM tool for code reviews, and understood none of it.

Now imagine how much financial loss that employee was responsible for:

  • His own salary for years.
  • The damage he caused to the codebase every day, putting the project at risk.
  • The rates of the experts eventually hired to rescue the project.
  • The administrative decision-making required to hire the experts.
  • The license paid to IBM for a tool that is actually harmful to the team.

(The last bullet is mine, not Dave's.)

Exercise for the reader

Obviously, constantly using AI can be even more harmful than the code review tool; but I want you to think about something else:

From the story above, what can you conclude about the practice of code reviews? If a complete amateur can hold a job by using a tool to do code reviews, doesn't that say something about code review as a practice?

Why don't you think about it first? Next I'll tell you what I think.

The hysteria about AI

The latest tremendous mistake being committed in the IT industry is, of course, the premature acceptance of code written by AI, disregarding every other aspect of it just because it gets done fast.

The AI companies claim the AI is equivalent to a junior developer; managers, having no experience of the actual work of producing software, believe the hype and stop hiring juniors.

Then juniors never learn and grow into seniors, and soon the industry will be devoid of qualified workers.

AI cannot yet replace humans because its experience of the world is too limited.

  • It cannot be aware of the whole project in one prompt/interaction.
  • It cannot compile the code it writes.
  • It cannot execute the automated tests.
  • It cannot ask for clarifications.
  • It cannot ask for the opinions of team members.
  • It has no concept of the history of the project.
  • ...

So it's easy to see that comparing a simple tool to a human is nonsensical. Saying an AI is equivalent to a junior dev is to see the junior dev only within the task of writing some code, as if that could be done without all these other tasks.

The truth is, code written by AI is:

  • as mediocre as the code it was trained on,
  • full of hallucinations,
  • not yet compiled, not yet tested, not discussed, not aligned with overarching goals of the project,

...and by the time the developer takes care of everything that is missing – which is boring work – any productivity gains have turned into losses.

Further, it has recently been discovered that using AI for coding introduces a new security risk: an attack called slopsquatting, in which attackers publish malicious packages under the names AIs tend to hallucinate.

Nevertheless, in this video, Theo realizes that code written by Artificial Intelligence is a reality in companies and that we need to adapt our workflow. Then he makes the most obvious mistake: He thinks we need to improve our ability to review code.

I like Theo's videos because his takes are always mistaken and short-sighted. For instance, his irrational hatred of Flutter, which is a lovely piece of technology. When I agree with Theo on something, I know I need to think more!

Code review is a scam

Another effect of AI is that a programmer is no longer a programmer. She is just a code reviewer now. The AI codes, the dev reviews.

As a developer, I am now supposed to be slowly reviewing code quickly written by a moron (the AI) – a job I hate.

I became a developer because I love programming. They are taking away the creative activity I love – writing good code, using skills acquired over decades – and telling me to do something boring instead: fix bad code, all the time.

Further, there is no way to teach a person to be a code reviewer. That's impossible. You can only learn to program, and then, after you program well, you can review code, badly.

One learns to program by programming, not by reviewing code. The practice of programming is indispensable. One cannot truly understand, for instance, object-oriented programming simply by reading a book. The practice of it is what finally teaches you.

The same applies to libraries, not just programming languages. You start a code review and you see the AI wrote this:

import caprese

You have never seen this caprese library before. (Assume the library is legitimate; I am not talking about the security problem mentioned above.) What do you do now? You know the AI hallucinates. Do you blindly trust its usage of the library? Or do you start reading the documentation of the library?

Remember, management told you to work faster by using AI. How much time should you now spend validating the AI's hallucinations about this caprese library?

Are you so inexperienced in codebase maintenance that you allow any unknown library to enter your dependencies – much less a library suggested without any consideration of the rest of your codebase?
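If you do want a mechanical guard here, one cheap option is to refuse any AI patch that imports a module nobody has vetted. A minimal sketch, assuming Python and a hypothetical allowlist of declared dependencies (the names below, including `caprese`, are illustrative, not a real policy):

```python
import ast

# Hypothetical allowlist: the project's declared dependencies plus the
# stdlib modules the team actually uses. Illustrative names only.
DECLARED = {"requests", "flask", "sqlalchemy", "os", "json", "typing"}

def undeclared_imports(source):
    """Return top-level module names imported by `source` but not declared."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - DECLARED

# An AI-generated patch pulling in a library no human ever chose:
ai_patch = "import requests\nimport caprese\n"
print(undeclared_imports(ai_patch))  # {'caprese'} -> stop and investigate
```

A check like this only flags the unknown name; a human still has to decide whether the library deserves to exist in the codebase at all.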

Beyond security problems, have you never had a library become unmaintained and start generating thousands of warnings after you upgrade the version of your programming language?

You are an expert; how on Earth did the powers that be conclude you would work faster by reviewing the thoughts of an idiot? Do they not know that software development is an immensely complex craft? Is it not insulting to you that they see you as a monkey typing words on a keyboard?

And how are you going to learn to properly use the library if you are not even programming anymore??? You think reading documentation makes up for it??? Have you never heard of Constructivism?

The philosopher Ludwig Wittgenstein affirmed that man can only truly understand that which he creates.

The idea that human programmers could ever become just code reviewers is the single most absurd thing I have ever heard in this industry.

But it's even worse than that. Code review itself is a mistaken practice.

I realize that reviewing code is impossible. This is because writing code is a creative activity that begins with the programmer concentrating on all the information necessary to write the code, including how the thing works right now and how the thing should work tomorrow.

Whereas reviewing code doesn't start from the same point. It starts with the subject reading new code – and at that point, assumption piles on assumption.

The user story, the objectives, the necessary information that drove the writing – do you think the code reviewer can be as aware of these things as the code writer was?

And there was a process. Maybe the code was written, then rewritten. But the reviewer is absolutely unaware of the evolution of ideas in the mind of the writer.

In practice, the code reviewer's preoccupations are different from the code writer's, and that creates problems.

As a simple matter of human perception, the awareness that follows creation is much deeper than the awareness that follows revision.

In a code review one probably should question everything, but that is impossible to do. In reality, only a handful of things can be questioned in a code review – and then we are tired.

This is why, in years and years of experience, code reviews have never been able to catch enough bugs. It is absolutely necessary to test all the written code. And before testing, you need to understand the impact of the change so you know which screens and which use cases need to be tested. And if you don't test, there will certainly be problems despite the greatest possible skill in reviewing the code.

…just like UML was a scam

Similar mistakes have been made before in the history of programming – for example, the focus on UML. For a long time, there was a class of programmers who did not use a code editor; they used Word, or diagramming software. We called them office programmers. Gradually, the focus on diagrams was abandoned, because in order to think correctly, a programmer needs to be in front of her editor, not in front of tools that only express abstractions and do not compile.

They say "paper accepts anything". That's the problem with diagrams. Also the problem with code reviews.

A code review does not occur in a text editor, it occurs in a code commenting interface. Just as UML could be divorced from reality, a code reviewer also has trouble seeing reality for what it is, and may make suggestions that either aren't practical or do not aim at the objective of the current user story, because a reviewer is not so conscious of either, compared to a creator.

The alternative to code reviews

(Pair programming is the right answer.)

When the moment of a code review arrives, I no longer do a code review. I become a co-creator instead of a reviewer. I make code changes, and then I ask the original developer to look at my changes. This is the only way to ensure the review doesn't descend into nonsense.

I open my editor and I rename the variables that are poorly named. I add the else clause that is missing after the if clause. With each improvement I make to the code, it becomes possible for me to see other problems.

If I didn't solve the issues I already saw, I wouldn't be able to see further. I don't know why this is so, but this is how my mind behaves. I have to open the tomb first, then I can see the spider webs. I have to clear the webs, then I can see the dust. I have to clear the dust, then I can see the hieroglyphs.

A man walks into a doctor’s office, frantic. "Doctor, you’ve got to help me! I think I’m going blind – I can’t see a thing out of my left eye!"

The doctor looks at him carefully and says, "Well, that’s serious. But I also notice your right leg is missing – what happened?"

The man waves it off, "Forget the leg! That’s old news!"

This is why code reviews fail: people are unable to agree on what to focus on. Cognitive overload and tunnel vision are common realities.

When I review code and ask for certain changes, the programmer often misses my point and creates a third strange version. Showing a better result (and letting them compare) is more effective than meta-talking about an idea that for the present is only in my head.

So code review is not possible as code review, it is only possible as code maintenance done in an editor.

Pair programming, or mob programming, is the right idea. Code reviews are too late, too tiresome, too costly and too ineffective.

AI does not edit

But it's even worse than that.

AI is a machine that creates legacy code. The challenge with legacy code is understanding its intention, which has been lost because the people are no longer there. If only the original developer were around, you could ask her what led to certain decisions. The reasons for the code to be this way.

But in this case, the intention – if it ever existed – is impossible to determine, because the AI does not have a personality; it has different beliefs, priorities and knowledge each time you run a prompt through it. The output can be markedly different even if the prompt is always the same.

AI is unable to edit code – it can only rewrite code. This leads to a huge patch each time. Instead of a laser focus on the detail that needs changing, you get a completely new program! How on Earth can you manage that?
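The difference between an edit and a rewrite is easy to make concrete. A minimal sketch (the function and both "versions" are invented for illustration): count the changed lines in the diff a focused one-line edit produces versus the diff a behavior-preserving full rewrite produces:

```python
import difflib

original = """\
def total(items):
    result = 0
    for item in items:
        result += item.price
    return result
"""

# A focused human edit: one line changes, everything else is untouched.
edited = original.replace(
    "result += item.price",
    "result += item.price * (1 + item.tax)")

# An AI-style rewrite: similar behavior, but almost every line is new.
rewritten = """\
def total(items):
    return sum(i.price * (1 + i.tax) for i in items)
"""

def changed_lines(a, b):
    """Count added/removed lines in a unified diff, ignoring headers."""
    diff = difflib.unified_diff(a.splitlines(), b.splitlines(), lineterm="")
    return sum(1 for line in diff
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---")))

print(changed_lines(original, edited))     # small diff: easy to review
print(changed_lines(original, rewritten))  # most of the function replaced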

This is such a messy way to work and understand code that any productivity gains – which are all lies, by the way – would be reversed by the time the bugs hit production.

The mess created by using AI – a mess that affects the diffs in the git repository, the ability to reason about the code, the expertise of the worker herself, and the fun of working – is so grave that no ability to type code faster can compensate.

Managers never explain the reason

I believe most of what I am saying is obvious to any developer, but humans are terrified of going against the mob. In this case, AI hype and greedy managers have already created a mob.

Let us read an example told in a YouTube comment on 2025-05-28. This person is a video editor in a large company:

(...) Everytime I or anyone makes suggestions on shooting photos or video with real environments we’re asked to “think how we can incorporate Ai into it” or even just skipping hiring models and actors to use Ai instead.

Where I work, the executives, directors and managers don’t want to listen to creative professionals or employees with the skill set. Speaking up just puts a target on your back. During my end of year review, the only thing they told me to work on was to “embrace Ai more”. The CEO even had to write a company wide email about how “our industry is moving towards Ai and we need to follow”. Funnily enough, had he just made Ai write that email, it wouldn’t have sounded so cold and depressing.

But notice how managers are always prescriptive; they never explain anything. The pressure to use AI now... is postulated, not proven. No justification is ever given for the new directive, much less one that would survive scrutiny.

The real message is, AI is faster and cheaper, therefore it shall be used, even at the expense of everything else:

  • The sanity of the team
  • The jobs of the team
  • The sanity of the process
  • The knowledge, expertise, and wisdom acquired over decades
  • The quality of the product
  • Customer satisfaction
  • Everyone's humanity

...and the only thing that remains possible is for managers to pat themselves on the back, self-congratulating for gains that are, in fact, false. Such managers should embrace AI by stepping down. Even an AI would do a better job.

However, in the end, the proof shall be in the pudding. Companies that "embrace AI" in a stupid way will certainly suffer.

Conclusion

A programmer must program as a writer must write as a pianist must play. Smart tools can hinder the exercise and development of one's craft.

AI spoils man's memory, thought and self-improvement. It can type lines faster, but in a manner that is not manageable, and people are going to suffer and die for this reason.

Companies must realize the importance of motivating people to develop their craft and become their best selves. Companies that push a tool that will hinder the development of the team certainly are not Agile.

Architecture before coding was divorced from reality. Estimates in software were divorced from reality. Waterfall was divorced from reality. The solution to these three was never to learn to do them better; we tried that and it failed. The solution was to stop doing them.

Code review is like that, it is the great mistake of this generation. We will spend 25 years trying to review code, striving to do it better. Finally we will realize what Wittgenstein and Constructivism already knew: Building it yourself is an essential step to understanding it. Humans cannot review code well, they can only write and maintain code well.

You can do pair programming with a human, but not with an AI. If you can only review the AI's code, then that's not a realistic option, since code reviews are insufficient.

AI cannot yet replace humans because its experience of the world is too limited. But by abusing our interim hyped AI, we are having it replace quality, accountability, and the soul of engineering.

Ultimately, we are talking about Robert C. Martin's maxim: "The only way to go fast is to go well". The problem is not new. Software developers, as professionals, have the duty to say "no" to every ignorant management measure that hurts development while trying to make it faster. It is a disgrace that so many professionals swallow it.