Jaws is not a two-note theme

People recognize it as soon as it starts – on the first note, even, as the reaction of this crowd shows at 16 seconds.

We are talking about John Williams's main theme for Steven Spielberg's classic film Jaws (1975).

When the thing starts, it already has a certain personality, definite choices in the orchestration, that enable anyone to know what it is. It's like a tutti of the lowest instruments in the orchestra. The contrabasses and cellos are helped, as the video immediately shows, by a contrabassoon and two bassoons. I don't think there is a bass clarinet; I can't hear one.

Actually, low strings doubled by bassoons... that's a very common orchestral sound. But I suppose if you know it's Williams, then on the first note you know what this is.

It is a commonplace that the theme has only two notes. I understand what people mean when they say that. Journalists often say it when John Williams is interviewed, and he won't correct them; he accepts the number two and says "yes, and".

But that's wrong. Of course the theme does not have only two notes, nor could it. It's a horror film, therefore it has all twelve notes.

As you probably know, there are normal people, and there are musicians. Normal people haven't counted the pitches. But any composer will – it's not a case of "ackchually", but a necessity for understanding the language and the material.

We have only examined the introduction so far, which is insinuating the theme. When it stops lurking and launches into the fast ostinato, at 40 seconds, two things happen. First, a piano joins the texture, probably to give it a very definite attack. But most importantly, you get the same 2 alternating notes, Mi and Fa – but also a third note in the accents: Re.

So the main idea, the main subject of the film, requires three notes, not two. And Re is the lowest – the note we would hear as the bass of any implied chord there. Re is used in the accents; you couldn't write the actual theme without this third note.

With 3 notes given so far, one could be forgiven for thinking we are probably in the key of D Minor. The composer is now quick to prove that wrong – at 48 seconds the texture becomes more complex. The 4-horn call is incredibly important, contributing only pitches that are a minor second away from the 3 pitches of the ostinato. This means it could not sound more dissonant against the ostinato; it could not be in a more foreign "key". The pitches are Mi Bemol, Sol Bemol, and Re Bemol.

One can play all the pitches mentioned so far by dropping one's closed hand on a piano, from C# to F#. This fact suggests a kind of dodecaphonic thinking. A composer realizes that if you expand that cluster by only one more neighbouring note, you get the range of a tritone, and then you can apply dodecaphonic procedures to it, like transposing the whole thing up a tritone to use the remaining notes of the twelve-note scale.

But that is not what Williams does here. I would say the thinking is more along the lines of pitch sets, combined with polytonality, because we easily hear the two simultaneous musical events (the ostinato and the horn call) in two different "keys".

So here a public that has already forgotten polytonality (except that jazz never forgot it) is exposed to an advanced instance of it. Darius Milhaud, author of the best polytonal music, "Saudades do Brasil", would be proud.

The accents in the trombones... there is something weird about them: in this recording they sound like La Bemol. This surprised me because, as I started writing this, I was pretty sure it was Si Bemol. This video soon becomes too audio-distorted to remain useful, so we must switch to a studio recording now. In the studio recording, the trombone accents are definitely Si Bemol. (0:30)

This kind of difference is something that always happens with Williams. For his most famous pieces, he always created a Boston Pops version, with a more normal set of instruments, such that it can be played by any orchestra without hiring many extras. Thus the original film music always sounds a bit more "custom" than the orchestral suites made for concerts. But he doesn't stop at orchestration – there are changes in the discourse, in the order of the sections, and in tiny details such as this, the pitch of the trombone accents.

Would you like me to speculate about why he would change B flat to A flat? OK. Both are possible as part of the horn call scale. But B flat sounds much more in harmony with the ostinato. In fact, B flat dangerously undermines the atonality because it "wants" to become the root under the ostinato, which would then become a simple B Flat Major chord (Si bemol, Re, Fa). A flat does not have that problem, being an "ugly" tritone away from the Re.

Now (0:36) the pitches Fa and La Bemol also become a secondary part of the horn call. The end of the horn call has longer notes in this version, while in the Boston Pops version the notes are very short.

Speaking as a composer, I would say the horn call is the actual main theme of this piece. This is because the ostinato is not amenable to development – it is most useful if it remains there doing its thing without change.

A simpleton like Hans Zimmer could also compose the ostinato – anyone could. But to make the rest of the texture and the rest of the piece, John Williams' taste, knowledge, creativity and development skill are necessary. Therefore, the value of the piece isn't really in the ostinato, it's in everything else!

One wonders if normal people have heard the rest. I think they have, but they can't talk about it; they don't have the vocabulary for it. So they talk about the two-note theme. It's a sort of musical bikeshedding. If I were to interview Williams, I wouldn't ask him about the same tired old things; I would conduct a technical interview, a composer's interview.

At 1:00 a huge tutti is building up, the violins carry the melody, and guess what: the melody is based on the horn call. There is still something oscillating in the background in the interval of a minor second (piano?), but it is "drowned out" by the rest. It also oscillates at half the speed of the ostinato.

At 1:20, on top of a chaotic atonal texture, flutes play some kind of American march fragment. Or maybe it's not a march, maybe it's a children's song. Either way, we are reminded of the general public having fun on the beaches – the people the main character, a reluctant hero, needs to save. This song is again superimposed "in a different key" from the rest. The polytonal procedure is important in the language here. It makes sense because these people are unaware of the danger – the dangerous shark and the people having fun are in two different realities.

These are the most interesting things I could say about the Jaws theme. To sum up:

  • It's never just 2 notes. In fact, it's 3. In fact, it's a horrible chromatic cluster. In fact, it's polytonal.
  • The polytonality is a musical way of expressing two simultaneous, conflicting realities. But it is a latent conflict: we can hear the conflict, but the two individual situations cannot, each being "happy" in its own key.
  • Anyone could have composed the ostinato. But only a jazz musician like Williams would have remembered to use polytonality to express the latent conflict.

The entire score of the film is genius, of course, not just the recognizable theme. Witness, for example, the "father and son" scene.

Here, I believe the piano and harp representing the child use the notes of the D Major scale, but are really in G Lydian. (The Lydian scale is the brightest one available.) The most important thing in the melody is the C# resolving to B natural.

The innocent youth is incapable of understanding the depths of gravity that a responsible adult will have to go through. This scene drives home the simultaneous realities of the youth on vacation and the policeman on whose shoulders the responsibility for the dead girl weighs.

Of course he can only think about his problem, nothing else. What is more insistent than a 2-note ostinato? A single held note. John Williams will not mention the shark musically here, since there is no danger. An even worse alternative would be to invent a theme for the politician. Let's not do those. Not when one note will suffice.

Therefore, polytonality is used again. Guess what note is in the bass? C natural. Why?

  • It's the lowest note cellos and contrabasses are able to emit.
  • It's right between the C# and B natural I just mentioned, just a semitone apart from both, therefore it maximizes dissonance to the child's melody.

If the shark isn't just 2 notes and actually requires 3... at the same time, John Williams can convey an idea with a single bass note if he wants to. That's amazing!

One character imitating another... that reminds me of another Spielberg movie with another perfect musical score. In that one, the imitation is the very first act that humanizes the hideous deformed monster who, by the end of the movie, will have become your best friend in the entire galaxy and leave you in tears for his absence. But in Jaws, the point is the contrast between their two mindsets, and Williams put that contrast in the music.

Usually, film music seems to be written from the point of view of one character. This is a rare example of two simultaneous POVs, accomplished through bitonality.


When Spielberg first heard the shark theme on the piano, he didn't seem to like it. If memory serves, it went more or less like this: "Really? That's all?" "You have to imagine the orchestra, of course." "Yeah but that simple?" "It's a very primal movie you've made here, Steven."

Williams' claim that the movie is "primal" is not true. Let us just say his scores have more complexity and truth than his assertions. Or rather, in trying to defend himself, he went for the wrong argument. The correct response would have been, "of course that's not all, that is only the ostinato. On top of that we'll have modernist techniques, an entire orchestra playing in polytonal layers." Because that is what he actually did. The music for Jaws is not based on two notes; that's nonsense.

"Jaws" was a great success, and remains one of the greatest accomplishments in cinema, not because it was primal, but because it is sophisticated. Since all the sequels lack sophistication, they are failures. This sophistication is obviously in the music, but also in the director's technique. Watch this analysis of a single scene to understand how.

Dart versus TypeScript

In recent posts I have shown that the web only has crummy technologies – and that, at the same time, Flutter deployed on the web is not yet free of its own crumminess either.

In this post I shall convince you, beyond any doubt, that to develop frontends, you should use the Dart language rather than TypeScript. We'll examine:

  1. Problems with JavaScript
  2. Ignoring those problems like ostriches
  3. Problems with TypeScript
  4. Problems with functional languages
  5. Dart as a solution
  6. Problems with Dart
  7. Conclusion
  8. Futurology

1. Problems with JavaScript

JavaScript was created in 10 days and now we have to tolerate it forever!? WAT. It is the only language I know with so many evil parts, other than INTERCAL.

The best book about it is called "JavaScript: The Good Parts". Everyone has read that book. However, its author now says it's time to stop using the language.

The design problems in the JavaScript language are too numerous to list here, but here are some of the most egregious:

  1. this keyword
    • Context sensitivity: The value of this can change depending on the context in which a function is called, leading to unexpected behavior and countless debugging sessions for thousands of developers.
    • Binding issues: Developers often need to use .bind(), call(), or apply() to explicitly set this, which is cumbersome and unheard of in any other major programming language.
    • Arrow functions: Arrow functions do not have their own this context, which can be both a benefit and a source of confusion when switching between arrow functions and regular functions.
  2. Arrow functions
    • Implicit return: The concise syntax can be misleading, especially with object literals, where {} is interpreted as a block rather than an object.
    • Arrow functions do not have their own arguments object, which can be limiting in certain scenarios.
    • No own this: While it solves some problems, it can also be confusing when developers expect a traditional function's this behavior.
  3. Type coercion
    • Implicit coercion: JavaScript's automatic type conversion is a severe misfeature that leads to unexpected results, such as '' + 1 resulting in '1' or true + false resulting in 1.
    • == vs. ===: The loose equality operator == performs type coercion, which can lead to unexpected results, whereas === does not, leading to a general preference for the strict equality operator but also to confusion among new developers.
  4. Classes and super()
    • Syntactic sugar: JavaScript classes are often criticized for being syntactic sugar over the prototype-based inheritance, which can lead to misconceptions about how inheritance works in JavaScript. In reality this is not a problem in itself, except for all the terrible implementation details in classes, this and super, which become a list of gotchas for developers to memorize.
    • Mandatory call: In a derived class, if you define a constructor, you must call super() before you can use this. Forgetting to do so results in a reference error.
    • Order of initialization: The call to super() must happen before accessing this, which can complicate constructor logic and initialization sequences, or even make one's idea impossible without a redesign.
    • Classes in JS are so bad that most JS developers prefer to ignore them entirely. Instead, they achieve encapsulation by abusing closures, which is in itself another terrible way to write software.
  5. Module systems
    • Due to historic reasons, JavaScript has multiple module systems (CommonJS, AMD, ES6 modules), which can be confusing and lead to compatibility issues.
    • The final system (ES6 modules) has an export default feature which I don't see in any other language; it does not add value per se and probably only exists to emulate the previous two module systems.

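To make item 1 above concrete, here is a minimal TypeScript sketch (made-up names; the runtime behaviour shown is plain JavaScript's) of the this gotcha and the .bind() workaround:

    // A plain object with a method that reads `this`.
    const counter = {
      count: 0,
      increment() {
        this.count++;              // fine when called as counter.increment()
      },
    };

    counter.increment();           // ok: `this` is counter
    const inc = counter.increment;
    // inc();                      // TypeError in strict mode: `this` is undefined here
    const bound = counter.increment.bind(counter);
    bound();                       // works again, thanks to the explicit .bind()

    // Arrow functions have no `this` of their own; they capture the enclosing one.
    const timer = {
      seconds: 0,
      start() {
        // This idiom works precisely because the arrow closes over start()'s `this`.
        setInterval(() => { this.seconds++; }, 1000);
      },
    };
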
JavaScript is a language notorious for its inconsistencies and flaws. Despite possessing modern features, it suffers from fundamental issues that remain unresolved. this, function, arrow functions, super, and the, so to speak, excessively dynamic type system... the behavior of these things is riddled with exceptions and unexpected outcomes. Learning JavaScript often feels like memorizing a long list of workarounds.

As a result, JavaScript is a language that makes kittens cry every day. It is legitimately a language to be hated, if we are being reasonable.

JavaScript is a hypocrite, like a person who pays for expensive, blinding-white veneers on their front teeth but leaves their back teeth to rot with cavities. JavaScript is the guy with a sports car who, in truth, mistreats women.

People who decide to use JavaScript outside of the browser have it backwards: the browser should acquire a good language, instead of the worst language contaminating the entirety of computing.

The real reason every other language compiles to JS, and the real reason WASM exists, is not a lack of cool new features in JS. The real reason is that in JS, this is broken, function is broken, arrow functions are broken, super() is broken, the type system is broken... To learn JS is to learn a pointless list of exceptions to expected behavior.

Consequently, many developers choose to use languages that compile to JavaScript or explore alternatives like WebAssembly. This trend highlights a critical issue: JavaScript's fundamental flaws hinder development efficiency and cost lots of time and money.

As an example, here is a lesson for today:

  • Arrow functions cannot be constructors.
  • Arrow functions do not get their own this binding; used as an object's method, an arrow function cannot reach the object through this.
  • Arrow functions don't provide arguments; normal functions do.
  • Function declarations are hoisted; const declarations are not usable before their line. Arrow functions were invented to be anonymous, for small event handlers and callbacks, but people are abusing them and naming them with const.

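Here is a small sketch of the differences listed above, with hypothetical names; the commented-out lines are the ones that are rejected or that fail:

    greet("Ada");                   // ok: function declarations are hoisted
    function greet(name: string) {
      console.log(`hello ${name}`, arguments.length);  // regular functions get `arguments`
    }

    // greetArrow("Ada");           // error: cannot be used before its declaration
    const greetArrow = (name: string) => {
      // console.log(arguments);    // error: arrow functions have no `arguments` object
      console.log(`hello ${name}`);
    };
    greetArrow("Ada");

    // new greetArrow("Ada");       // error: an arrow function cannot be a constructor
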
This is just one confusing instance where JS has 2 ways of doing the same thing, both with advantages and disadvantages depending on what you are doing. How much of this will you remember in 30 days?

Why not pick a good language instead?

In short, what needs to change in JavaScript is its WAT. And while that doesn't happen, hordes of young programmers are learning a horrible programming language first, getting used to the most inelegant solutions. Honestly, JavaScript has become the most popular programming language, and also the worst popular programming language. The only things that are worse are those that are designed to be worse: esoteric programming languages like INTERCAL and Whitespace.

But the worst part is, they seem to have given up fixing JavaScript. They have concluded it's impossible, due to the requirement of eternal backwards compatibility. That is the wrong conclusion, and it shall be revised real soon now, as web development has clearly become unsustainable.

2. Ignoring those problems like ostriches

Most JavaScript developers are the proverbial boiled frog. They have been studying this cursed language for years and years, why worry now? "I am productive in JavaScript in spite of its shortcomings." Their attitude is that of the ostrich: "learn the good parts", shun the bad parts, and develop code today.

They will add that all the alternatives to JavaScript are also doomed, for other reasons. Maybe they are harder to debug in the browser. Their performance is necessarily worse than JavaScript's, since they compile to JavaScript. And so on.

In short, it's the famous Sunk Cost Fallacy. JavaScript is evidently not beneficial, but one sticks with it due to past investments.

Where Python 3 focused on removing all the warts from Python 2 and succeeded, people imagine this to be impossible in JS, since they believe there is an eternal backwards compatibility requirement. I predict this requirement will drop very soon, as the accumulation of horrible web standards becomes a terrible burden.

Yet, a successful precedent exists: ActionScript 3 introduced class-based inheritance, separate from, and without disrupting, the existing prototype-based system. This demonstrates that it's feasible to evolve a language without breaking existing code.

Again: It is NOT impossible to fix JavaScript; the impossibility is an illusion that makes you accept JS.

The only thing that is even more painful than fixing a floor full of rusty nails pointing up... is to forever tolerate it. But that's exactly what a boiled frog does. "I already know where the rusty nails are, I don't step on them anymore."

This almost amounts to a Human Rights issue.

My advice to you is: are you writing a large web app? Then for the love of humanity, do it in anything but JavaScript.

3. Problems with TypeScript

There is a moderately popular project by Facebook called Flow. It lets you write static type annotations on otherwise plain JavaScript code, checks the types as you write, and then simply strips the annotations at the end, leaving only your JS code. I consider Flow a good design – if you need to write JS, that is.

Microsoft answered the same question differently.

They hired Anders Hejlsberg, the guy who had created Turbo Pascal and Borland Delphi, to make derived languages for them. First they used him in their attempt to Embrace, Extend and Extinguish Java. Microsoft then lost a tremendous lawsuit to Sun Microsystems for that misstep, so they turned to the next best war strategy: make their own Java while denying all influence. Thus C# and the .NET Framework were born, or rather, cloned. To this day these people affirm that "C# belongs to the C family of languages", while it really is Java with a couple of misfeatures removed. Hejlsberg was and is the main designer of C#.

In 2012 Microsoft announced another Hejlsberg creation: TypeScript, which has become the most popular compiles-to-js language. But instead of just adding types to JS (like Flow does), it is a bastard child of JS and C#. I imagine they gave Hejlsberg these contradictory goals: "We want C# for the web, but it also must be a superset of JavaScript". The superset bit means, if you paste JS into a TS file, it just works – all JS is valid TS. It also means TS has its own separate features, augmenting JS.

The fact is, this one-way compatibility with JS is probably why TS won. But you know what I am going to say, right?

TypeScript again decides not to fix any of the bad parts of JavaScript. TypeScript is a monstrous creation: it adds even more cool features, such as algebraic types, without first fixing the basics. The decision to be a superset of JS sealed TypeScript's fate; after that decision, being a good language was impossible. It presents the best language features and the worst language features in a single thing. TypeScript is the most hypocritical programming language in the world, and as such, it could only have been born at Microsoft. Or Oracle, Apple, Facebook or Google.

Learning TypeScript is learning dozens of weird, unexpected syntaxes in the type system – things that should be natural and much easier – and then forgetting them while you are coding.

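For a taste of what I mean, here is a small, made-up TypeScript example of the kind of type-level gymnastics the language invites – key remapping, template literal types, and conditional types with infer:

    // Derive "getX" method names from property names, at the type level.
    type Getters<T> = {
      [K in keyof T & string as `get${Capitalize<K>}`]: () => T[K];
    };

    // Unwrap a Promise type with a conditional type and `infer`.
    type Unwrap<T> = T extends Promise<infer U> ? U : T;

    interface User { name: string; age: number; }
    type UserGetters = Getters<User>;   // { getName: () => string; getAge: () => number }
    type N = Unwrap<Promise<number>>;   // number

None of this exists at runtime; it is pure annotation machinery that the compiler erases.
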
Every developer has noticed that, if TS seems powerful, it is because there's an enormous number of features for annotating types. It's not simple at all; it amounts to a tremendous cognitive burden. And newer versions never simplify anything, they only add to that burden. The developers of TypeScript take too many liberties, making it impossibly complex. I have found this frustrating, and I am not alone:

Rich Harris: "We also eliminate an entire class of annoying papercuts that will be familiar to anyone who has worked with the uneven landscape of TypeScript tooling."


Here is a video detailing the latest TS release. And here are some YouTube comments sharing my sentiment:

@tacochub4353: These updates are neat... sure, but I don't really see how these methods solve the plethora of issues with using TypeScript. All they seem to do is add unnecessary complexity to an already perplexing ecosystem filled with syntactical nuances.

@JamesDSchw: My beef with many TS releases over the years surround the cognitive load they incur - more syntax and language semantics to be able to model types in existing libraries in the ecosystem.

@universe_decoded797: Typescript solving things that are not problems to create more problems is problematic. ‘Simple things are hard to create’ is a true statement.


In short, you wanted to fix JavaScript and suddenly you saw TypeScript. It overloaded your senses with so much information and impression of power, that it seemed to be the right solution. The only thing everyone forgot was the actual problem: we need to fix JS.

To choose TypeScript, one must overlook two facts:

  1. There is tremendous value in keeping language scope down to a minimum. Until Python 3.4, more or less, Python was a small language: any programmer could pick it up in a week by reading a 100-page description of the language, and then learn the crucial parts of the standard library in a couple of months. One would become productive very quickly. Unfortunately, Python has entered a new phase in which it forgets the value of staying small and keeps adding syntax – becoming Scala, a language that one never finishes learning. If you are a Scala programmer and you start reading another developer's code, chances are you have to stop and look up some syntax that is new to you. That is a horrible mistake. Back to TypeScript: it starts by accepting JavaScript, but then paradoxically it becomes Scala all over again by relentlessly adding features.
  2. We need a language that is a solid base to build upon; the perplexing crumminess of JavaScript is automatically unacceptable if mental health is a value.

4. Problems with functional languages

It is currently my opinion that an object-oriented programming language is perfect for creating user interfaces, even a pure OO language such as Smalltalk.

But here, let us ponder that an object-oriented approach greatly benefits from adopting a few lessons from functional languages. Functional programming is not the opposite of object-oriented programming; to a certain extent these can be combined. Also, object orientation today accepts that composition is better than inheritance most of the time. I favor a pragmatic approach that uses notions from both these worlds. Immutability only on certain kinds of information, and a conscious effort to create pure functions and unit tests for these – these are key to writing good code.

But the current wave of functional programming languages is another thing that a healthy reader should doubt. In about 15 years of people trying functional languages and immutability in the browser (either in JS or in functional languages such as Elm, Elixir and ReScript), the functional paradigm and the insistence on immutability have failed to deliver the cleanliness and developer productivity that were promised.

Here are some arguments so we can establish that functional languages and techniques are not the panacea:

  1. Complexity in state management
    • State overhead: Functional programming emphasizes immutability, leading to frequent state copies. This can increase memory usage and overhead.
    • Verbose code: Functional paradigms often require more boilerplate code to manage state changes in an immutable manner compared to traditional imperative approaches.
    • Business Logic Complexity: For complex business logic, imperative programming often provides more straightforward solutions, whereas functional programming can lead to overly abstract and convoluted code.
  2. Steep learning curve
    • Conceptual barrier: Functional programming concepts like higher-order functions, monads, and pure functions can be difficult for developers to grasp, purity being the easiest. Mathematical concepts are of course beautiful in computing, but they simply are not the way most people communicate – and the web should be for everyone.
    • Limited adoption: The steep learning curve has hindered widespread adoption, making it harder to find developers skilled in functional programming, which impacts team productivity.
  3. Performance concerns
    • Inefficiency in browsers: Functional programming can introduce performance issues in the browser, such as excessive garbage collection due to frequent object creation from immutable state changes.
    • Lack of optimization: JavaScript engines are primarily optimized for imperative code, potentially leading to less efficient execution of functional code.

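As a tiny illustration of the "verbose code" point above, compare a direct mutation with the immutable-update style, using a made-up profile object in TypeScript:

    interface Profile {
      name: string;
      address: { city: string; zip: string };
    }

    const profile: Profile = { name: "Ada", address: { city: "London", zip: "N1" } };

    // Imperative style: one line, mutates in place.
    profile.address.city = "Paris";

    // Immutable style: copy every level you touch, allocating new objects each time.
    const updated: Profile = {
      ...profile,
      address: { ...profile.address, city: "Paris" },
    };
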
While above I have tried my best to talk ill of functional languages… knowing what I know today, to develop user interfaces, I would reach:

  • first for a multi-paradigm expressive language such as Python, Dart or Kotlin;
  • then for a pure OO language such as Smalltalk;
  • then for a hybrid functional language such as ReScript, OCaml or F#;
  • then for an opinionated, pure functional language such as Elm or Haskell;
  • then for anything else in existence;
  • before resigning myself to use TypeScript or JavaScript with their broken basics.

5. Dart as a solution

In 2009, Node.js brought JavaScript to the server, and now boiled frogs write their backend and frontend in the same language: the worst one. Someone help them!

Seeing this, Google unveiled their Dart language in 2011. You can think of it as the last Java clone, this time with better influences. Dart 1.0 came out in November 2013.

The initial plan for Dart was to include it in Chrome as a second native browser language, the good brother of JavaScript. This was criticized for fragmenting the web, so they gave up this idea in 2015 at the release of Dart 1.9. And then Google proceeded to dominate the web anyway – through countless bad standards – such that now it is financially impossible for anyone else to develop a new browser. We might as well have had Dart in Chrome, it would have been a tremendous blessing all these years.

There exists a parallel universe in which the frontend community gladly accepted Dart as their saviour when Google proposed it as a sane, parallel native language in the browser. I wish I lived in that universe. Frontend devs, you have Stockholm Syndrome.

Instead, Dart was sort of forgotten for a couple of years while Flutter was being developed; Flutter 1.0 was released in 2018.

Here are reasons why Dart is good for developing applications and GUIs:

  • It has none of JavaScript's defects.
  • It is essentially just another boring multi-purpose Java clone with a few saving graces.
  • It has null safety.
  • It has a good, pragmatic type system without any trace of TypeScript complications. Something that just helps programmers instead of stealing their attention.
  • It has garbage collection.
  • Very performant for a garbage-collected language. You can roughly think of Dart as 10 times faster than Python and 10 times slower than C++.
  • It runs on every platform; it can also compile to JS.
  • It is now beginning to compile to WebAssembly and even use its garbage collector – this makes the runtime smaller.
  • It is being developed at a nice pace.
  • It has a moderate, good enough syntax size; it does not seem to want to become Scala.
  • It has features to write constructors without so much boilerplate, making Java look silly. However, Python is still better in this regard.
  • In fact, in some places it has been smarter than Java and C#. For instance, it does not have the private, protected and public keywords; instead, the programmer simply starts a variable name with an underscore (such as _myVariable) and that makes the variable private to the current library (in practice, the current file). This is great language design, removing lots of noise in a single movement.
  • Developer productivity and comfort are higher in a no-nonsense, immediately familiar language.
  • If you learn Dart, you are learning the language of Flutter. If you write the core of your web app in Dart, you can later reuse some of that core in a mobile app. And Flutter is much better than React Native... because React Native is based on JS/TS, and its misarchitecture consists of letting you use crummy web technologies such as CSS (or rather, just an arbitrary subset of these technologies) which then get translated into native widgets. It is just a fundamental lie, designed to keep web developers in their narrow comfort zone.

6. Problems with Dart

  • Metaprogramming/reflection is currently weak in Dart, but being worked on right now (2024), with macros in the roadmap for the next few releases.
  • Interop with JavaScript is more difficult than expected. Using a JS library in Dart code is fine, but you have to write typing stubs for the library's interface. Consuming Dart code from JS requires you to expose objects and functions with an annotation, and I don't think you can currently expose them as a JS module, so you have to put the API on the window object, which feels outdated.
  • Indentation with only 2 spaces is hard to see.
  • It uses curly braces instead of significant indentation. (Significant whitespace is objectively better because it communicates the same information with less visual noise and occupies fewer lines.)
  • It requires semicolons.
  • In Python, None, 0, "", [] and {} are all falsy, because they do not contain information. That's very useful. But in Dart, conditions must be actual booleans, so none of those values can stand in for false. This makes comparisons more verbose to write in Dart. (And in JavaScript comparisons are a minefield.)

Given the above, I would definitely write web apps in Dart, especially using its numerous frameworks for doing so; I would also write a large app component to be consumed by JS through a relatively small interface; but I would not write a typical JS library in Dart, unfortunately.

7. Conclusion

Going parallel to JS is unavoidable; that is why everyone wants WebAssembly to succeed: it's the only escape.

Dart is not perfect, but programming in it is bliss compared to JavaScript and TypeScript. There are alternatives out there; your responsibility is to choose something better than what everyone else is using, if you are smart.

8. Futurology

The parallel solution is soon going to be required for the entire Web, not just JavaScript. Because the powers that be have introduced an enormous number of spectacularly failing standards:

  • JavaScript
  • HTML that is not XML
  • CSS, which again is growing impossibly as a language, is impossibly complex in the interaction of its features, contains an impossible number of footguns, and is already humanly impossible to learn for its target audience of designers and common people
  • Web Components (Custom Elements), which are enormously complex, have a terribly verbose API, yet somehow manage to fail at addressing basic concerns of writing GUIs
  • IndexedDB, the only built-in way for frontend devs to access a local database (now that the SQLite-backed WebSQL is gone), has a horrendous API, so nobody uses it
  • ...?

The idea that these bad standards, plagued by complexity and inconsistencies, must remain in the Web forever for backwards compat is absurd and impossible. Of course one day this entire mess will be dropped.

The web platform has become so convoluted that only tech giants can afford to build browsers. This centralization of power threatens the open nature of the web.

When Firefox finally finishes failing, we'll be in the impossible situation of every browser being based on Chromium. This is due to the number of incredibly complex features and standards that a browser must implement. Thus the web no longer belongs to the people, it belongs to tech giants.

I am calling this right now: soon the people will create a "New Simple Web", from scratch, with simpler (but not necessarily more powerful) technologies, languages and protocols, to replace this Impossibly Big Ball of Backwards Compatible Spaghetti. This revolution will be painful in many ways, but it is clearly unavoidable. The most important values for the right technologies, languages and protocols will not be power, but cleanliness, simplicity and developer experience.

I believe the New Simple Web will look more like Flutter than anything else. It will be based on a single good language. No separate language for formatting. It will tend to the pragmatic needs of writing applications. But it will still somehow make contents public, as they are today. Oh, and it will have no DRM.

In order to become popular, the New Simple Web will have to offer something to the users, too. Evidently, that something will be their freedom. By then Google will already be the dystopian, oppressive OCP they have decided to become, so they will be closing everything on the Web: mandatory ads, mandatory privacy invasion, mandatory taxes, mandatory DRM protecting THEIR content which they actually stole from books, poorly paid videomakers etc... you name it. This is what Chrome will be.

Other tech giants will try to create an Alternative Web in advance, but they will not provide the necessary freedom, and therefore they will fail.

Someone will rise to the challenge, present a clear picture of how the New Simple Web should be built, and do it. People will use Chrome for banking and gradually migrate to the New Simple Web for everything else.

And then the cycle will begin again, inasmuch as humans are bound to forget learned lessons.

You're too quick to dismiss Agile

I see too many programmers today thinking that Agile is nonsense. It has even become an opportunity to sell books on esoteric methodologies.

However, there's widespread confusion:

  • about what Agile actually proposes,
  • about how bad the alternative is,
  • about what our personal experience really is – usually with Scrum.

First of all, let's quickly remember what Agile really is: a reaction to Waterfall.

Waterfall

Agile was the alternative way of building software discovered when everyone was using the obsolete Waterfall method, and failing hard.

Waterfall means each development phase is done for the entire scope of the project and its output informs or cascades into the next phase:

Requirement gathering & analysis ↘
   System architecture and design ↘
                    Implementation ↘
                            Testing ↘
               Delivery / deployment ↘
                            Maintenance

Waterfall had many serious problems:

  • It naively assumed that the system could be planned in its entirety; it did not foresee changes in scope.
  • Each step took months to do, so projects took year(s) to complete.
  • Clients did not participate in the development process, they only saw the finished project at the delivery phase.
  • For lack of communication, invariably, clients felt what was delivered had almost nothing to do with what they had said they wanted.
  • By the time the project was completed, the plans were obsolete: so much time had passed that the solution was born inadequate. Even if there had been no communication issue.
  • Each phase was driven by fear of incompleteness. Most importantly, during the initial requirements gathering, clients tried to be as complete as possible in listing features, for fear of leaving out something important while they still could add scope without incurring costs. This resulted in implementation of many features that were never used in practice – an enormous waste of time and money.
  • Due to lack of communication and lack of trust, negotiation was hard. Clients simply wanted as much scope as possible for as little money as possible. If legislation changed during the project execution and the scope had to change, a negotiation had to take place.

Fixing waterfall

Agile was the correct response, around the turn of the century, to the above problems. Here's how:

  • Understand that the scope changing... is the norm, it is the most common thing in the world, it is not to be treated as an exception, but as a rule. ("Embrace change" is the subtitle of Kent Beck's book on XP.)
  • Keep the same phases as waterfall, but do them for the smallest amount of scope possible, then repeat them. Agile is a series of tiny waterfalls.
  • Because it's a series of tiny waterfalls, the whole team can work at the same time: the business analyst, the system architect, the designer, the programmer, the tester, the system administrator etc. are all working at the same time now, instead of in different phases. The pains of one can immediately be conveyed to relevant others. The system architect is immediately informed of the weaknesses in her architecture. The business analyst is immediately informed of the weaknesses in her output. The programmer immediately sees what kinds of bugs are being found by the tester. Everyone gets opportunities to improve.
  • Instead of the feature creep characteristic of waterfall, agile prefers to deliver the smallest thing that has any value, the minimum viable product, as soon as possible. Even if it's just a single feature.
  • Get feedback from the client and especially from users as early as possible and use that feedback in the following iterations. More likely that the right thing will be implemented. Less likely that unnecessary things will be implemented.
  • Bring the client onto the development team, so there is transparency, they can actually see the hard work being done, they gain trust, and communication switches from negotiation mode to team mode.
  • Even if communication fails hard at the beginning of the project, chances are it will improve and the right thing will be developed.

Maybe Waterfall can succeed for projects that are small enough or very easy to understand and describe. For instance, a device driver for a mouse. But most software projects aren't like that, and then you need something that deals with the actual challenges, and that's Agile.

About the relationship with the client, XP (Extreme Programming) seems to have been the first to alter the contract. When a client hires an XP team, the scope is not predefined anymore. The scope is variable, which the client initially doesn't like. But the contract also says that the client can cancel the project at any time without any fines – this part the client does like. Therefore, the client will keep the project going while it is worthwhile, and will cancel the contract as soon as it costs more than the value being delivered. This is a saner way to work than the rigid waterfall contracts that existed then, with rigid scope, rigid implementation time etc.

The manifesto

Please refer to the original Agile Manifesto, which is about 5 lines long:

https://agilemanifesto.org/

It simply means that:

  • processes and tools are important, but individuals and interactions are more important.
  • comprehensive documentation is important, but working software is more important.
  • contract negotiation is important, but customer collaboration is more important.
  • following a plan is important, but responding to change is more important.

You see, it's extremely hard to disagree with anything the manifesto actually says.

For example, notice how the manifesto didn't prescribe any meetings. The meetings that developers complain about so much are usually the Scrum meetings. Therefore, they are not a part of Agile in itself. One can develop software well while organizing one's meetings in a different way.

But what some programmers really want is to develop software without communicating at all; that's impossible, and therefore, their own mistake.

Scrum

Scrum is considered an agile development methodology, appropriate for developing products of any kind, not just software. Scrum does not actually say anything specific about software development. However, it prescribes a set of meetings, which does address a tendency that software developers have, to not communicate enough (or sometimes at all) between themselves. If left unchecked, this tendency is enormously dangerous to the health of any project. We'll talk about this again below (XP).

The Scrum Guide (2009-2020) is another document that is very brief (12 pages) and easy to read. Again, you might be surprised at the things it doesn't say. For instance, Kanban and Planning Poker are not necessarily a part of Scrum. Scrum does not prescribe how the product backlog should be organized or how the team actually decides what to include in the next sprint. You might be surprised at how democratic it really is, if you read it.

I have worked in several bad Scrum implementations, each with their own troubles. However, I do know from experience that Scrum can work wonderfully well, because I have been part of a good Scrum implementation. Its democratic nature kept the entire team quite enthusiastic about their work.

Scrum is a humanistic revolution in the workplace. It inverts the hierarchy. If in the 80s you had quite bossy bosses bossing workers around, Scrum says, bosses are out. Scrum teams have no boss; there is only a facilitator, who is supposed to give workers freedom to do the work the way they want to do it. The role of the facilitator is actually to help the workers, for instance by removing obstacles (institutional, physical etc.) that hinder their best work. This creates an engaged team where before you only had begrudging employees. The Scrum team is self-managing.

"But in my experience Scrum is a tool of domination", you say. Well, did you experience actual Scrum, or a bastardized "version" that actually contradicts its principles?

If you are going to criticize Agile or Scrum, first understand what each actually says. Most often, you'll realize your trouble is with the implementations you've had the misfortune of experiencing.

If you pay attention to the history of Scrum, you realize there are at least 2 Scrums:

  • The 1986 one, from the paper "The New New Product Development Game" by Hirotaka Takeuchi and Ikujiro Nonaka;
  • The famous one, from Schwaber and Sutherland in 1995. Both authors are also signatories of the 2001 Manifesto for Agile Software Development. But the Scrum Guide only appeared in 2009.

It is important to notice that the lineage of thought is different for Scrum – it originated in Japanese manufacturing, not in software development. Quoting the above Wikipedia article about the original Japanese Scrum:

Based on case studies from manufacturing firms in the automotive, photocopier, and printer industries, the authors outlined a new approach to product development for increased speed and flexibility. They called this the rugby approach, as the process involves a single cross-functional team operating across multiple overlapping phases, in which the team "tries to go the distance as a unit, passing the ball back and forth". The authors later developed scrum in their book, The Knowledge Creating Company.

What I can see Scrum has in common with Agile is the humanistic aspect:

  • individuals and interactions over processes and tools
  • collaboration over negotiation – this time within the team.

Extreme Programming (XP)

XP is the first actual agile methodology created specifically for software development. This is the one made by programmers, for programmers. It was developed by Kent Beck and his team from 1996 to 1999, when he wrote Extreme Programming Explained. Several members of that team also went on to sign the Agile Manifesto.

Most of the ideas in the Agile Manifesto are already found in XP.

XP described most of the software development practices that won and are in use today:

  • writing automated tests and using them to get the courage to refactor existing code freely, without worrying about breaking everything
  • refactoring before implementing a new feature, so it becomes easier to implement
  • delaying decisions when possible
  • pair programming to improve team communication, disseminate knowledge amongst team members, and to severely improve code quality, killing bugs immediately
  • writing only documentation that you actually need and can keep up-to-date

XP was revolutionary, especially when compared to Almighty Thud methodologies which emphasized writing lots of documentation, which were the norm in the Waterfall era. (The Almighty Thud is the name for the noise made by the volume of paper documentation hitting one's desk.)

In fact, XP is still revolutionary, inasmuch as you haven't yet personally practiced its propositions.

Software is a tamagotchi

The main conflict I see in software development is the following: Agile promises continuous delivery of value (something like delivering features all the time), but software development simply isn't like that.

Expecting a development team to deliver only features all the time is as realistic as expecting a human being to be of service to their significant other all the time. The reality is, the human and the team need some time to themselves.

Business people start with this idea that software is a project with a beginning and an end. "I will develop an app, then sit back and profit." Nothing could be further…

Reality is like Chacon said:

Software is a tamagotchi. It has its own needs, that must be tended to.

The problem is, business people cannot see the virtual pet or its demands, if they cannot program. A programmer can try to explain it to them, but they get bored quickly.

This tamagotchi is a mythical creature. Only little children can see E. T., grownups cannot. And only developers can see the needs of the software; they are invisible to the marketing team.

This past week I upgraded packages in our Linux servers because of the recently discovered openssh vulnerability. If you have servers, you have to be on top of security, there is no alternative.

— Yeah, but as a business person, I absolutely CAN see the value and importance in that work, since not having our users' passwords stolen is a major component of their satisfaction.

Okay, then I suppose sometimes you can see the tamagotchi demanding we do things that our users are completely unaware of.

Let me try another example then. We really have to do some work on the way we use our asynchronous queue system. You see, things like sending emails, talking to external APIs, anything that isn't instantaneous... these things are done in separate processes, in a queue. That queue is known as FIFO, which means First In, First Out, which means first come, first served. The queue is a buffer for tasks that the app needs to do. But the queue has certain features that we need to start using in order not to have problems in it, which would be a terrible situation.

— That's too technical for me, you seem to be saying words. If I were to prioritize that work, I think I would actually never prioritize it, I think, because I have features that our users actually want, and those I do understand.

…and this is an example of the virtual pet dying of starvation, because the parents didn't understand its importance.

Technical debt

It is a terrible thing when, instead of the whole team choosing what needs to be done in the next sprint, the sprint gets dictated by a non-technical Product Owner who leaves technical debt pinned to the wall forever, gathering cobwebs, until it falls to the ground and gets swept away by someone who doesn't think it is important.

A good rule for solving technical debt had already been proposed in XP: refactor before you implement a new feature. This means, remove the technical debt that affects the feature you are about to implement. It also means, leave alone the technical debt that does not affect the code you need to change. Some technical debt exists in code that never gets changed, and that is okay if the code is working fine.

By the way, bugs are not technical debt, bugs are bugs. Bugs annoy users, technical debt "only" annoys developers. Generally, solving bugs is the most important priority, because if you leave bugs alone, they start compounding. Then when you finally decide to kill them, you can't tell anymore where one starts and ends, because they have become this multi-headed monster. Nobody can understand the behavior of a buggy system; a sane mind can only understand sane behavior.

However, annoying developers is a terrible thing too, and that is what business people don't understand.

Clean code

There's a second document about Agile, also very brief, which tries to clarify the Agile Manifesto. It's the 12 Principles behind the Agile Manifesto. Again I invite you to read them.

Here's a bit I would like to emphasize:

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely. Continuous attention to technical excellence and good design enhances agility.

This means, agile is not supposed to be a tool against developers. This means the maintainability of the code is valuable, and important to agility.

Any psychologist will tell you, before you can love other people, you need to actually love yourself.

As a developer, each day you must do something to improve your own life. Nobody else is going to do that for you. They will even have difficulty understanding that an annoyed programmer is a much slower programmer, naturally and for good reason. It's hard to put something in production when you can't get past development.

Martin Fowler, a famous and respected British author who writes the best descriptions of ideas in software development, while offering the sanest opinions, went through the trouble of cataloguing and describing the various refactorings in his famous 1999 book Refactoring. He is also one of the first signatories of the Agile Manifesto.

Why would he write Refactoring if technical debt were not important enough to fix???

Oh, and here is his description of technical debt.

Robert C. Martin, a very important author whose name you also find under the Agile Manifesto, is sort of saying the same thing when he says that the only way to go fast is to go well.

A large application cannot be built without lots of discipline. A mature programmer knows this and takes pride in the discipline, just like a practitioner of martial arts or a musician.

Robert C. Martin, in fact, goes beyond the XP rule. He notices that people are about to start dying due to badly written software, or maybe already are dying. So he says, when the thing works, do not stop and deliver it. When the thing works, you are only half done. Now you have to clean it up. Remove all the technical debt, make it easy for other programmers to understand. Make it easy even for you yourself to understand it after 2 months! Remove the useless tests, finish writing the good ones. Refactor the spaghetti, rename the badly named functions. Treat your program like a poem.

If the company places excessive pressure on developers, it is an engineer's duty to be firm and say no, especially if people's health is directly or indirectly at stake. This is the ultimate test of a true professional. You do know that "I was just following orders" does not convince. Show some character strength when your turn comes.

If developers do not self-police in this way, people will die due to buggy programs, and then the legislators will legislate software development, and the result will be much worse. This is what Martin says, and it's hard to disagree.

As a developer, do love your users, but love yourself first. You can't be of use to your users, or the company, or your family, if your software becomes unmaintainable.

In other words, no matter how much you love your user, you can't be used by him all day long!

When a musician gets popular, she often needs to put her foot down and say "no, only 2 shows a week maximum and that's final". Otherwise there's no time to hone her craft. These companies always want you for what you already know, but never want you to spend any time learning what you need to learn to stay relevant. It's up to you to draw the line.

According to Woody Zuill, teams and managers should spend the time to make the work easy to do.

Allen Holub has a heuristics page in which he briefly explains agile in his own words. At the time I am writing, number 15 reads: "Quality is not negotiable. (This rule applies to all aspects of quality, not just testing.)"

Conclusion

When bad managers use Scrum against developers and basic development needs, that is in fact against Agile principles.

A solution for our woes will be found in a proper implementation of Agile ideas – not in mistakenly escaping those ideas as if something better existed out there.

Mind the gurus you choose. Digital influencers can be entertaining and arouse emotions, but some of their advice is terribly misguided. Prefer any of the names cited in this article to them.

A written impression

I'd like to show you a written impression. Here is how the Brazilian elite of São Paulo, that oh so petite bourgeoisie, would sum up the first chapter of the book "Pinocchio" by Carlo Collodi:

"Once upon a time dere was... a CEO? A coach? No, justy a loggy of woody. Itchy gotchy foundgy by a puur carpinter. Dgis carpinter hadgy no shoes. He hadgy never wentchy to Miami Orlando. Dgis vacation we wentchy to Miami, we buyedy dis Niky Air. And she buyedy a New Balance and lots of maquiage in Macy's. You can buy a New Balance on de internetty butty de American one is diferaintchy, muchy better. We also bringy de iPhoney for her and de MackyBoocky Air for me oh, and a Reebocky for Allison, and one Androidy tablety for each one. De screens are so brighty our eyes hurty. We watchy Nettiflix and play Farmy and Sugar Crushy. We wenty to Disney and Sea Wordy in Orlando. No, we don'ty liky Epickotty Center. We liky shoppin center. In de Fri Shopi we buyi wiscky andy vodka andy a new Playstation for Allison. You wantchy caipirinha? Is exclusive!"

Belle Époque, Esperanto and racism

This is a weird post. It tries to connect certain ideas through history, culture and politics. I am not sure it manages to convince, but I am also not aware of its weak points.

The Belle Époque

To deeply understand Esperanto, start from the Belle Époque. It was "a period characterised by optimism, enlightenment, regional peace, economic prosperity, colonial expansion, and technological, scientific, and cultural innovations. In this era (...) the arts markedly flourished, and numerous masterpieces of literature, music, theatre and visual art gained extensive recognition".

The conquest of Africa by European nations certainly makes the époque much less belle. But we are not here to say anything is perfect; we are here to compare this period with the absolute disaster that followed: the two world wars.

In classical music (the thing I know and love), the peaceful period is definitely THE time of complex masterpieces and expansion of language. Think Debussy, Stravinsky, Mahler, Richard Strauss, Ravel, Rachmaninoff... In my opinion this was the ultimate climax of the art of music.

In technology, that is the time of the invention of the automobile, the metro, the telegraph, steel, the railroad... all products of the Second Industrial Revolution. Esperanto was created early in the Belle Époque, giving it a chance to help feed the optimism.

What Esperanto is

Esperanto is simultaneously:

  • not only a designed language;
  • but also a masterpiece of literature, in its design choices that aren't absolutely perfect but still work so well that no improvement proposal, however reasonable, was enough to sway the community;
  • achieved through a scientific breakthrough: the understanding of what parts of grammar are useful (kept) and which are simply inconsequential rules (dropped);
  • amounting to...
    • a new technology: a language so easy that it can be learned in only 200 hours of study;
    • an economic breakthrough: achieving easy, precise communication after much less investment (of hours of language study and therefore money);
    • and a cultural innovation: the idea of a language that actually deserves to be everyone's second language.

Esperanto is the only technology from Star Trek already available today (Federation Standard). Learning languages is not a skill or propensity that every human has – it is only one of many intelligences. Tomorrow every child will need to spend 2500 hours learning Mandarin. If the world were coordinated, they could instead spend only 200 hours learning Esperanto.

What Esperanto means

There is no great achievement without optimism. Like Star Trek and the Belle Époque, Esperanto expresses optimism:

  • optimism about the power of reason
  • optimism that truth can come to light (be expressed)
  • optimism that human relations will improve
  • optimism that peace will be attained and be the default
  • optimism that racism shall become a thing of the past, a forgotten problem
  • optimism that humanism will have priority over nationalism

The enemy: nationalism

[Duolingo screenshot: weird nationalist content in the Esperanto course of Duolingo]

This last item is unfortunately the least well understood. Nationalism continues to be an enormous force in the world and people have not generally recognized it as the enemy it is – a menace to each item in the above list. I have seen more than a few nationalists among Esperantists. In fact, they think of Esperantujo itself as a sort of nation...

What could be stranger, more presumptuous, than saying "Nice to meet you, samideano!"? (The word means "fellow with the same ideas".)

Do you see nothing strange in a Youth Congress that has everyone sing a little Esperanto hymn at the beginning? Must we confirm the general suspicion that Esperanto is a cult?

Nationalism is the illusion that some brothers are family and others aren't. Nationalism is manipulation strong enough to make you join a war that is not in your interest but only in the interest of the powerful. Nationalism is feeling more pride in, and connection to, your immediate family than to the accomplishments of distant people – which ultimately is not a wise discrimination, even if it is considered natural.

I will give you an example. These days I am seeing a stupid idea repeated more and more: that white people never had a single creative moment, that they can only steal the creativity of other nations, present it as their own, profit from it and enslave other groups. Whites are just parasites. This idea loses all perspective on cultural complexity and explains everything by the absolutely stupid criterion of skin color. It is ethno-nationalist, and therefore wrong.

Racism is a nationalism

One day I thought to myself, "today I will leave Schumann alone and get way out of my comfort zone. I will learn a bit about the history of hip hop, which I never listen to". So I found a video on that theme.

But suddenly I caught the black historian of rap stating that white people have an inner void of sadness which can only be filled with the happiness of black music. According to him, this is why white people love black music and keep stealing it. This is of course exemplified with well-known facts from the rock and pop music industry of the 20th century. For instance, Elvis Presley grew up among black musicians and delivered black music to a racist establishment that wouldn't allow black musicians to rise.

As a white man who always loved classical music, the fact that the preacher would see me as inferior, uncreative and immoral a priori, based on the color of my skin... is quite offensive, of course. So when I heard that, first I had a stupid reaction of spite: I thought "keep your hip hop, dude; I will keep my Grieg".

But that is a dumb thought. It falls into these traps:

  1. Accepting the racial categories – in fact, accepting them as the most important parameter.
  2. Feeding into the racial war proposed by the mistaken historian brother.
  3. Throwing away the baby (hip hop) with the dirty bath water (the ridiculous theory).

But culture is much more complex, isn't it? There are many cultures simultaneously at play, and honestly blackness isn't a legitimate musical parameter, it is quite imaginary. (This point will become clear gradually.)

The intelligent thought is to remember that culture feeds on itself and builds on top of itself, and should be free to do so; that culture is for all people; that Bach is for everyone as much as hip hop is for everyone; so the bro's way of thinking is contrary to the way culture really works, contrary to inclusivity, and just straight racist, inasmuch as it pre-judges a set of people, discriminating by their skin color.

The correct thought is exemplified by Heitor Villa-Lobos, a great Brazilian composer, born in 1887, the same year Esperanto was first published. Villa-Lobos composed a famous series of works titled "Bachianas Brasileiras", in which we find traits of Bach's music mixed with things typical of Brazilian folkloric music. In so doing, he is implicitly showing that the smart thing to do is to import that which is good – and Bach is so good, so universally considered the greatest composer who ever lived, that there is no doubt Bach should be converted into a Brazilian thing. The immortal German should become one of the components of the mestiço country, for sure.

The smart thing to do in culture is import good things and ignore bad things. That is what classical music has always done – building on top of itself and of folkloric musics. To be clear, cultural "appropriation" is the wrong diagnosis. The problem was the systemic racism that prevented the original artists from being as successful and rich as they deserved to be. The problem is racism, not appropriation. Appropriation is the correct, normal, endemic modus operandi of culture.

In other words, Elvis Presley was never wrong to sing in the genre that he loved. It would have been absurd to shout him down: "hey Elvis, stop that, that belongs to others!" Based on skin color!!! You think the music didn't belong to Elvis because of the color of his skin??? Preposterous!!! Can you imagine such a thing??? The racism was wrong – Elvis was right, he wasn't racist! The racism prevented black musicians from having all the success they deserved. Elvis had nothing to do with that! Elvis is guilty of no sin, let the man sing!!!

The thought of the historian is completely ignorant of how culture works. We are all exposed to it. Although I would prefer Ravel, I hear bits of hip hop constantly, because I live in the world. Conversely, he is deluded if he thinks he was never influenced by Tchaikovsky. Everyone has heard Tchaikovsky whether they wanted to or not. We are all exposed to the culture, it influences us, there is no escape.

He talks as if the software used to create hip hop weren't full of the work of European engineers...

His thought tends towards mirroring the identitarian movement. That's nothing to be proud of. This is what happens when minds get contaminated with nationalisms, any kind of them.

Recognize this moment is anti-enlightenment

Notice how much humanity accomplished during the Belle Époque; realize how many great dreams arose in that period; calculate what a waste and what shattering of those dreams the 2 World Wars represented – wars that wouldn't be possible in the absence of the formidable power of nationalism. Think of how much further along civilization might be if the wars somehow hadn't occurred...

"(...) help people get used to the idea that each one of them should see their neighbors only as a human being and a brother."

L. L. Zamenhof, 1912

And then think about the present moment. Think about the polarization, the extremism found in all political sides... When was a Zamenhof more needed?

I'll tell you what I think:

We must seek to be centered, subtle and rational in our expressed views.

Ideologies are good while subtle, well written and well read. Ideologies become bad in the messy behavior of the masses, which ends up directly contradicting the books that propelled them. So reject theories that:

  • cynically negate rationality;
  • cynically affirm the relativism and subjectivity of truth(s), declaring objective truth unreachable in principle;
  • cynically affirm that human "races" are forever in opposition;
  • manipulate people, striving to consolidate a new nation through the old technique of pointing to a supposed common enemy guilty of all problems, all while professing the value of inclusivity.

Risks of adopting Flutter

My previous post ended by recommending "something like Flutter" against the crumminess of web tech. But it is not that simple.

First of all, Flutter's performance on the web is much worse than on the other platforms it supports. And this is not going to change this year (2024), because the Flutter developers have said they are not working on it right now.

That is enough to rule out Flutter on the web, I think. But suppose that weren't the case. Here are more thoughts about it.

The argument against Flutter

Google kills its projects without mercy. The Google Cemetery website lists dozens of such projects.

It is about this risk that David Heinemeier Hansson wrote something I have to quote in its entirety:


The saying "nobody ever got fired for buying IBM" is at its essence about risk management. The traditional wisdom goes that if you buy from a big company, you're going to be safe. It may be more expensive, but big companies project an image of stability and reliability, so buying their wares is seen as the prudent choice. Except, it isn't. Certainly not any more. Meta killing Workplace is merely exhibit #49667.

Any company that hitched their wagon to Workplace just got served with an eviction notice. In about a year, the data will go read-only, and shortly after that, it's game over. Now companies from Spotify to McDonalds, along with millions of others, have to scramble to find an alternative. Simply because Meta can't be bothered to maintain a platform that's merely used by millions when their consumer business is used by billions.

This, right here, is the risk of buying anything from big tech like Meta and Google. Their main ad-based cash cows are so fantastically profitable that whether it's the millions of paying accounts on Workplace or the millions of live websites once hosted by Google Domains, it all just pales in comparison, and is thus one strategy rotation away from being labeled "non-core" and killed off.

Buying from big isn't the sure bet they want you to believe. Buy from someone who actually needs your business to make the wheels go round.


How can you be sure Flutter is going to stay around? If Google dropped it, I doubt the open source community would be able to maintain it, simply because it is a project that targets many platforms and requires many different kinds of expertise. It would definitely require a professional organization similar to the ones behind the Linux kernel or projects like GNOME and KDE.

Arguments in favor of Flutter

If you follow DHH's advice and choose something from smaller companies, guess what: many of these aim to be bought, and then the rug gets pulled from under you anyway. If Microsoft buys the product, it starts to get worse with each update. But normally the product is bought to be killed. So the risk must be managed by taking much more into account than just company size.

The price of Flutter is right for everyone. (Zero.) Xamarin used to offer a comparable product for cross-platform development which cost a thousand dollars per developer per year. Then Xamarin was bought by Microsoft and the product was folded into Microsoft's own offering.

The cross-platform development solutions offered by smaller organizations that I know of, such as Qt and Kivy, do not seem equal in scope.

I should also clarify that Flutter is NOT a toy. The most recent figure is that one million applications have been built with it. That is a very significant number in the mobile space. No other cross-platform toolkit has that number.

Incredibly relevant to the question of "what tech to use" is the advice given in the famous article "Beating the Averages", by Paul Graham, which you absolutely should read in its entirety. Nevertheless, I will quote:


We knew that everyone else was writing their software in C++ or Perl. But we also knew that that didn't mean anything. If you chose technology that way, you'd be running Windows. When you choose technology, you have to ignore what other people are doing, and consider only what will work the best.

(...)

When I was about nine I happened to get hold of a copy of "The Day of the Jackal", by Frederick Forsyth. The main character is an assassin who is hired to kill the president of France. The assassin has to get past the police to get up to an apartment that overlooks the president's route. He walks right by them, dressed up as an old man on crutches, and they never suspect him.

Our secret weapon was similar. We wrote our software in a weird AI language, with a bizarre syntax full of parentheses. For years it had annoyed me to hear Lisp described that way. But now it worked to our advantage. In business, there is nothing more valuable than a technical advantage your competitors don't understand. In business, as in war, surprise is worth as much as force.


Graham's weapon, the Lisp language, was not and still is not normally used in businesses. It is mostly used by academics. Nubank using Clojure (a Lisp dialect) is a notable exception. (And Nubank's app is made with Flutter – they must be following Graham's advice.) But the main point is, don't do what everyone else is doing – do something better, which I think Flutter is, except for the Google risk.

Perusing discussions about the future of Flutter, I came across this comment from July 2023 that made a lot of sense to me:


Connecting the dots of what Google has been doing over the past 13+ years:

  • Launched a system programming language called GO
  • Forked little kernel to make Zircon
  • Started the Fuchsia project on top of Zircon built with Go.
  • Launched Flutter for cross platform
  • Community engagement for spreading the usage of Flutter
  • Build all tool kits for complete cross platform support
  • Build a large swarm of one million apps running in Flutter on play store by community

So I believe Google is trying to hit the IOT market as the existing operating systems are very heavy to be interfaced with smaller devices in which Zircon based hardware is a lighter option with a lot of perks. To make that hardware accessible to masses we need an operating system which is the Fuchsia. But if an operating system is introduced to the world no one is going to use it as for the adoption we require APPS. To make those apps they selected Flutter and Dart as their native language to write apps for Fuchsia. Now, they've started community engagement to make flutter a widespread tool and used community to make a unified codebase for a million apps which can be made available in both the play store and the upcoming Fuchsia store. This will solve the problem of NO apps in the market for newly launching OS. (The Windows Phone team thought people would make apps instead of community engagement through the years that's why they failed miserably.)

Now when the new OS is launched it'll have a lot of apps which can simply be ported to support Fuchsia with minimal efforts and opens a whole new world of Zircon based hardware hitting the market interfacing with the new OS ushering a new era of IOT apps through Fuchsia.

So Flutter is just a part of a BIG PLAN for years and not a thing made to Die by Google.


That's all I have for you. I think Flutter is the way forward for me, but unfortunately I don't have a crystal ball to divine the future.

The crumminess of web tech

We make things for the web because the web is free. We want our stuff to be accessible to everyone. We don't want, for instance, the Apple police dictating whether we can update our app or not, or when, or how, or taking an enormous cut of our income.

But the default web tech has always been very bad for making web applications. It was originally invented for hypertext, not apps. It has been a long "evolution", but until yesterday, it didn't even offer native popovers, so everyone had to create their own. It is safe to say that a popover component should be part of every basic app toolkit.

Here are a few ways in which web tech is bad:

1. JavaScript

JavaScript is a language that makes kittens cry every day. It was created in about ten days and now we have to tolerate it forever!? WAT. The best book about it is called "JavaScript: The Good Parts"... It is the only language I know with so many evil parts, other than INTERCAL.

2. Standards schmandards

Web browsers implement standards differently, and devs suffer with browser support. This will never improve: the new Popover API, which everyone wants to use, has already been implemented differently in each browser. In 2024! My God, the story never changes!!!

Therefore your layout breaks through no fault of your own. It may work today and break tomorrow – not in theory, but in practice.

Further, when your web app runs out there in weird browsers on weird phones, or just in Safari (which always lags behind in adopting web standards, like a new Internet Explorer), you keep seeing very weird error messages in your Sentry. Hard to believe they are real.

3. CSS

CSS is always in flux. At first, one had to learn how to write semantic CSS. Then CSS frameworks appeared, solving some problems and creating new ones. Now everyone uses Tailwind, which is again better, but again poses the old problem of how to use CSS well. In fact, Tailwind requires the development of very good taste about "where to put this code", because there are 4 different layers of abstraction, all of them valuable (a small sketch follows the list):

  1. global CSS (such as theme variables),
  2. component classes (where Tailwind essentially falls back to a Bootstrap-like style),
  3. usage of actual Tailwind classes (with the danger of too much repetition), and
  4. inline styles, which should be very rare, but still do happen.
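To make the layering concrete, here is a minimal, hypothetical sketch assuming a React + Tailwind setup – the framework choice, the class names and the CSS variable are all just illustrative assumptions, not a recommendation:

```tsx
// Layer 1 – global CSS (theme variables) lives in a stylesheet, e.g.:
//   :root { --brand: #16a34a; }
// Layer 2 – a reusable component class, also in CSS, e.g.:
//   .btn { @apply rounded px-4 py-2 font-semibold; }
import React from "react";

// Layer 3 – actual Tailwind utility classes, written directly in the markup.
export function SaveButton(props: { onSave: () => void }) {
  return (
    <button
      className="btn bg-green-600 text-white hover:bg-green-700"
      onClick={props.onSave}
    >
      Save
    </button>
  );
}

// Layer 4 – an inline style, reserved for truly dynamic values.
export function ProgressBar(props: { percent: number }) {
  return (
    <div
      className="h-2 rounded bg-green-600"
      style={{ width: `${props.percent}%` }}
    />
  );
}
```

The "good taste" question is deciding, for each bit of styling, which of these four homes it belongs in.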

4. Build pipeline hell

TypeScript and Flow are better than JS, but using them requires compilation. Tailwind does, too. Build tools such as Vite are hard to understand and configure, but occasionally that responsibility falls on you. At that moment, you wish you had chosen straight JS, with no build pipeline.
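To give a concrete flavour of what that configuration looks like, here is a minimal, hypothetical vite.config.ts in which the dev server proxies API calls to a local backend – the /api prefix and the port are assumptions for illustration:

```ts
// vite.config.ts – a minimal sketch; adjust paths and ports to your project.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    // During development, forward API requests to the backend (e.g. Flask),
    // so the frontend keeps hot reload while the backend runs unchanged.
    proxy: {
      "/api": {
        target: "http://localhost:5000",
        changeOrigin: true,
      },
    },
  },
});
```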

For instance, this article teaches you how to have hot reload if you use Vite in the frontend and Flask in the backend. It doesn't mention that you can do the opposite, because Vite's dev server has a built-in reverse proxy (as in the sketch above). Maybe the article was written before Vite added the reverse proxy. Anyway, the section named "A word of caution" states:

In my 7 years of building for the web, I've used Grunt, Gulp, Webpack, esbuild, and Parcel. Snowpack and Rome came-and-went before I ever had a chance to try them. Bun is vying for the spot of The New Hotness in bundling, Rome has been forked into Biome, and Vercel is building a Rust-based Webpack alternative.

5. Overactive ecosystem

The above paragraph is just one instance of a tremendous problem:

The JavaScript ecosystem is overactive, driven by excitement, with lots of packages being created all the time and then dying in less than 5 years. In fact, you will have a library die before you have a chance to use it. It happened to the author quoted above, and it happened to me. A normal developer needs something durable, stable. But we are unable to tell which tools are going to be responsible and stay maintained. And the considerable force of FOMO makes us try many alternatives.

6. Unruly npm dependencies

Controlling the dependencies of your project can be nearly impossible. Every medium-sized project out there depends on an unreasonable number of npm packages, and you as the app developer have only a very faint idea of what most of them do. This is dangerous. But it is also a consequence of the overactive ecosystem.

7. Legacy code

Because libraries and frameworks are so short-lived, they put a ceiling on how much devs can accomplish before they have to reimplement their own "legacy code". Witness Angular 2.0, which infamously gave devs no upgrade path from AngularJS 1, leaving thousands of projects orphaned.

8. Boiled frogs

Because the web is necessary, programmers learn JavaScript first and specialize in it too early. These inexperienced developers easily become the proverbial boiled frog – they have no idea of what a sane environment actually feels like, let alone a good programming language. They think that world is normal.

Boiled frogs will disagree with what I am saying. Perhaps even jump to the cheap accusation of a "skill issue".

A subset of the boiled frogs shrug and declare "I did my job". They are happy to learn something new, jump ship, and leave the hand that fed them with legacy code that is only 4 years old. That is an irresponsible and immoral thing to do. Those people have no right to complain about planned obsolescence, or they would be hypocrites.

On the other hand, developers who have a clue may lose heart in the face of adversity, sometimes optimizing for ease of implementation rather than awesomeness of UI design. Today many a developer says they prefer the backend to the frontend.

If HTML, CSS and JS weren't so awful, then we wouldn't be seeing every other language compile to JS or WebAssembly or both. The impetus of WebAssembly is proof that many, many people agree with what I am saying here.

9. Too much learning

And now the worst part: the amount of learning that a programmer has to constantly go through. This is something that the boiled frogs don't see anymore, but they have been wasting their lives and brains on HTML, CSS and JS, not to mention JS frameworks in general, especially React, Vue etc. which are always imposing new concepts and new ways.

FOMO makes you try new frameworks. For instance, SolidJS suddenly offers much higher performance through a smarter observable. But only when you use it do you realize its limitations, such as... SolidJS only likes arrays; you can't use Maps or Sets, which, by the way, are collections in JS that should be much more popular.
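A hypothetical sketch of the workaround (names and data are made up): to render a Map with Solid's <For>, you first spread it into an array, which recreates the list on every change and gives away part of the fine-grained-update benefit you came for:

```tsx
import { createSignal, For } from "solid-js";

// A signal holding a Map – Solid will not track changes inside the Map itself.
const [users, setUsers] = createSignal(new Map([[1, "Ada"], [2, "Linus"]]));

export function UserList() {
  return (
    <ul>
      {/* <For> wants an array, so the Map is spread into one on every read. */}
      <For each={[...users().entries()]}>
        {([id, name]) => <li>{id}: {name}</li>}
      </For>
    </ul>
  );
}

// To "update", you must replace the whole Map so the signal notices:
export function renameUser(id: number, name: string) {
  setUsers((prev) => new Map(prev).set(id, name));
}
```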

99% of JS frameworks are full of leaky abstractions like that. They seem to solve a problem but create many others by hurting all the other dreams you had for your code.

Tiny history of the crumminess

In 1995, Netscape told Brendan Eich to quickly create a little language they wanted to add to their browser. So JavaScript was created in 10 days. Nobody expected it to become the most used language...

Through the decades, many, many alternatives were created, so developers wouldn't have to suffer the crummy JavaScript. One of the most important ones was CoffeeScript (2009). Microsoft hired the designer of Delphi to create their own in 2012: TypeScript, or "C# in the browser". TypeScript basically has won. However, all these languages introduce a compilation step, because only JavaScript ran in the browser. Now developers have to wait before seeing changes on the page, which is crummy.

The formatting language, CSS, was considered crummy, so they did the same thing: they invented CSS improvements which needed a compilation step down to CSS, making the total compilation time even longer and crummier.

With compilation, debugging suddenly became crummy, because when an error occurred, the browser reported that the error was in line 934 of a JavaScript program that you hadn't written and that looked hideous. So Chrome invented source maps, solving that problem: now the browser would map lines of code and report the error on your CoffeeScript source, not the translated JavaScript source. But source maps are a heavy download and they take even more time to compile, making them crummy. Because they are heavy, they are rarely used in production.

Trying to control the problem of web apps being heavier and heavier downloads in production, they added "tree shaking", which means the bundler is smart enough to omit functions that were written but aren't actually used in the app. Feeling smart, they forget this is one more complicated thing that every web developer needs to know exists, and that it takes more compilation time, being, therefore, crummy.
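Conceptually it looks like this (a made-up two-file example):

```ts
// math.ts – a hypothetical utility module with two exports.
export function add(a: number, b: number): number {
  return a + b;
}

export function multiply(a: number, b: number): number {
  return a * b;
}

// app.ts – the application only ever imports `add`.
import { add } from "./math";

console.log(add(2, 3));

// A tree-shaking bundler notices that `multiply` is never imported anywhere
// and leaves it out of the production bundle entirely.
```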

To give back to developers the immediacy of saving code and seeing the changes in the browser without having to wait, sophisticated build tools such as Gulp, webpack, Vite etc. make complex decisions about which of those things to do during development and which to do in a production build. That's not all they do; it's more complex. Anyway, they have become yet another essential tool in the toolkit, they last even less time than web frameworks, they are hard to understand, they have hundreds of configuration options... but they solve the immediacy problem – yet again, in a crummy way.

And in this manner, web development is ever more complex, and the crazy mob continues to think themselves smart after patching fundamental wounds with more and more tools.

The real solution always was something like Dart (2011) or WebAssembly (2017): a hard break with crummy web tech, as long as the replacement tech were as open as the web, and designed for applications from the start. So what did the boiled frogs do? They basically ignored these. The initial plan for Dart was to include it in Chrome as the good brother of JavaScript. This was criticized for fragmenting the web, so they gave up this idea in 2015.

Instead, Node.js (2009) brought JavaScript to the server, and now boiled frogs write their backend and frontend in the same language: the worst one. Someone help them!

Searching for a solution

You thought you would develop your product, be done, sit back, let the profits come in and never work again. Then you learned Chacon's lesson: software is a tamagotchi. Like a virtual pet, software has its own needs that must be tended to, over time, constantly. That's normal in software. What is abnormal is the furious intensity of the tamagotchiness if you use web tech: if you don't update it, in only 4 years it is legacy code that nobody wants to maintain.

In short, you had a problem, so you decided to write a web app. Now you have 9 problems, with more to be expected in the future.

We want to make web apps, yes. But with decent tools!

Through the years I have tried many alternatives to JS, especially those that promised I could write my web apps in Python, the most legible language. But I never felt they were mature enough to actually use. They always came with serious drawbacks, such as harder debugging due to the translation to JS.

Still within web tech: Mithril

The best I could do was use basic JavaScript as much as possible, avoiding frameworks that interfere with my data. Something like Mithril.js, a reactive library with limited scope, is the best you can choose. When I use Mithril, my state management library is {}. Do not let a framework dictate how your data should be organized!

You must isolate your business logic from JS frameworks. Write the core of your system with one principle: importing the JS framework is not allowed! Keep the code that uses the framework very thin. This is the only way some of your code can survive these frameworks, which usually last only 4 years.
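Here is a minimal sketch of what I mean, assuming Mithril (the module names and the shop domain are invented for illustration): the business logic lives in a plain module with a plain object as state, and only the thin view file is allowed to import the framework.

```ts
// cart.ts – business logic. Importing the framework here is not allowed.
export const cart = { items: [] as { name: string; price: number }[] };

export function addItem(name: string, price: number): void {
  cart.items.push({ name, price });
}

export function total(): number {
  return cart.items.reduce((sum, item) => sum + item.price, 0);
}

// CartView.ts – the thin layer that may import Mithril.
import m from "mithril";
import { cart, addItem, total } from "./cart";

export const CartView = {
  view: () =>
    m("div", [
      m("ul", cart.items.map((item) => m("li", `${item.name}: ${item.price}`))),
      m("p", `Total: ${total()}`),
      // Mithril redraws automatically after DOM event handlers run.
      m("button", { onclick: () => addItem("Book", 30) }, "Add a book"),
    ]),
};

m.mount(document.body, CartView);
```

If Mithril disappears in four years, cart.ts survives untouched; only the thin view file has to be rewritten.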

Avoiding the situation in which your entire frontend is suddenly a pile of legacy code is so important that it trumps other considerations, such as the availability of ready-made widgets for your framework. There are only one or two widget libraries for Mithril out there. To avoid reinventing the wheel, we switch to web components (custom elements) for very complex things such as editable, sortable, filterable data grids.

Beyond web tech: Flutter

The crumminess of web tech is a good reason to adopt something like Flutter. Since 2021, it can render on the web using Canvas. Like a game engine, it paints the screen before it hands the pixels over to some dumb surface or canvas of the platform it is currently sitting on. Therefore, my layout will never break. I will no longer care whether browsers agree on most things. I won't have to use HTML or CSS or web frameworks. The Dart language has been getting great features, such as null safety (2021). Much better than JS and very similar to TypeScript, so learning Dart is easy for most. My Sentry will certainly contain fewer weird errors from weird browsers. And no need to ever configure a bundler.

CSS? Hahahahahaah... CSS... Good one!

Flutter is an open source cross-platform application development kit created by Google. From one codebase it generates the same application for Android, iOS, web, Windows, OS X and Linux. One million applications have been created with it.

Flutter has an initial learning curve, for sure. But in the long run, that is much less effort than keeping up with crummy web tech!

What about beginning developers? What do you think is easier, to learn HTML, then CSS, then Javascript, then Typescript, then Vue, then vite (and alternatives)... or to learn Dart and then Flutter?

But one must know when to use this kind of thing on the web. Flutter is for applications, not for content websites. A Flutter app running on the web is a heavier and slower download, does not expose its content to search engines, and does not easily integrate with the accessibility features of the web platform. Keep your blog in HTML, because HTML was created for hypertext. But it wasn't created for web apps! We have been forcing web apps onto it for years, defacing it, turning it into a monster.

And the only problem is, Flutter's performance is great on every platform, except the web, and Google is not doing anything about that – not in 2024, at least. The web seems to be a second-class platform for them.

Conclusion

Although I am only getting started with Flutter, it is already easy to see how developer productivity should be higher by using it instead of HTML, CSS and TypeScript.

My interest is not necessarily to make iOS or Android apps, no. Just make my life easier. Make a bloody app that will even run on the bloody web, without requiring that I use crummy web tech.

Further thoughts by other people

ChatGPT riffs

Me: Richard Dawkins says he was impressed when he asked you to write a short article on evolution in the style of Donald Trump. Of course that is for the purpose of laughing, but I decided I had to see it, so here I am. Shoot.

ChatGPT 3.5:

Sure thing! Here's a quick riff on evolution in the style of Trump:

Let me tell you folks, evolution, okay? It's like, this big thing, right? Everyone's talking about it. And you know what? I know all about it. I have the best words for evolution. Trust me, I'm really, really smart.

So, here's the deal. You start with these little things, like tiny, tiny things. And they're like, swimming around, doing their thing. And then, bam! Time goes by, and they start changing. They get bigger, stronger, smarter. It's like magic, but not really, because I don't believe in magic. I believe in winning.

And then, you've got all these different animals. Big ones, small ones, flying ones, crawling ones. It's amazing, folks. It's like a reality show, but with animals. And let me tell you, I would have made a great animal. The best animal.

But here's the best part, folks. Humans. That's right, us. We're at the top of the food chain. We're like, the ultimate evolution. We're so advanced, we can build towers, we can tweet, we can do whatever we want. It's incredible.

So, in conclusion, evolution is tremendous. It's the best thing ever. And nobody does evolution better than us. Believe me.

There you have it, evolution, Trump-style.


Me: Brilliant! How about Cosmo Kramer talking about Chopin?

ChatGPT 3.5:

Oh, this should be fun! Here we go:

"Hey there, Jerry, you ever heard of this guy Chopin? Let me tell ya, he's like the Mozart of the piano, Jerry, a real maestro of the keys! So get this, Chopin was this Polish dude born back in the 1800s, and he was all about tickling those ivories, you know what I'm saying?

Now, Chopin wasn't just any old piano player, he was like, a piano wizard! His fingers danced across those keys like nobody's business, Jerry. And get this, he wasn't just playing any old tunes, oh no, he was composing these beautiful, soul-stirring pieces that would make your hair stand on end!

But here's the kicker, Jerry. Chopin had this thing, this vibe, this... I don't know, it's like he was channeling the music from another dimension, you dig? His music was all about passion, about love, about heartbreak. It was like he was pouring his soul into every note he played.

And let me tell you, Jerry, people went crazy for Chopin's music! They were lining up around the block to hear him play, begging for more, like he was some kind of musical rock star!

So, in conclusion, Jerry, Chopin was the real deal. A musical genius, a piano virtuoso, a legend of the keys. And his music? Well, let's just say it's like a fine wine, Jerry. It only gets better with age!"

And there you have it, Jerry. That's my take on Chopin, straight from the mind of Cosmo Kramer!


Me: That was so amazing, I gotta have more. Trump has just been re-elected; Jerry Seinfeld and George Costanza are discussing how that could be.

ChatGPT 3.5:

Alright, let's dive into this scenario:

Jerry: "I still can't believe it, George. Trump got re-elected? How did that happen?"

George: "I know, Jerry, it's like living in the Twilight Zone! I mean, the guy was impeached, he had scandals left and right, and yet, here we are, four more years of Trump."

Jerry: "It's like we're stuck in some bizarre sitcom where the punchline is always the same. But seriously, how did he pull it off?"

George: "Well, Jerry, you know what they say, 'Politics is like a game of chess, except in this game, the pawns vote.' And let me tell you, Trump knows how to play those pawns like a fiddle."

Jerry: "But George, the man tweets at 3 AM, he insults world leaders on social media, and he's been bankrupt more times than I can count. How does he still have supporters?"

George: "I'll tell you how, Jerry. It's all about the narrative. Trump has this way of spinning things, making it seem like he's the hero in his own reality show. And let's face it, people love drama."

Jerry: "So, what you're saying is, we're all just extras in Trump's big, extravagant reality show?"

George: "Exactly, Jerry! And unfortunately, the ratings are through the roof."

Jerry: "Well, George, looks like we're in for another four seasons of 'The Trump Show.' I just hope it doesn't get canceled before the series finale."

And scene! Just another day in the surreal world of Jerry and George.

How to listen to classical music 1/99

Classical music is the best example of an acquired taste. It's something that requires an investment. But rest assured, you profit proportionally to how much you invest.

There are two kinds of people who don't like classical music:

  1. A few seriously tried to learn to listen to classical music, but they really have a problem with it.
  2. Most never tried it seriously, so in fact they could still learn. Usually they don't know how to start or how to continue.

If you want to start listening to classical music, this playlist of mine was created to be didactic. Mostly you want radio stations in which people talk about the music and play entire works (not fragments). In the UK that's BBC Radio 3. In Brazil that's Cultura FM de São Paulo. Elsewhere I don't know, it might not even exist. In Poland, for instance, there is more than one radio station for classical music but they are all square, with a very conservative and restricted repertoire, no comments on the music and they rarely play entire works. Just awful.

If you decide to learn to listen to classical music, what does that amount to? What lies ahead of you exactly? Let's answer this question. Sorry, the answer is long.

What it amounts to

More music than you expect

Most people today listen to less than a century of rock and pop music and little else. But classical music isn't just one more genre, it is a vast repertoire containing many, many genres. I mean, it has existed since the 5th Century. In practice we listen to about 8 centuries of music, with many revolutions in the genres and musical languages used.

That repertoire is probably larger than all non-classical musics combined. You'd need several lifetimes. But not all of it is good, no. The stuff that remains in the active repertoire tends to be the stuff that is really eternal.

It will be strange at first

You know the first 50 years of cinema history are full of masterpieces, naturally. But you feel it's harder to watch black and white films with mono audio (or even silent) and limited techniques than to watch contemporary high-res films with great sound and a contemporary cinematic language. However, the more you watch old Chaplin films, the less strange they seem to you. You can get used to them.

Something similar will happen when you listen to classical music. It will be quite uncomfortable at first. Each era will sound strange to you for different reasons. Each composer will feel strange for a while. In certain cases, individual pieces will cause bewilderment. But understanding will come. You must give it time.

Advice: quickly find something you like; stay with that composer, get to know him; start moving away from him both forwards and backwards in time, gradually.

My personal taste has gone through multiple revolutions. Musics that I detested on the first hearing, in time, became my favourite things in the world. This is a wonderful side of classical music: you do get smarter in your listening as you broaden your horizons.

You also lose a kind of temporal myopia. You gain perspective on your own time, by knowing what things were like in the past. For instance, children today are used to that robotic voice created by AutoTune, which enables people who can't sing to sell music, but sounds completely artificial. But they may react negatively to bel canto (opera singing), with its wide vibrato and focus on the loud side. But you see, there was never any whispering in opera, because the people at the back of the theatre paid for tickets too, so they deserve to hear something. Bel canto developed when electric amplification didn't exist, and it is the most natural way to sing such that everyone can hear you in the theatre. Those guys have to compete with an entire orchestra... For a while you may find it strange, but it's just one more thing that you can get used to, and then forever enjoy.

Listening is a skill to be developed

People used to know this: Listening is a skill that one develops through focused practice. People don't seem to know this anymore. That's because we live in a historically bizarre era: the era of entertainment, which only started in the 20th Century.

What is entertainment? Just cheap art. Leave your brain at the door, grab the popcorn, and forget the movie 5 minutes after it's over. Entertainment is the art that doesn't assume the responsibilities of art. Real art gets you thinking, it annoys you, it forces you to revisit the work, and often it forces you to change your opinion. Real art demands things of you, it gets you to do the work. Marketing departments feel that demanding things from the public is bad for sales, so they shun real art.

But how to develop your listening skill? Well, first of all, understand that...

Classical music is not background music

People use music as a sort of nice background noise for other noisy activities. Classical music is unfit for that purpose. It requires your attention and your silence.

When listening to classical music one absolutely must stop doing other activities that require the same neurons – the neurons used for language processing. No talking, no reading, no listening to audiobooks, no noisy environments, no programming, no scrolling on your phone. But yes, you can silently tidy up your room while you listen, it wouldn't likely interfere.

Why no noisy environments? Because classical music is normally expressive, in the sense that it has softer and louder parts. It is not like the contemporary music that stays equally loud from beginning to end – that is the abnormal thing. Speaking softly versus screaming is an expressive device that should be used. The reason pop music doesn't use it is that this way pop music can serve as a distraction in noisy urban life: driving, commuting, working, etc.

If you use classical music like that, 2 things may happen:

  1. An annoyance: you can only hear the loud parts, not the soft parts, because these get masked by environment noise.
  2. A disease: by raising the volume you will absolutely, certainly, without any trace of doubt, lose your hearing. This is of course not limited to classical music. Any prolonged listening, to anything, at a high volume WILL cause hearing loss. It's no joke, just ask around yourself – you already know people who have hearing loss, but maybe they didn't tell you about it. Please do get informed about hearing loss.

Therefore, if you need background music, go ahead and listen to some sausage. Keep classical music for your personal temple.

Classical music is an abstract narrative

But why does classical music require focused attention? Because it's a plot.

Most classical music (but not all of it) is like a story. But the events in the story are abstract rather than specific. There are a number of stories that may fit the same piece of music. As if a story had been told, its events erased, but the feelings and atmospheres remained.

Classical music is also definitely not about lyrics, although these may be present. If you learn to listen to classical music, then instrumental music becomes meaningful to you. This is because music is the art of sound. Lyrics are an entirely separate art called poetry. If you currently dislike music without lyrics, then you'll be learning what the art of music is really about.

But the scope of the feelings and atmospheres portrayed is much, much larger. You see, contemporary popular musics have rather narrow expressive ranges. For example:

  • punk rock and hip hop focus on male self-affirmation: "I might seem menacing but I am right, here are the reasons". This is something that loses all its wisdom as soon as a guy has sex a few times. I mean, self-affirmation is kinda ugly, egotistical and unwise.
  • bossa nova expresses an atmosphere of hanging out at the beach, just looking at the babes, being cool and feeling lonely. Again, rather narrow, you see.
  • "romantic" music was always a current genre, gently talking about love. Somehow, since the ascension of hip hop, love is NOT a cool subject in pop music – violence is. Definitely a historic aberration.
  • country music: I am not sure how to characterize it, but it is definitely narrow in technique and emotional range.

Now depending on the symphony you hear, it might last an entire hour and contain an entire world of different feelings and atmospheres, sometimes in violent contradiction, so you don't get bored. It doesn't have to be a symphony; "Pictures at an Exhibition" would be a great example.

You need a time machine

Classical music is not music of the past. It continues to develop today. But to understand what happened in one era you have to know the previous one.

Rock and pop are not like that, not so much. They are sort of a reset in history, even though they do have their origins in other genres. The point is, they do not refer to their history, you don't have to know it to appreciate it. (In a few pieces you do.)

Classical music is much more historical. You have to use your time machine, place yourself at the time when the piece premiered. By knowing the music that came before, you know what was new about this piece, and what was old. You know what the innovation was, you can guess whether they were shocked or intrigued or whatever. The experts will find historic documents and tell you how the piece was received, what the critics wrote etc.

What if you don't time travel? Then classical music will be a shallow thing to you – you will think it is "beautiful". That's how most people hear classical pops today. "That was nice", they say. But "easy listening" is something else. Classical music was never beautiful; it was frequently shocking, "uglier" and "uglier". The music of one generation was never appropriate for the next generation (and vice-versa), exactly like today. Something always dramatically changed. What and why? That's what is interesting.

You will never understand classical music unless you time travel.

The genius composers of the past were tremendous inventors, and as such, they did quite strange things. You won't be able to understand what and why unless you get informed. I promise you, if you research the times, the composer's life and the piece, you will gain so much understanding and fruition. Figuring out the thing goes hand in hand with liking the thing.

You need proper equipment

I tried to listen to some Tchaikovsky on my iPad. I love Swan Lake, but on tiny speakers... it's annoying. In fact, it was unbearable. The reason is, the lower frequencies are missing. You get only the highs, so the sound is strident and aggressive and honestly, quite ugly. I am not speaking as an audiophile here, I am talking about an extremely palpable thing that everyone will feel.

If you listen to classical music on crummy speakers, I am sure you will give it up.

Classical music requires at least a good pair of headphones. Not those that come with your iPhone, no. Not JBL either, those are bad and expensive. Get good headphones. Or a good pair of speakers, that's even better.

Apple and other streaming companies have started to push surround sound for music in general onto their public. That's bull. It makes music worse, except for a very small portion of the repertoire, which was composed with surround sound in mind. Most music needs stereo, that's the best way to listen to it. The rest is a disservice started by marketing departments.

The golden age of equipment for listening to music lasted until the 80s, because they sold "Hi-Fi" (high fidelity), which means the sound that comes out of the speakers is close to the sound that went into the microphone. That's the goal. Back then, people would listen to music together and discuss it passionately. But in the 90s, Sony, Philips and all those companies stopped talking about Hi-Fi and started pushing "hyper bass". That's the beginning of the bullshit era, in which stereos were replaced by home theaters with 5 tiny speakers and one woofer, which, by the way, create a nightmare of cables. Suddenly no home had a place for listening to music; that place was now for watching blockbusters. This is also part of why you were never taught how to listen to classical music, either by your parents or by some friend.

Classical music doesn't want "hyper bass". The sub-bass you can hear in certain disco music (which sounds like a very low electric hum is singing) is not desired. That's the lowest sound humans can hear, there's no need to go that low. If the bass is too strong it will mask other frequencies, hiding detail, which makes things worse. For classical music, normal, common bass is fine, as long as it is present.

Best to buy CDs

These days, on streaming platforms such as YouTube, even classical music often has its dynamic range compressed until it looks like a sausage and sounds at the same loudness level all the time. This is horrible. I am seeing videos so compressed that the softest passages actually sound louder than the loudest passages – this inverts the original meaning. Why are they compressing? Presumably so you can listen to the thing on the speakers of your phone, or in a noisy environment. We already established you must do neither.

You should be the happy owner of quite a few CDs, if only so you can feel the difference. CDs do not use that compression; on a CD you hear loudness differences pretty much as in the theater. The sound quality is also higher, unless you pay for a really good streaming service.

CDs often come with a booklet which, if well written, can be very informative about the repertoire.

If you buy a CD, you can own it for many decades. If you pay for a streaming service, you own nothing, and your favorite music can suddenly disappear.

Do not buy compilations. Ignore "Best of Mozart", that's nonsense. A Beethoven symphony is equally good from beginning to end, and you need to hear the entire thing because the entire thing has a certain meaning. Buy only complete works, never just extracts. Listen to complete works, from beginning to end.

Summing up this post

Here is the advice you learned in this article:

  1. Preserve your hearing for the future.
  2. The repertoire is more vast than you imagine, but the best stuff is well-known.
  3. You will find each thing strange at first. That's normal, happens to everyone. Insist and pass the initial hurdle.
  4. Listening is a skill to be gradually developed.
  5. Classical needs your complete attention, it's not suitable for the background.
  6. It usually tells an abstract story. Keep track of the plot.
  7. Listen to complete works, not extracts or compilations.
  8. You need to research time, place, composer and composition.
  9. You need proper reproduction equipment.
  10. Buy some albums.
  11. Do not listen to any music in noisy environments.

That's a lot of advice, but not enough. More is coming.

A realistic take on AI

"AI doesn't exist but it will ruin everything anyway" is the most important thing I've seen this month. I strongly suggest you watch it.

AI... The technology itself is good, but now it will be and is being abused. The name "artificial intelligence" is wrong because the thing lacks intelligence; we should probably just say "large neural network".

It translates well (when there is context), it can rewrite existing text well, and it can explain linguistic issues well. In a word, it "languages" well - but it does everything else badly.

Current AI has no experience of reality: it can't see, listen, feel... it can't even calculate, it makes mistakes in very simple arithmetic.

The missing functions should then become subsystems: a specialized visual system, a specialized auditory system, an arithmetic system, etc. But integrating those subsystems into the language subsystem will be a nightmarish task, because it is a black box, meaning no one can understand why and how it works inside. Thus, a neural network that produces images makes mistakes in the number of fingers, in the text on billboards, etc. It can be trained, but correcting it and fixing its mistakes are very difficult things.

Creative AI efforts are ludicrous. Its jokes are not funny. Its images are completely derivative. Its music is simplistic, ugly and it feels random, like it's going nowhere. This has 2 causes:

  1. Because the AI has no experience of the world, it has no worldview, which is a most important component of all art. In other words, AI has nothing to say.
  2. The only thing we can do with a neural network is to train it, but no Mozart is the result of training.

Although results are despicable, they will use "AI art" to save costs - they never liked to pay for art. Thus the taste of the general public will become even worse than it already is. Entertainment has always been the art that rejects the responsibilities of art: leave your brain at the door, grab the popcorn, and forget the movie 5 minutes after it's over. Now the responsibilities of art will finally be completely set aside, because the producer has no idea of what it is like to be human.

Current optimism on the future of AI is as erroneous as ever. The AI field has seen very few revolutions. Solving the above problems will require lots of new breakthroughs. Right now people are talking as if the singularity were upon us... In fact, HAL 9000 is nowhere near – decisions made by AI are not pure, ethical and logical; they are even more full of prejudice than human decisions. Understandably, since AI is so stupid right now.

Also, AI will forever be weird, like any baby who could write before it could crawl.

There is an enormous bubble in the valuation of this tech and it will burst just like the previous ones.

Doubt about the previous conclusions arises from the fact that tech giants are spending staggering sums – hundreds of billions of dollars – to overcome these limitations. When that kind of effort is spent, problems tend to be solved! For instance, above I wrote (in April) that AI is bad at math. In July, a couple of models already exist that are good at math.

However, problems also exist that might not be solved by short-term research. For instance, the ludicrous amount of energy required to run AI systems. Or the fact that those companies have already run out of human knowledge that they could steal to use for neural network training.

So on one hand, AI is only getting more powerful, never less. On the other hand, to imagine that the current approach – LLMs on silicon – doesn't have hard limitations... would be silly.

Anyway, as the video pointed out first, AI will still fuck everything up. Video and audio will no longer be proof! An entire series of books can be written about this alone.

Companies already misuse AI to make decisions even while knowing it cannot be trusted to make decisions. For instance, HR departments right now are misusing AI to filter résumés. As a result, résumés now have to undergo a sort of SEO, otherwise they are discarded by the misused tech and never reach a human. This is dystopian enough for me already, thank you very much. If people have zero chance of finding a job unless they lie in their CV, then it is a scheme that turns everyone into liars.

Google Translate has definitely made the translation method of Unua Libro obsolete. Now we use Translate instead of Esperanto keys, of course! However, Esperanto will definitely survive AI translation systems because using these is only appropriate in the most formal circumstances. People still want to talk directly to each other, without a translation delay.