The Esperanto bus

I know that in Warsaw there is ulica Ludwika Zamenhofa and ulica Esperanto. Worthy homages. But I don't think there's much to see in those places.

Memorial plaque in Warsaw

This week I was sightseeing in Warsaw with a friend. Among many conversation topics, I mentioned that this year I learned Esperanto. I explained that I keep studying it because I like the design decisions behind the language. I told him that learning Esperanto takes about 180 hours of study, English about 900, Polish about 2,000, and Chinese about 2,400.

Not a minute later, we saw a bus called "Esperanto" on the street. An ordinary city bus, probably taking passengers toward Esperanto street. It was heading for the old city center.

Marketing professionals know that it usually takes 3 or more impressions to make a sale: a consumer buys the thing only after seeing it a few times. In that sense, the passing bus certainly helped me not look like a crazy nerd to my friend, who had never heard of Esperanto before.

Bus to Esperanto street

Later I told this story to my wife, who has no intention of learning Esperanto. She joked:

— And then you signaled for the bus to stop, and when the door opened you sang to the driver: "Buenan diooooon"... and he turned to the other side, muttering "I swear, this bus line is the worst in the country"...

"Buenan diooooon"! I couldn't make up something this sweet! (Correct Esperanto would be "Bonan tagon".)

And she's even right, since I pick my clothes like a 4-year-old who sees a t-shirt with a dinosaur on it and feels it represents him. In Warsaw I bought a Chopin-branded cap and a Chopin-branded t-shirt. I would definitely also wear clothes about Esperanto if I could find them. It's wonderful to live in an exceptional time when one doesn't have to hide one's eccentricities!

But the story isn't over. My friend and I came upon some of the city's shared bicycles, and I pointed out:

— You see that word, "Veturilo", the name of the urban bike network?

— Yes.

— That's an Esperanto word. It means "tool to get around".

— Why is the name in Esperanto?

— More than a decade ago, when the urban bicycle network was being set up, the city let citizens suggest names and then vote. The head of the Polish Esperanto Youth suggested "Veturilo". Ordinary Varsovians didn't much care (the vote wasn't important to them), while he rallied the worldwide Esperanto community online to vote for it, so "Veturilo" won.

Veturilo bikes

And then when we had lunch, I actually wanted a Coke, but ordered a Mirinda instead...

Anyway, it is not true that the Esperanto bus has passed; we can still get on it at any time. As the dollar and the English language step aside for the yuan and the Chinese language, the world would do itself a huge favor by agreeing on Esperanto instead, saving billions of dollars and trillions of hours of study.

In other words, sooner or later, economics guarantees the final victory.

In 1986, China even proposed that Esperanto become the main language in the United Nations.

No, music is not math

Stop saying "music is math".

How much mathematics was needed to develop musical instruments? Plenty, for sure! But the same is true of cooking utensils, and nobody says that "cooking is mathematics", or that "driving a vehicle is mathematics"...

Mathematics is the basis of all sciences. There isn't a single subject that has nothing to do with math; everything needs numbers.

When people think of the information in music, they basically think:

  1. That the music runs through time, and time is measured, and most music uses fractions to specify durations -- a musical score is primarily a function of notes over time, and the duration of all notes is planned.
  2. That at the same time, the melodic lines or musical events (tones) have another dimension: pitch, which is specified by the musical scale (do re mi fa sol la ti) or measured in Hertz; for example, the most usual pitch of La (A) is 440 Hz.
  3. That tones simultaneously have intensity, which means that sounds can be accentuated or hidden, and they can be loud or soft. Intensity is also dynamic, meaning that a single sustained note from a brass instrument can, for example, start with an accent, drop quickly to a low volume, and then grow louder again.
  4. That tones at the same time express timbre (sound color), which mainly depends on the musical instrument and the way it is played. Timbre is the most mysterious and difficult parameter to measure, but science does measure it... using mathematics.

In short, musical notes have 4 parameters: duration, pitch, intensity and timbre. As a result, music as a whole has more or less the same 4 parameters. Over the course of a piece, these parameters change independently and constantly, and great composers establish relationships between all of them.
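As a minimal sketch of these four parameters, here is a tiny data model. The class and field names are my own invention (no score format uses them); only the equal-temperament formula and the A4 = 440 Hz reference are standard facts:

```python
from dataclasses import dataclass

# The A4 = 440 Hz reference and equal temperament (each semitone
# multiplies frequency by 2^(1/12)) are standard; everything else
# here is an illustrative invention.
A4_HZ = 440.0

def pitch_hz(semitones_from_a4: int) -> float:
    """Frequency of a tone n semitones above (or below) A4, in Hz."""
    return A4_HZ * 2 ** (semitones_from_a4 / 12)

@dataclass
class Tone:
    duration_beats: float  # 1. duration, usually a fraction of a beat
    pitch: float           # 2. pitch in Hz
    intensity: float       # 3. intensity, e.g. 0.0 (silent) to 1.0 (fortissimo)
    timbre: str            # 4. timbre: the hardest to quantify; a mere label here

# Middle C (C4) is 9 semitones below A4:
c4 = Tone(duration_beats=1.0, pitch=pitch_hz(-9), intensity=0.5, timbre="piano")
```

Even this toy shows the point of the essay: the numbers are easy; nothing in the model says anything about what the music means.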

But, so far, we have only concluded that the technique of music depends on numbers. Yes, the basic parameters can be expressed in numbers -- you count the beats while you play: 1, 2, 3, 4, repeat. However, the message is always human and often takes place in another dimension.

Hmm, a very mathematical composer... Have you ever heard the music of Iannis Xenakis? He developed a compositional technique called stochastic music, which uses statistics during composition. He was an architect and he wrote computer programs to help him compose. In this case, I agree that music has a deep connection to mathematics. But the computer didn't compose -- Xenakis did.

By the way, you've probably never heard his music; you probably even dislike such musical modernism; so why do you keep saying that music is math?

Most composers aren't mathematicians; they do little more than count.

Let's take another example: Frédéric Chopin. The first thing about Chopin is that his music contains a sentimentality that is undeniable. Some Chopin music conveys the sadness of sadness, the misery of misery. To achieve his unique nostalgic effect, he developed a combination of existing techniques (such as pedal tones, chromatic harmonic changes etc.). But even in the happy pieces there is a tenderness that is not found in other composers.

Of the four parameters, the fourth is straightforward in Chopin: he always uses the piano. Timbre is only a consideration in piano solo music inasmuch as the piano is a symphonic instrument, capable of expression comparable to an entire orchestra, under just two hands.

Chopin liked 4 things: the piano, Bach, opera, and Polish music.

  1. Chopin advanced piano technique, but he did not do it alone. He knew the pianists and composers of his time, who also demanded more than Beethoven had in their scores.
  2. From Bach he learned that grace in the art of music lies in simultaneity -- simultaneous melodies and the relationships between them. The polyphony of the most brilliant composers is often more complex. This is the case in Chopin. But some listeners don't realize that -- they are not aware of the complex polyphony hidden in Chopin's "accompaniment", which is much more than accompaniment. A good pianist will make this polyphony clear and a good listener will know how to hear it.
  3. From opera he loved bel canto melodies; he especially appreciated Vincenzo Bellini. Opera is the most popular genre of classical music. If polyphony is a complex aspect, operatic melody is what every music lover does understand: it is what people whistle on the way out. Melodies on the piano must be played so as to give the impression of a singer singing. And the listener can help with his imagination...
  4. Chopin was a nationalist and in exile he expressed his love for his country in his polonaises and mazurkas, which actually contain an influence of the Polish music he heard as a child.

The rhythm in Chopin is completely new. If you think that time in music is exact, like in mathematics, you are squarely mistaken. The pianist must play romantic music like a human, not like a robot. This means that time is not as written: it is shaped by the interpreter, who pushes forward at the beginnings of phrases and stretches their endings, creating a kind of breathing between sentences about which mathematics has no idea. A rhythmically literal rendering would strike anyone as false and quite unbearable.

And that is how playing Chopin can become a national sport. But pianists play Chopin from memory. Above I said one can count the beats when playing music. Well, maybe they counted a bit while learning certain pieces... but not when they finally play live, from memory. In that moment, they make sure they are free. So the notated durations are an instance of Wittgenstein's ladder: you use the ladder to climb the wall, but once you are up there, you throw the ladder away and become free.

In the mazurkas, time is very important. The rhythm of the mazurkas is strange because the second beat often lasts a little longer, creating a feeling of limping. In fact, changes in the rhythm are much more complex than the generalization I just made. And this is not written in the scores. Even Chopin's waltzes need this freedom in the rhythm, which is greater than in other waltzes. If you gave the sheet music to an alien, he wouldn't understand it. Music is part of cultural and human tradition.

How many times did I need numbers to talk about the most basic traits of Chopin's art? Zero. He counted triplets and polyrhythms like any composer, but that's not the important part. In Xenakis and Chopin we perceive two very different worldviews. Technique is very important in art and it always depends on mathematics, but worldview is just as important. Both had sincere, new, important things to say.

The content of music is certainly not mathematics. The languages of music are many, so they cannot be mathematical either. They are systems developed by entire peoples, each in a different culture.

The art of music is the most profound and powerful art that man has devised. It can tell abstract stories and will move you to tears if you pay attention to it. Chopin and Xenakis are two of the most innovative composers. Chopin's entire second sonata talks about death, without a single word, of course. The fifth Polonaise is a nightmare of war with an idyll in the middle to give you a sense of what the war destroyed. But being an abstract story, maybe it's something analogous to war. The Berceuse is hypnotically tender, with flourishes like fireworks made of petals… I could go on. If an alien asked me what it is like to be human, I would give him Adam Harasiewicz's recordings of Chopin.

When you say that "music is mathematics", I wonder if you have had any contact with its true power. You should listen to composers who create whole worlds, like Mahler...

Music is the most striking testimony to the creative freedom of man; in that edifice, math is only one of the bricks.

Why You Must Avoid Snaps, Flatpaks and AppImages

If your Linux distribution uses traditional software packaging (for instance, apt in Debian), then you can do a security audit by examining the source code of each package installed on your system. This will take ages, because there is so much software... But at least you will audit each library only once, because in such a system each library exists only once and is reused, through dynamic linking, by every app that needs it. (Not absolutely always, but overwhelmingly.)

You know what is much, much worse? A package manager in which each packaged app ships with all its dependencies. Packages become much larger, and they repeat the same libraries instead of reusing the ones already installed in your base system.

Today nobody has patience for the thankless job of packaging software anymore, so they seek shortcuts.

Image packages

In this article, I will use the expression "image packages" to refer to this kind of package, encompassing snaps, flatpaks, appimages and docker images.

The programming languages Go and Rust, when they were young, were unable to dynamically link libraries; they only worked by compiling the libraries directly into the binary of the app. This harms security in two ways: 1) when a bugfix in a library is distributed, the app continues to use the buggy version that was statically linked; 2) it is harder to know which version of a library is being used by the app binary.

When Linux distributions first appeared, everyone was aware that security is their main job. If you chose to run Debian, you knew your software had been validated; you could trust it. Debian stable is slow to update software, so you keep running the same version of an app for a long time, which gives you even more reason to trust that version.

This is a good thing, but it bores contemporary audiences, whose consumerism also manifests itself in constantly updating one's desktop environment to see the latest pointless icons, images and animations. In the 2000s Linux grew beyond the groups of people sensible enough to appreciate free software. Many creators of open source software today use Apple computers instead of Linux, allowing Apple to spy on them constantly. These people's priorities are mistaken.

Specific Snap issues

Even in this bad zeitgeist, the community has been protesting Ubuntu's insistence on pushing its Snap packages. Here is a list of the problems with Snap, from least to most important:

  1. Canonical has been criticized because the backend of the package store is proprietary software. (I am not sure anyone else would want to reuse that backend anyway.)
  2. Canonical forces snaps upon all Ubuntu users; Firefox on Ubuntu is currently a snap package.
  3. A snap package is slower to start – and I don't know if this can be helped, since most snap packages should run in a sandbox (which results in better security) and it is a much larger software blob with all its dependencies.
  4. All snap packages are much larger because they contain all their dependencies, wasting disk space.
  5. Canonical decided that the updating of snaps shall be entirely automatic; here they forget who their audience is, for Linux users want to be in control of their computers, not controlled by big tech companies.
  6. As we saw before, a snap package contains all its dependencies, making a security audit a nightmare.

Image packages should be reserved for unpopular software only. For instance, you want to be an early adopter of something that has not been packaged; the developer, even though she does not have time for packaging, made a flatpak; so you reluctantly use it (trusting the developer).

Linux Mint is a distribution based on Ubuntu, but they saw the writing on the wall and decided not to accept snaps. This means they are doing the work Ubuntu is now refusing to do: to properly put Firefox in a .deb package. Further, Mint started work on a "Debian Edition", which is not based on Ubuntu. Mint realizes that for a Linux distro to start using image packages instead of their traditional package manager... is terribly misguided and will have consequences.

And just 3 hours after I wrote "will have consequences"... I caught word of the incident:

The incident

YouTube video: Malware discovered on Ubuntu's Snap store

Basically, some villain uploaded a snap of a crypto app that asked users for their wallet credentials, and someone lost USD 10k that way. In theory the app could also have read the users' home files. Canonical encourages anyone and everyone to make snap packages and upload them to the snap store, with no security audit.

Almost everyone thinks snap packages are sandboxed (cut off from a number of things on the system), but the truth is that sandboxing, while the recommended default, can be turned off by the person making the package. Thus the video gets comments such as this:

@act.13.41: The whole reason we were given for having Snaps available ONLY on the Ubuntu Snap Store was the security of the apps. This should never have happened. PERIOD.

Could this happen in apt?

Could malware invade traditional package management too? Here are 2 great comments about this, from the video above:

by @skynetisreal: Typical case of quality or quantity. Those are choices you have to make as a repository maintainer.

by @knghtbrd: Yes, snap is bad, but this problem could exist on flatpak, it has happened in PPAs, has happened in random mirrors of trusted distributions… It could even happen in the AUR if someone was careful/clever. You need to be damned careful where you get your software.

Debian's primary repository has never had malware in it – but a random mirror was hacked and did get some malware in it once. That's why every Debian package is cryptographically signed when it's uploaded. Once uploaded, the sig is checked, logs are kept which key signed it, the package is hashed and the hash is stored in the package list. The list and release file are then signed to prevent tampering with repos. To put malware into the Debian package pool requires getting someone's GPG key or getting a malicious dev through Debian's process which includes checking someone's real name on government ID.

So Debian isn't proof against this kind of thing… but it's taken reasonable steps. If you go and add 3rd party repos, you're weakening that security. So like I said: People need to be careful where they get their software from.
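The integrity chain described in the comment above (hash each package, record the hashes in the package list, sign the list) can be sketched in a few lines. This is a simplification with invented names; real repositories use GPG/OpenPGP signatures, for which an HMAC with a maintainer key stands in here:

```python
import hashlib
import hmac

# Sketch of a signed package list. Real repos use GPG; the HMAC below
# merely stands in for a signature so the example is self-contained.
MAINTAINER_KEY = b"not-a-real-key"

def sign(data: bytes) -> str:
    return hmac.new(MAINTAINER_KEY, data, hashlib.sha256).hexdigest()

def make_release(packages: dict[str, bytes]) -> dict:
    # 1. Hash every package and record the hashes in the package list.
    package_list = {name: hashlib.sha256(blob).hexdigest()
                    for name, blob in packages.items()}
    listing = repr(sorted(package_list.items())).encode()
    # 2. Sign the list itself, so a mirror cannot swap entries.
    return {"hashes": package_list, "signature": sign(listing)}

def verify(release: dict, name: str, blob: bytes) -> bool:
    listing = repr(sorted(release["hashes"].items())).encode()
    if not hmac.compare_digest(release["signature"], sign(listing)):
        return False  # the package list itself was tampered with
    return release["hashes"][name] == hashlib.sha256(blob).hexdigest()
```

The point of the design is that a compromised mirror can serve a malicious package, but it cannot make that package verify: changing a package changes its hash, and changing the hash list breaks the signature.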

Putting out the fire

Canonical's response to the incident (covered in the video) is that crypto apps (and only crypto apps) now temporarily shall go through an audit. This does nothing about all the other apps that can read your home files.

Further, a comment by @alexhajnal107 reads: 06:04 The new system as described is still flawed. With their new system one could upload a bootleg but unaltered version of an app. Since it isn't malicious it may well pass review. Wait until it gets a bunch of installs then upload a malicious update. Per their stated policy this will not undergo review.

However, Ubuntu's policy of forcing updated versions on users is actually helping them remove the malware app from people's computers. They simply created a newer version of the affected packages with no software inside. AFAICT there is otherwise no mechanism to even let people know what's going on.

At 9 minutes into the video we learn that a very similar incident had already happened in May 2018. That is 5 years ago – in all that time, practices seem not to have improved at all. Why would they improve this time around?

Lessons for us

This incident is the final nail in the coffin of Ubuntu's Snap project, as far as I am concerned. Whoever uses image packages should now be trying to understand the risks and seeking alternatives. But the general public today, for some reason, finds it difficult to learn certain lessons.

Trust in the software packages is the most important service a Linux distro provides. By forgetting this, Ubuntu deserves its ongoing loss of users, and there are many alternatives out there.

When I use Linux, call me crazy, but... I want multiple people to have okayed the software I am running. This is probably why the thankless job of packaging software deserves even more thanks than we knew.

This happened to Snap, but I don't know whether FlatHub has different policies.

And if you use proprietary software, you ultimately have no idea what it is doing on your machine, since nobody has access to the source code and the binaries are inscrutable. No big tech company currently deserves to be trusted: they are all spying on us, collecting facts about us, even building shadow profiles of us. If I sound paranoid to you, sorry; the truth is you are just ill-informed.

Nix as a good complement

The main problem with image packages is security; it is much easier to audit and trust a single package manager in which dependencies are reused. When dependencies are bundled into an image, you have to audit each image separately, so the work is not just tripled or quadrupled, no -- it is multiplied by the number of images you use, times the number of versions over time.

This is one of the reasons I use nix as a second package manager to complement the main one in a distribution. I can find all the software I need in either apt or nix. That's better than using image packages.

NixOS is a Linux distro based on the nix package manager, which has great features for security audits. nix generates an ID (really a hash) for each package, based on its source code among other things. If the binary is no longer available for some reason, nix knows the git commit and the address from which to obtain the source code, and immediately proceeds to download the source code, compile it on your machine and produce the bit-for-bit exact same package whose binary was lost in time. I am still learning about nix but I would bet it's the best thing for security.
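The content-addressing idea behind nix can be approximated in a few lines. This is a deliberate simplification of the real derivation-hash scheme (the function and field names below are invented for illustration): the ID of a package is a hash over its source and the IDs of everything it depends on, so changing any input anywhere in the chain changes the ID.

```python
import hashlib

def store_id(name: str, version: str, source: bytes, dep_ids: list[str]) -> str:
    """Deterministic package ID from source plus the IDs of all inputs.
    A simplified sketch of how nix derives store paths; not the real scheme."""
    h = hashlib.sha256()
    for part in [name.encode(), version.encode(), source,
                 *(d.encode() for d in sorted(dep_ids))]:
        h.update(part)
        h.update(b"\0")  # separator so fields cannot bleed into each other
    return h.hexdigest()[:32] + "-" + name + "-" + version

libc = store_id("glibc", "2.38", b"...libc source...", [])
app = store_id("hello", "1.0", b"...hello source...", [libc])
# Change any input (the source, or any dependency's ID) and the
# resulting ID changes too -- which is what makes audits tractable.
```

Because the ID pins down every input, two machines that build from the same inputs can check they got the same package, and a binary lost in time can be rebuilt and recognized.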

All roads lead to open platforms

Preface: a great programmer

There are a few programmers I really admire, but one stands out: Chris McDonough, creator of Pyramid, the best framework for developing web apps in Python.

I don't know him except for 1 or 2 questions he personally answered for me in the Pyramid IRC channel. But through the example of Pyramid I learned to code better ― it's as if I acquired a kind of good taste by reading Pyramid's code and seeing how things were separated and related. You can acquire this knowledge from books by Martin Fowler and others, but seeing the example in Pyramid was an important piece in actually learning it.

McDonough is an important creator of free software in Python. He also created Colander which even today is one of the 3 best schema validation libraries in Python for sure.

My main point

I never knew this, but my history as a programmer mirrors McDonough's with several years of delay. I only became aware of it today, when I found this video of his, which contains 3 or 4 anecdotes from his career, all of which prove a single point.

The mirror is especially strong in this: over the years, we both accumulated reasons to hate Microsoft, then Apple, then Google, and realized that the open source, freedom-respecting alternative is always better, even when it is technically worse at the moment.

As he says, all roads lead to open platforms. So leave.

I won't repeat his interesting tales here; please do watch the video. Instead, I will sum up my own history with my own anecdotes.

The choice: C# or Java?

Microsoft first tried to kill Java through its famous "embrace and extend" technique: it made a Microsoft Java. That had already gained some market share when Microsoft was sued by Sun Microsystems for calling something "Java" that had features absent from every other thing called "Java", thus breaking Java's fundamental promise: "write once, run anywhere". Microsoft lost the case and paid Sun an enormous settlement. This was probably the first time "embrace and extend" failed.

In the early 2000s I was abandoning my first profession (lawyer) and came back to programming in Visual Basic 6. Now I decided it was time for me to learn object-oriented programming, so I had to choose a new language. The main 2 choices were Java and the new C#, which was Microsoft's second attack on the Java platform, when the first didn't succeed. The choice was not clear at all, because:

  • Java was well-established and open source, which I already knew enough to like.
  • C# was a copy of Java, except it had a couple of better language features, lacked Java's misfeatures, and everyone was saying the standard library was very well organized.
  • Python interested me, but there was no job market for it in Brazil.

The dilemma was: do I choose what is technically better, or what is open? Here I made the wrong choice (like McDonough) and learned from it.

I chose C#, and the reason was, the Mono project existed ― an open source implementation of Microsoft's Dot Net platform. I wanted to have it all: the better language, in an open source environment.

My Microsoft phase

So I got jobs developing in C#, and I was good at it, because I had been enthusiastic when learning it. However, I was unhappy, because the community around C# was Microsoft-centric, seldom used any open source library, and ultimately I was not allowed to use Mono even once. At home I was running Linux versions whose names you wouldn't remember today, and doing my own things with Mono. But at work, no chance. There were many people then who didn't understand the necessity of open source, and I think they only changed years later, when Microsoft finally told them it was OK to do open source.

Conversely, the open source community already had the allergy to Microsoft that I was about to develop, so it feared the Mono project. People were afraid Microsoft would bring out its lawyers (with or without a fair case) to attack the technically wonderful Mono project. So it was never accepted.

Now I wish to tell the most colorful episode from this time. But I need to set the stage.

Bear with me in this section

Around 2005 I had already been coding for a while, but I still didn't know a couple of essential things. I had already made several websites before I learned how HTTP worked. This was definitely Microsoft's fault: Visual Studio endeavored to hide HTTP and let the developer think they were building a desktop application. You could develop quite a few things without knowing how the web worked, and when you suddenly, finally had to know... you were in a bad place.

McDonough feels shame as he remembers the day he asked someone what a transaction is. I remember that moment in my life, too! You see, it was probably not my fault, or his: we were in a Microsoft ecosystem, and up to then Microsoft software had never been transactional, as far as I know. This was a big part of why Microsoft software was so unreliable. Microsoft did care about transactions, but only of another kind.

I learned the concept of a transaction from Subversion / SVN.

Subversion was the best thing before git. It was a re-implementation of CVS (the version control system that was famous before) with a few features added. Subversion was a transactional system: when the programmer sent her altered files to the Subversion repository, the operation either succeeded or failed as a whole ― there was no way for a couple of files to be accepted by Subversion but for the other files to fail. This is good because it ensures each version in the repository is always consistent.
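The all-or-nothing property can be sketched with a toy in-memory repository (my own illustration, nothing like SVN's actual implementation): changes are staged on a copy, and the new revision is published only if every file is accepted.

```python
class TinyRepo:
    """Toy versioned store: a commit either lands whole or not at all."""

    def __init__(self):
        self.revisions = [{}]  # revision 0 is empty

    def commit(self, changes: dict[str, str]) -> int:
        # Stage all changes on a copy of the latest revision...
        staged = dict(self.revisions[-1])
        for path, content in changes.items():
            if content is None:
                raise ValueError(f"refusing broken change for {path}")
            staged[path] = content
        # ...and only publish if every file was accepted. The SourceSafe
        # failure mode -- half a commit landing -- cannot happen here.
        self.revisions.append(staged)
        return len(self.revisions) - 1

repo = TinyRepo()
repo.commit({"a.cs": "class A {}", "b.cs": "class B {}"})
try:
    repo.commit({"c.cs": "class C {}", "d.cs": None})  # one bad file
except ValueError:
    pass
# The failed commit left no trace: revision 1 is still the latest,
# and "c.cs" never appeared in the repository.
```

The staged copy is simply discarded on failure, so every published revision is consistent, which is exactly the guarantee Subversion gave and SourceSafe didn't.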

The tale of SourceSafe

You know what was NOT transactional? Microsoft Visual SourceSafe, the versioning system included with Visual Studio circa 2005.

I worked at Unibanco coding in C# in a team of about 20 people. And everyone struggled with SourceSafe. Every afternoon someone would finish a feature, send the files to SourceSafe, half of the files would be written, then some network error occurred, and the other half didn't go through. Now further communication between client and server was dodgy, and the latest version in the central repository was inconsistent and probably wouldn't even compile. Imagine dealing with this problem every day!

I got curious about this problem ― which nobody at the bank seemed to be really trying to solve. I found an article which said that using SourceSafe is equivalent to printing out your code, deleting the code from your hard drive, burning all your floppy disks, and running the printout through a paper shredder. The article said that once every 6 months, SourceSafe will die and stop working altogether; the system administrator will run the MS utility to recover it (yes, SourceSafe had its own chkdsk utility!!!); the recovery will fail; and a new SourceSafe repository will be set up from someone's local copy, losing all history.

Well, you know what happened at Unibanco? I had been working there for 5 months or so, and one day I came to work and was told the entire team could not work, we should go out for a coffee or something, because there was a problem with SourceSafe and they were running a utility to recover it. The team got to know each other better that day. In the afternoon, the utility had failed and they were setting up a new SourceSafe repository from someone's local copy of the system, which meant all previous history was forever lost.

At this point I summarized the article in Portuguese, sent the link to a couple of bosses, and started showing my colleagues information about Subversion. But they were not convinced! They had some issues:

  • "Subversion" is a name that banks don't like.
  • Everyone was used to "checking out" the files they needed to change. In SourceSafe, this meant temporarily locking the files so nobody else in the team could edit them. If you needed to alter a file and someone else was already working on it, you had to wait until they were finished. This was seen as something good! The branch, diff, merge way of Subversion, which was later made better in git with better merge algorithms, was hard for these people to understand, and they feared it.
  • Subversion was open source, therefore there was no paid support. Who could they call if anything bad happened? (Contrast this with all the support MS gave to SourceSafe!)

On this last point, I remember a sentence said by the project manager: "it is even inappropriate for a large, rich company such as Unibanco to use open source". I couldn't believe what I was hearing ― of course it was exactly the opposite: it was criminal for such institutions NOT to foster and use open source. His mindset was common then ― the idea that open source was killing small software businesses (which never really happened). These people really liked reinventing wheels, a problem that only free software solves.

I insisted on Subversion some more, since the new SourceSafe repository malfunctioned daily just like the old one. Finally a meeting about version control was held, but I wasn't invited; I think only one or two developers from the team were present.

And now the punch line.

The colleague told us how the meeting went, more or less like this:

"There is an improvement being made, but it's not what you anticipate. First off, Subversion is out." (I cannot remember exactly why, probably all the "reasons" above.) I was disappointed.

"However", he continued, "the bank is licensing from IBM their version control system, called Rational ClearCase". I thought "OK, at least we are trying something new, the IBM thing cannot be worse than SourceSafe".

"Just one more thing. Because the license is expensive, ClearCase won't be licensed for every team member. In fact, the team will continue to use SourceSafe, and at the end of each day, one employee of the bank will put the latest version in ClearCase."

"You jest", we replied. "You have got to be joking."

I believe this is how most banks work even today: teams are not empowered to solve their own problems. I never took that job seriously after this. I immediately started looking for another job, and it was time to learn that Python language that had such beautiful and elegant syntax, was open source, had nothing to do with Microsoft, and finally had some jobs in the Brazilian market.

Best decision ever. Using Python I never missed anything from C#.

Problems with Apple

There are plenty of other reasons to hate Microsoft, if you read up on the company and its commercial crimes. Now let's talk about Apple.

Apple's presence in our lives is characterized by the following practices:

  1. slave labor in China
  2. they make everything incompatible on purpose
  3. they love DRM
  4. they tried to push video DRM onto web standards
  5. they tried to push their own patented codecs onto web standards
  6. Safari is currently the most incompatible browser (it's really the new Internet Explorer), costing everyone loads of development time
  7. they have an evil scheme to make every developer buy a Mac to develop any website, which is, Safari will not give you any debugging information unless you are using Apple's development tools on a Mac
  8. they have an evil scheme to prevent real competition to Safari in their platforms, which consists of a rule that says, a browser cannot be offered on the app store unless it uses for rendering a Safari component
  9. as a result all browsers on Apple devices are just skins over Safari
  10. Apple also invented the phone without a headphone jack
  11. Apple's headphones are technically bad and expensive
  12. Apple doesn't let people upgrade or repair broken phones or computers; they go out of their way to make everything impossible, difficult or expensive so you can't fix your things ― which gave rise to the Right to Repair movement.

The new villain: Google

Google, for a long time, was the only tech giant that took "don't be evil" seriously, but that's clearly history. Now you are the product; you have no privacy; they know exactly where you are in the world and you cannot prevent it. They go out of their way not to solve your problems but to kill ad blocking and make you pay, while the advertisements are more numerous than ever. Google projects keep being killed, leaving users without options. They have made the development of new, small web browsers impossible through the multiplication of extremely complex web standards (such as Web Components, which has a terrible API). And finally, most importantly, they are attacking the open web itself by pushing DRM for websites onto browsers. They no longer think content should be open (which was their opinion when they were a search engine) ― now that they have all the content, they think it should be closed, very closed.

The fracture of the web is coming, unfortunately. Greed doesn't listen to moral arguments. Mozilla is the only one that still deserves some respect.

McDonough's video is from 7 months ago; Google's betrayal (creating DRM for websites), which is the last straw for me, had not happened yet, but he was already saying, "Google is definitely next". Now I am looking at options to de-google my life, including email, cell phones, search engines and video platforms.

Nextcloud is the key to replacing Google utilities with something open source and self-hosted.

Conclusion

Like McDonough, I wonder how it is possible that people stay with these companies, continue to fortify them, just for convenience, instead of supporting an alternative earlier. The price is high, and will be even higher.

McDonough is absolutely right: open platforms always win out in the end. Be the change and leave.

I have been following NixOS at a certain distance. I see McDonough now has more than 50 videos about NixOS. Feels like a prophecy of what I am doing next...