The new employee
…So you are going to manage the creation of some software. Then you must understand that you cannot simply apply the techniques used to manage other domains. Software creation is strange, full of counterintuitive truths. You will fail unless you learn about these weird realities.
From other fields came the notion that deadlines can simply be set. But one of the most striking and bizarre facts about software creation is this:
It is impossible to anticipate when the thing will be done with any reasonable amount of certainty.
This was gradually discovered over the history of software development. It is impossible to estimate development effort, because in programming, difficulty is accidental and unpredictable. One does not know where the trouble is going to be.
If you cook pierogi once, you have some idea of how long it will take you next time. But new software is like writing a new book or researching new tech. Treating it like pierogi is so wrong, it is offensive. Only in very few cases is software so repetitive, and in those cases, it is not very valuable. Valuable software does something new enough that you can't estimate it.
In such cases, I don't know how long it's going to take, because I haven't written it yet.
Since the 70s, developers have marveled at how far off their estimates were. So they concluded: "We should improve our estimation skills. Learn to estimate better through deeper thinking." Research on this was done in the 80s, and better techniques came out of it. Then they saw the results were just as far off. It wasn't until the turn of the century that a few enlightened minds decided estimation was a waste of time. The output of estimation was garbage, so they wouldn't do it anymore. They would not lie to management.
This idea only began to spread about 10 years ago. That's the #NoEstimates movement.
The last time I estimated a story with Raphael, we spent 3 hours and came to the same conclusion, so we felt good about it, although we were extremely tired. But then a better alternative came up and plans changed, so the 6 man-hours were thrown away. The estimate would need to be redone, and in all that time, nothing useful was being done for the user/client.
The 6 man-hours were pure waste: doing the estimation hurt the speed of the team. The estimation was probably incorrect even if we felt good about it, and even if it weren't so, it had no value once we had a better idea and changed our plans. If you want your team to be fast in delivering working software, requiring estimates is a great way to ensure they will be slow.
So, you see: in software, estimation doesn't work. Its output is garbage, and management decisions, business decisions, must not be based on such poor-quality information.
Developers always knew estimates were rubbish, but managers always insisted they needed them. Managers won, but nothing could be done about the fact that, in software, estimates are garbage.
When we estimate, we sort of write the software in our heads. We start scribbling that, to finish this feature, we need to write these new database tables, this database migration, these business rules, these API methods, use this or that library, add this and that to the GUI etc.
But that's not how it actually works. When we sit down to do the actual work, we are faced with the entire iceberg, not just the tip we could see.
Initial estimates often assume a "greenfield" scenario, but in reality, we must integrate with legacy parts of the system, poorly documented code, or unexpected architectural bottlenecks.
We make bets. After some software is written, we start to realize whether our bets were off. Only through building can we understand the true problem and discover better solutions. We are now interacting with all sorts of unanticipated forces:
In short, writing actual software is a struggle against realities that were completely unknown at the time of estimation.
Implementation is discovery, not execution. This is why Agile emphasizes iterative progress, continuous feedback, and adapting to change.
When you use estimates, you set yourself up for this eternally repeating conversation:
"The feature is not ready??? You said it would only take a week."
"That's not what I said. I gave you an ESTIMATE."
"That's bullshit!!!"
"You got that right!!!"
"How can I meet my goals like this???"
"What do I know about YOUR job??? My plate is already full of this alphabet soup!"
Developers give estimates just so managers will stop pestering them and go away. That is an irresponsible thing to do. The only responsible thing is to deny estimation.
However, businessmen still need information to be able to make business decisions. There's an event later this year that is an important business opportunity, and we need to be able to predict what can be ready by then.
That is why people have been studying how to make management decisions in the absence of estimates. Based on other criteria.
I do not expect you to be convinced. I was only convinced after weeks of consideration. But this 37-minute presentation by Allen Holub is what finally convinced me.
After 15 minutes, he starts to show how to manage a software project in the absence of estimates, including how to count stories to make graphs and calculate timetables. After 31 minutes he recommends Story Mapping to organize the backlog and prioritize features for the next release.
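The story counting Holub demonstrates boils down to projecting from measured throughput rather than guessed effort. Here is a minimal sketch of that idea; the variable names and the numbers are made up for illustration, not taken from Holub:

```javascript
// Hypothetical data: stories actually finished in each of the past four weeks.
const storiesPerWeek = [4, 6, 5, 5];
const storiesRemaining = 30;

// Average throughput so far: total finished divided by weeks observed.
const throughput =
  storiesPerWeek.reduce((sum, n) => sum + n, 0) / storiesPerWeek.length;

// Projection: at the current pace, how many weeks until the backlog is done?
const weeksLeft = Math.ceil(storiesRemaining / throughput);

console.log(throughput); // 5
console.log(weeksLeft);  // 6
```

Note that nothing here asks a developer "how long will it take?"; the forecast comes from counting finished work, and it improves automatically as more weeks of data accumulate.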
Other notable authors about NoEstimates are Woody Zuill and Vasco Duarte.
One of the most interesting and lasting books about software development was written in 1975. That's "The Mythical Man-Month" by Fred Brooks.
I will base this post on famous quotes by him, mainly from that book. Always in italics:
"The Waterfall Model is wrong and harmful; we must outgrow it."
Yep, that's what you learned in the first post in this series. Brooks already knew it.
"Adding manpower to a late software project makes it later."
The above sentence is known as Brooks' Law. It's probably the most famous quote from his book. It sums up one of the most interesting and counterintuitive things about software.
Suppose you are managing a team and the project is late. To fix the situation, you hire one or two more people. You do this because more people will certainly get more work done.
One month later, maybe even two months later, not only has the situation not improved, it has gotten even worse. You wonder why.
The most important reason is that communication overhead increases when someone joins the team. All sorts of information must be asked by the newcomers and answered by the old team members.
The communication need is so intense that weeks can pass before a new team member becomes properly productive, and while providing explanations, the experienced team members also have their productivity impacted.
Do you want to know how to make this even worse? Try to shield the experienced team members from communication to some degree, or even entirely. Now the newcomers have no clue what they are doing and have no way to learn valuable things that were already part of the company culture. This is because a great amount of the knowledge in a team is tacit, not explicit. Only communication makes it explicit.
And that is no way to treat a new team member, of course. It conveys that experienced team members are too busy or too valuable to teach anything to the noobs. This leads to conflicts and hurt feelings and hurts team formation.
There is no way to avoid the need for communication, because, in a way, software development is knowledge production. Every new person increases the communication overhead dramatically, leading to miscommunication, coordination delays, and diminishing returns.
As a rule of thumb, the larger the development team, the more time needs to be spent in communication, even after everyone is onboard and productive. As Brooks says:
"The number of communication paths increases with the square of the number of people involved."
This is a good argument to keep software teams small. And this is why in Agile no team is larger than 12.
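Brooks' square law follows from simple counting: with n people, each pair is a potential communication path, giving n(n-1)/2 paths. A quick sketch of the arithmetic:

```javascript
// With n people, each pair is a potential communication path: n(n-1)/2.
function communicationPaths(n) {
  return (n * (n - 1)) / 2;
}

console.log(communicationPaths(3));  // 3
console.log(communicationPaths(6));  // 15
console.log(communicationPaths(12)); // 66
```

Doubling the team from 6 to 12 does not double the paths; it more than quadruples them, which is exactly why small teams spend so much less time coordinating.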
"Nine pregnant women cannot produce a baby in one month."
This is one of the funniest. Brooks explains:
"When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule. The bearing of a child takes nine months, no matter how many women are assigned."
This is about activities and decisions that depend on the completion of others. A manager needs to understand how such dependencies happen in software development.
"Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them."
This means, only when bits of work do not depend on each other, can they be done at the same time.
"The hardest single part of building a software system is deciding precisely what to build. The most important function that software builders do for their clients is the iterative extraction and refinement of the product requirements. For the truth is, the clients do not know what they want. They usually do not know what questions must be answered, and they have almost never thought of the problem in the detail that must be specified."
The entire development team does business analysis: they try to understand the steps through which the work needs to be done. That quote is amazing, especially near the end. It is often said that clients only know what they want after they see what they don't want. Meaning the team presents a design first, and then the client can see flaws in the design and correct them. But the client is unable to describe what is desired before seeing the design.
The last sentence... "in the detail that must be specified"... is explained in another terrific quote:
"Design work doesn't just satisfy requirements, it elicits them."
Meaning, the requirements exist, but are not known by anyone, until a design suddenly makes them obvious.
Also, the design must go through iterations. Each iteration allows the team and the client to discover new weaknesses and needs:
"Even the best planning is not so omniscient as to get it right the first time."
Every now and then, no argument will convince an egotistical client with sufficiently bad taste:
"Einstein repeatedly argued that there must be simplified explanations of nature, because God is not capricious or arbitrary. No such faith comforts the software engineer."
Software designers are strong in a certain kind of abstract thought, which allows them to see shapes of solutions that apply to many different problems.
"I am more convinced than ever. Conceptual integrity is central to product quality."
I found the same thought expressed in another quote:
"Conceptual integrity is the most important consideration in system design."
Software follows and materializes rules. You cannot establish rules unless you are thinking very clearly. In software as in law, good rules are based on sane concepts; bad rules use bad criteria. Confused concepts will always hurt decisions made by or with the software.
The word "requirements" is still used, but now it's wrong. None of those are required. They are just ideas, and their priority is always shifting as the business learns about the market, about itself, about current needs etc. Many of those ideas necessarily will be discarded, which wouldn't happen if they were requirements.
An Agile team treats potential features as ideas, not requirements. This clarification is repeatedly made by Jeff Patton, inventor of the Agile technique of Story Mapping.
Some people have a notion that they are going to manage bugs. They think they can list the known bugs and decide which ones to fix. They want to save developer time spent on bugs!!! That's outlandish.
A bug usually is bad behavior whose origin is unknown. If you are seeing 2 buggy behaviors, they might have the same origin in the code (in other words, they might be the same bug).
If you let bugs live, they can get compounded. A bug interacts with another bug, creating a third bug. At this point, no sane person can understand what is going on with the system.
If you try to manage bugs, basically you don't understand what the hell is going on anymore. It is hard enough to reason about working software; who can reason about bugs?
The notion of managing bugs is as absurd as the idea of managing mysteries.
The only sane thing to do about bugs is to squash them as soon as they are seen. Now maybe you can reason about your system.
If I wished to reason about insanity, I would have studied Psychology, not Computer Science.
Programming is an activity that requires a very high level of concentration. The brain works very hard and gets tired.
When a programmer is overworked, his productivity goes down, he feels tired, his concentration is feeble and his decisions are worse. The quality of the software goes down and sometimes the programmer can produce more bugs than features, effectively hurting the project.
A developer must work 40 hours per week, tops. No more.
The business wants software produced as fast as possible, so it puts pressure on developers.
Developers have only one way to deliver faster: drop the quality of the writing.
So the software is now more buggy and less maintainable.
Now the business pays the price of the bugs, which is enormous (annoyed customers, no satisfaction, no word of mouth etc.). Fixing bugs also gets exponentially costlier as time passes.
You can sort of understand this if you think of an author writing a novel. You, as the editor, tell the author to hurry up. So the author forgets to make certain edits. Now the character who got killed in chapter 6 suddenly reappears in chapter 18 without any explanation.
In software this is much worse than in a novel. To compare, you'd have to turn the novel into a virtual reality. Now you have a Schrödinger's cat that is alive and dead at the same time. What could be worse than that in software?
But that is not the only bad consequence. Badly written software is harder to change. This means future tasks take much longer to accomplish. So you thought you were saving some time, and maybe you did if lucky, but you've hurt the entire future of the project.
Robert C. Martin expresses this concept best:
"The only way to go fast is to go well."
Pressure your developers and suffer the consequences.
In Agile, the quality of the code is not negotiable. It takes as long as it takes. Bad managers need to get a clue.
No novel is written right the first time. Every novel needs a few rewrites to become really good. Software is similar. Some parts of the software need to be reformulated, because it is impossible to get them right the first time.
Even if you don't have bugs, you have badly written parts. You are paying for these parts each time a developer needs to read them or change them. So they are worth fixing even if the fix does not change the external behavior of the code.
In a novel, this would be the same as telling the same sequence of events, but telling it in a better way. Maybe the order of the chapters changes. Maybe it's the vocabulary and the wording. The story itself doesn't change, but the novel becomes much better. Prevent the author from doing this rewrite, and you are hurting sales. Also worth remembering: sales aren't the only thing that matters.
Ward Cunningham is an Agile developer who is one of the creators of XP (Extreme Programming). He also invented the wiki – the notion of a collectively edited website in which creating and linking pages is very easy to do.
In this video, Cunningham talks about how he coined the famous metaphor of "technical debt" to explain to non-technical people the necessity of refactoring code.
Basically, rushing the software out the door (badly written, hard to understand, hard to change) is initially good for business, but it is like taking a loan. The debt must be repaid in the future, by spending time to refactor the software, so it becomes again easy to change.
Woody Zuill expresses the same thing in broader terms. He says "teams and managers should spend the time to make the work easy to do".
Refactoring, and fixing technical debt, is about making future work easy to do.
Some teams have programmers "own" parts of the codebase. Joe is the one who understands this area. Sue is the one who can fix problems in this other area. The devs are experts in parts of the code.
That's the worst way to distribute expertise. It results in low Truck Numbers.
What is the smallest number of people in your project that, if hit by a truck, would put the project in trouble? That's your Truck Number.
If only Derek can manage the positronic code, then your Truck Number is one, and that is too low.
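The metric can even be made mechanical. Suppose, hypothetically, you record which people can work on each area of the code; the truck number is then the size of the smallest expert group, since losing that group blocks the project. The expertise map and the truckNumber helper below are illustrative, not a real tool (the names Joe, Sue and Derek are the ones from this post):

```javascript
// Hypothetical expertise map: area of the codebase -> people who know it.
const expertise = {
  billing: ["Joe", "Sue"],
  search: ["Sue", "Joe"],
  positronic: ["Derek"],
};

// The project is in trouble as soon as any area loses all of its experts,
// so the truck number is the size of the smallest expert group.
function truckNumber(map) {
  return Math.min(...Object.values(map).map((people) => people.length));
}

console.log(truckNumber(expertise)); // 1, because only Derek knows the positronic code
```

Collective code ownership raises every group's size, and with it the truck number.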
The solution is obvious. Nobody can "own" a part of the code. The entire codebase is collective. Everyone works on everything.
This forces the team to share knowledge, resulting in a better team.
We have just seen how important it is for the development team to be constantly sharing knowledge.
That is one reason to program in pairs.
If an experienced dev works for a few hours with a novice, teaching is automatically taking place.
There are more reasons, too.
This is just one more strange thing for managers to realize. Managers who don't know about pair programming think it is waste: two people doing the job of one. Nothing could be further from the truth.
Pair programming is one of the prescriptions of Extreme Programming, an Agile methodology.
After working with a software development team for a while, a manager begins to feel how easy or difficult tasks are going to be, even if she never does any task herself.
Sooner or later she catches herself saying in a meeting, "I assume that's only going to take five minutes to do". That's natural, but please, blush when you say that.
First of all, nobody knows how long it's going to take. Not even the developer. Because he hasn't done it yet. If it's more than changing static copy on screen, it could vary.
Second, it's arrogant, defiant and disrespectful. Writing software is not frying pancakes. More about this in the next post, about NoEstimates.
If you've read this far, congratulations: you have just remembered universal truths about software development that have been known since at least 1975. But they are frequently forgotten, which is a shame for everyone involved.
In recent posts I have shown that the web has only crummy technologies, but at the same time, Flutter deployed on the web is not yet free of its own crumminess, since it runs slower there than on any other platform.
In this post I shall convince you, beyond any doubt, that to develop frontends, you should use the Dart language rather than TypeScript. We'll examine:
JavaScript was created in 10 days and now we have to tolerate it forever!? WAT. It is the only language I know with so many evil parts, other than INTERCAL.
The best book about it is called "JavaScript – the good parts". Everyone has read that book. However, its author now says it's time to stop using the language.
The design problems in the JavaScript language are too numerous to list here, but here are some of the most egregious:
- The this keyword: this can change depending on the context in which a function is called, leading to unexpected behavior and countless debugging sessions for thousands of developers. One must use .bind(), call(), or apply() to explicitly set this, which is cumbersome and unheard of in any other major programming language.
- Arrow functions capture the enclosing this context, which can be both a benefit and a source of confusion when switching between arrow functions and regular functions. They also lack their own arguments object, which can be limiting in certain scenarios. Their lexical this, while it solves some problems, can also be confusing when developers expect a traditional function's this behavior.
- Implicit type coercion: '' + 1 resulting in '1', or true + false resulting in 1.
- == vs. ===: the loose equality operator == performs type coercion, which can lead to unexpected results, whereas === does not, leading to a general preference for the strict equality operator but also to confusion among new developers.
- Classes bring their own gotchas around this and super, which become a list for developers to memorize. In a subclass constructor, you must call super() before you can use this. Forgetting to do so results in a reference error. But worse, this means it is impossible to completely override a constructor in JavaScript: super() must happen before accessing this, which can complicate constructor logic and initialization sequences, or even make one's idea impossible without a redesign.
- The export default feature, which I don't see in any other language, does not add value per se, and probably only exists to emulate the previous two module systems.

JavaScript is a language notorious for its inconsistencies and flaws.
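The coercion gotchas above are easy to verify in any JavaScript console:

```javascript
console.log('' + 1);       // "1": the number is coerced to a string
console.log(true + false); // 1: the booleans are coerced to numbers
console.log(0 == '');      // true: loose equality coerces both sides
console.log(0 === '');     // false: strict equality does not coerce
```

Four tiny expressions, four different coercion rules to remember.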
Despite possessing modern features, it suffers from fundamental issues that remain unresolved.
The behavior of this, function, arrow functions, super, and the, so to speak, excessively dynamic type system is riddled with exceptions and unexpected outcomes.
Learning JavaScript often feels like memorizing a long list of workarounds.
As a result, JavaScript is a language that makes kittens cry every day. It is legitimately a language to be hated, if we are being reasonable.
JavaScript is a hypocrite, like a person who pays for expensive, albino-white veneers on their front teeth but leaves their back teeth to rot, full of caries. JavaScript is the guy with a sports car who in truth is hurtful to women.
People who decide to use JavaScript outside of the browser are backwards: the browser should acquire a good language, instead of the worst language contaminating the entirety of computing.
The real reason every other language compiles to JS, and the real reason WASM exists, is not a lack of cool new features in JS.
The real reason is that in JS, this is broken, function is broken, arrow functions are broken, super() is broken, the type system is broken...
To learn JS is to learn a pointless list of exceptions to expected behavior.
Consequently, many developers choose to use languages that compile to JavaScript or explore alternatives like WebAssembly. This trend highlights a critical issue: JavaScript's fundamental flaws hinder development efficiency and cost lots of time and money.
As an example, here is a lesson for today:
- Arrow functions have no this binding of their own; if they are part of an object, they cannot talk to it.
- Arrow functions have no arguments object; normal functions do.
- A normal function declaration is hoisted; a const is not.

Arrow functions were invented to be anonymous, to make small event handlers and callbacks, but people are abusing them, naming them with const. This is just one confusing instance where JS has two ways of doing the same thing, both with advantages and disadvantages depending on what you are doing. How much of this will you remember in 30 days?
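To see today's lesson in action, here is a minimal sketch contrasting a regular method with an arrow function stored on the same object:

```javascript
const obj = {
  name: "widget",

  // Regular method: `this` is bound at call time to obj.
  regular() {
    return this.name;
  },

  // Arrow function: `this` is captured lexically from the enclosing
  // scope, NOT from obj, so it cannot see obj's `name`.
  arrow: () => {
    return this && this.name;
  },
};

console.log(obj.regular()); // "widget"
console.log(obj.arrow());   // undefined
```

Two properties sitting side by side on the same object, written almost identically, with completely different behavior.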
Why not pick a good language instead?
In short, what needs to change in JavaScript is its WAT. And while that doesn't happen, hordes of young programmers are learning a horrible programming language first. Getting used to the most inelegant solutions. Honestly, JavaScript has become the most popular programming language, and also the worst popular programming language. The only things that are worse are those that are designed to be worse: esoteric programming languages like INTERCAL and Whitespace.
But the worst part is, they seem to have given up fixing JavaScript. They have concluded it's impossible, due to the requirement of eternal backwards compatibility. That is the wrong conclusion, and it shall be revised real soon now, as web development has clearly become unsustainable.
Most JavaScript developers are the proverbial boiled frog. They have been studying this cursed language for years and years, why worry now? "I am productive in JavaScript in spite of its shortcomings." Their attitude is that of the ostrich: "learn the good parts", shun the bad parts, and develop code today.
They will add, that all the alternatives to JavaScript are also doomed, for other reasons. Maybe they are harder to debug in the browser. Their performance is necessarily worse than JavaScript, since they compile to JavaScript. And so on.
In short, it's the famous Sunk Cost Fallacy. JavaScript is evidently not beneficial, but one sticks with it due to past investments.
Where Python 3 focused on removing all the warts from Python 2 and succeeded, people imagine this to be impossible in JS, since they believe there is an eternal backwards compatibility requirement. I predict this requirement will drop very soon, as the accumulation of horrible web standards becomes a terrible burden.
Yet, a successful precedent exists: ActionScript 3 introduced class-based inheritance, separate from, and without disrupting, the existing prototype-based system. This demonstrates that it's feasible to evolve a language without breaking existing code.
Again: It is NOT impossible to fix JavaScript; the impossibility is an illusion that makes you accept JS.
The only thing that is even more painful than fixing a floor full of rusty nails pointing up... is to forever tolerate it. But that's exactly what a boiled frog does. "I already know where the rusty nails are, I don't step on them anymore."
This almost amounts to a Human Rights issue.
My advice to you is: are you writing a large web app? Then for the love of humanity, do it in anything but JavaScript.
There is a moderately popular project by Facebook called Flow. It lets you write static type annotations on otherwise JavaScript code, it checks the types as you write, and then it simply removes the type annotations in the end, leaving only your JS code. I consider Flow a good design – if you need to write JS, that is.
Microsoft answered the same question differently.
They hired Anders Hejlsberg, the guy who had created Turbo Pascal and Borland Delphi, to make derived languages for them. First they used him in their attempt to Embrace, Extend and Extinguish Java. Microsoft then lost a tremendous lawsuit to Sun Microsystems for that misstep, so they turned to the next best war strategy: make their own Java while denying all influence. Thus C# and the Dot Net Framework were born, or rather, cloned. To this day these people are affirming that "C# belongs to the C family of languages", while it really is Java with a couple of misfeatures removed. Hejlsberg was and is the main designer of C#.
In 2012 Microsoft announced another Hejlsberg creation: TypeScript, which has become the most popular compiles-to-js language. But instead of just adding types to JS (like Flow does), it is a bastard child of JS and C#. I imagine they gave Hejlsberg these contradictory goals: "We want C# for the web, but it also must be a superset of JavaScript". The superset bit means, if you paste JS into a TS file, it just works – all JS is valid TS. It also means TS has its own separate features, augmenting JS.
The fact is, this one-way compatibility with JS is probably why TS won. But you know what I am going to say, right?
TypeScript again decides not to fix any of the bad parts of JavaScript. TypeScript is a monstrous creation: it adds even more cool features, such as algebraic types, without first fixing the basics. The decision to be a superset of JS sealed TypeScript's fate; after that decision, being a good language was impossible. It presents the best language features and the worst language features in a single thing. TypeScript is the most hypocritical programming language in the world, and as such, it could only have been born at Microsoft. Or Oracle, Apple, Facebook or Google.
Learning TypeScript is learning dozens of weird, unexpected syntaxes in the type system – things that should be natural and much easier – and then forgetting them while you are coding.
Every developer has noticed that, if TS seems powerful, it is because there's an enormous amount of features for annotating types. It's not simple at all; it amounts to a tremendous cognitive burden. And newer versions never simplify anything, they only add to that burden. The developers of TypeScript take too many liberties, making it impossibly complex. I have found this frustrating, and I am not alone:
Rich Harris: "We also eliminate an entire class of annoying papercuts that will be familiar to anyone who has worked with the uneven landscape of TypeScript tooling."
Here is a video detailing the latest TS release. And here are some YouTube comments sharing my sentiment:
@tacochub4353: These updates are neat... sure, but I don't really see how these methods solve the plethora of issues with using TypeScript. All they seem to do is add unnecessary complexity to an already perplexing ecosystem filled with syntactical nuances.
@JamesDSchw: My beef with many TS releases over the years surround the cognitive load they incur - more syntax and language semantics to be able to model types in existing libraries in the ecosystem.
@universe_decoded797: Typescript solving things that are not problems to create more problems is problematic. ‘Simple things are hard to create’ is a true statement.
In short, you wanted to fix JavaScript and suddenly you saw TypeScript. It overloaded your senses with so much information and impression of power, that it seemed to be the right solution. The only thing everyone forgot was the actual problem: we need to fix JS.
To choose TypeScript, one must overlook two facts:
It is currently my opinion that an object-oriented programming language is perfect for creating user interfaces, even a pure OO language such as Smalltalk.
But here, let us ponder that an object-oriented approach greatly benefits from adopting a few lessons from functional languages. Functional programming is not the opposite of object-oriented programming; to a certain extent these can be combined. Also, object orientation today accepts that composition is better than inheritance most of the time. I favor a pragmatic approach that uses notions from both these worlds. Immutability only on certain kinds of information, and a conscious effort to create pure functions and unit tests for these – these are key to writing good code.
But the current wave of functional programming languages is another thing that a healthy reader should doubt. In about 15 years of people trying functional languages and immutability in the browser (either in JS or in functional languages such as Elm, Elixir and ReScript), the functional paradigm and the insistence on immutability have failed to deliver the cleanliness and developer productivity that were promised.
Here are some arguments so we can establish that functional languages and techniques are not the panacea:
While above I have tried my best to talk ill of functional languages… knowing what I know today, to develop user interfaces, I would reach for:
In 2009, Node.js brought JavaScript to the server, and now boiled frogs write their backend and frontend in the same language: the worst one. Someone help them!
Seeing this, Google unveiled their Dart language in 2011. You can think of it as the last Java clone, this time with better influences. Dart 1.0 came out in November 2013.
The initial plan for Dart was to include it in Chrome as a second native browser language, the good brother of JavaScript. This was criticized for fragmenting the web, so they gave up this idea in 2015 at the release of Dart 1.9. And then Google proceeded to dominate the web anyway – through countless bad standards – such that now it is financially impossible for anyone else to develop a new browser. We might as well have had Dart in Chrome, it would have been a tremendous blessing all these years.
There exists a parallel universe in which the frontend community gladly accepted Dart as their saviour when Google proposed it as a sane, parallel native language in the browser. I wish I lived in that universe. Frontend devs, you have Stockholm Syndrome.
Instead, Dart was more or less forgotten for a couple of years while Flutter was being developed; Flutter 1.0 was released in 2018.
Here are reasons why Dart is good for developing applications and GUIs:
There are no private, protected and public keywords; instead, the programmer simply starts a name with an underscore (such as _myVariable), and that makes it private to its library (in practice, the current file).
This is great language design, removing lots of noise in a single stroke.

Given the above, I would definitely write web apps in Dart, especially using its numerous frameworks for doing so; I would also write a large app component to be consumed by JS through a relatively small interface; but I would not write a typical JS library in Dart, unfortunately.
Going parallel to JS is unavoidable; that is why everyone wants WebAssembly to succeed: it is the only escape.
Dart is not perfect, but programming in it is bliss compared to JavaScript and TypeScript. There are alternatives out there; your responsibility is to choose something better than what everyone else is using, if you are smart.
The feared web fork is soon going to be required, and for all Web tech, not just JavaScript. Because the powers that be have introduced an enormous number of spectacularly failing standards:
The idea that these bad standards, plagued by complexity and inconsistencies, must remain in the Web forever for the sake of backwards compatibility is absurd and unsustainable. Of course one day this entire mess will be dropped.
The web platform has become so convoluted that only tech giants can afford to build browsers. This centralization of power threatens the open nature of the web.
When Firefox finally finishes failing, we'll be in the impossible situation of every browser being based on Chromium. This is due to the number of incredibly complex features and standards that a browser must implement. Thus the web no longer belongs to the people, it belongs to tech giants.
I am calling this right now: soon the people will create a "New Simple Web", from scratch, with simpler (but not necessarily more powerful) technologies, languages and protocols, to replace this Impossibly Big Ball of Backwards Compatible Spaghetti. This revolution will be painful in many ways, but it is clearly unavoidable. The most important values for the right technologies, languages and protocols will not be power, but cleanliness, simplicity and developer experience.
I believe the New Simple Web will look more like Flutter than anything else. It will be based on a single good language, with no separate language for formatting. It will tend to the pragmatic needs of writing applications, but it will still somehow make content public, as the web does today. Oh, and it will have no DRM.
In order to become popular, the New Simple Web will have to offer something to the users, too. Evidently, that something will be their freedom. By then Google will already be the dystopian, oppressive OCP it has decided to become, so it will be closing everything on the Web: mandatory ads, mandatory privacy invasion, mandatory taxes, mandatory DRM protecting THEIR content, which they actually stole from books, poorly paid videomakers and so on… you name it. This is what Chrome will be.
Other tech giants will try to create an Alternative Web in advance, but they will not provide the necessary freedom, and therefore they will fail.
Someone will rise to the challenge, present a clear picture of how the New Simple Web should be built, and do it. People will use Chrome for banking and gradually migrate to the New Simple Web for everything else.
And then the cycle will begin again, inasmuch as humans are bound to forget learned lessons.