Compiler authors face a tradeoff between compiler speed and executable speed. Take longer to build a binary and you can build a better binary.
There is a counter-intuitive condition where this tradeoff breaks down. There are programs where making the compiler faster (by building a worse binary) makes the program faster! Under the right conditions you can also make the compiler slower (by building a better binary) and make the program slower.
When you have a small team that writes tests and a compiler that is on the boundary of interactive speed, compiler speed dominates performance.
Here is how it works. I am writing some code and I run the tests:
$ go test tailscale.io/taildb
ok      tailscale.io/taildb     0.024s
Test execution took 0.024s and total compile + test time was under 200ms.
Now I make a change,
$ go test tailscale.io/taildb
ok      tailscale.io/taildb     0.324s
Test execution now takes 0.324s and total compile + test time was about half a second.
This is not a jump in test execution time I am likely to notice skimming stdout as I am developing. This is ordinary code: it needs to run at an adequate speed, but I am not explicitly working on performance.
But I feel it.
Suddenly the command that finished as soon as I pressed enter is stuttering. Is my computer doing something else? What's wrong? Oh, I wrote something slow.
The delta between the old fast code and the new slow code is in my head because I am working on it right now. The stuttering command line is a subtle UX poke, hey, you just ruined a nice program. I can spend a minute to see if there is any obvious mistake, and fix it right now. If need be I can ignore it and plow on.
It is here, at the edge of interactive programming, that compiler speed is vital to program performance. If a new compiler release adds enough compile time that I can no longer feel when I break a program, then I will start missing these moments. My code will get slower.
Similarly if a new compiler release makes the compiler faster, I will start noticing my own bad code more often.
Different programming languages are used to write different kinds of programs. (A great deal of programmer time is spent discussing this topic, I suspect the strongest forces that affect this are path-dependence and aesthetics, so I stay out of it.) Programming languages that are optimized for big projects tend to have slower compilers that produce higher-quality executables.
When compiling C or C++ with gcc or llvm, programs quickly reach a point where project compile time is non-interactive. This is fine, because the projects where C/C++ programmers congregate typically take tens of minutes to hours to build.
Large teams invest in tooling to get performance. That test execution number I skimmed earlier, 24ms, is logged by software that records all compilations, and is tracked against its historical execution time. When it jumps to 324ms, something lights up in red on a dashboard, and a release engineer sends an email. The commit is found, and someone goes and fixes it.
This works! Chromium has a huge team and shockingly-slow compile times, and yet the product is beautifully fast. Large organized teams can make these investments.
Indeed, the slow compiler is exactly what these teams want. If you can add a minute to Chromium's build time with a sophisticated compiler analysis pass that shaves 1ms off the typical frame rendering time of Chrome, everyone will cheer.
I have heard the argument that we should simply make the sophisticated tooling of Chromium easy for small teams to adopt, but this does not work. If I get a dashboard notification tomorrow that I made the code in my example slower, I will ignore it. I am not in the moment, thinking about the exact problem, and it is almost never worth revisiting it for performance. There was exactly one moment when I was willing to consider the performance of that code, and it was the moment my shell command stuttered.
So small teams and large teams need different compilers.
Our industry has aligned compilers with programming languages. That means in practice it is best if small teams and large teams use different programming languages. This is unfortunate, both because I have to listen to people tell me that some languages are slow and others are fast (when the language is being conflated with its compiler), and more importantly: sometimes small teams with small programs become big teams with big programs. Now they are using the wrong compiler. They need a slow sophisticated compiler, but none is available for their language.
I don't know how to solve this. Fortunately it is an uncommon problem. There is, however, another related problem that is extremely common and deserves more attention.
No matter how fast the compiler, there comes a point at which a project has grown such that compile times are no longer interactive. To pick on a particular language again (because I spent some time in its compiler and will make fewer mistakes discussing it specifically), I have experienced this several times when writing Go programs, because the Go compiler is on the knife-edge of interactive performance. Every release the compiler gets faster, or slower!, and the line moves.
I believe this point should be measured.
A lot of engineering effort goes into making the Go compiler fast, but the measurements don't reflect the user experience. Compiler performance is measured as percentage changes in the time it takes to build a significant body of code.
It does not matter at all if a project that takes two minutes to build is 5% faster this release. It takes two minutes! Make it one minute or three minutes, it does not matter. On the other hand, it is a really big deal if a project that takes one second to build now takes 500ms.
If a compiler team wants to focus on the interactive compilation experience, then they should measure absolute wall-clock build times for small, representative projects, and treat a build that crosses out of interactive speed as a regression, regardless of how small the percentage change is.
Interactive compilation is wonderful UX.