I regularly encounter discussions about “green coding” and how to make your applications use less power. Recently, a publication by Rui Pereira et al. from 2017 on the CPU and memory footprint of different programming languages came to my mind again.
While I fully agree with the results of this and similar studies, I don’t think they are helpful, because they do not account for the main source of energy waste: design inefficiency.
Here’s a recent example of what I mean by design inefficiency. The task: download 123 photos from a link someone shared with you via Google Drive, and store those files on your home NAS. This is what I had to do to copy less than 1 GB of photos from Google Drive:
- Open a web browser
- Navigate through a clumsy web application to the folder you need, while your fans start to spin
- Click on Download
- Google now zips all the files for a convenient single-file download, which is neat 😀
- Realize that your browser is not supported, so repeat steps 1–3 in Google Chrome
- Google repeats the zip process, but this time you can finally download the archive
- Unpack the zip archive locally
- Finally, move the files to the home NAS, where they will be re-compressed according to the filesystem defaults
To summarize:
- The full deflate algorithm, run twice over all 123 photos
- One additional compression pass at the very end (which shouldn’t count, since it was not part of the assignment)
- Still, it’s a compress-decompress-compress chain (see the sketch below)
- I needed to start two browsers, because we broke one
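
To put a rough number on the doubled deflate work, here’s a minimal, self-contained sketch. It is not Google’s actual pipeline; it just replays the same compress-twice-unpack-once pattern with Python’s zlib on stand-in data:

```python
import os
import time
import zlib

# Stand-in for the 123 photos: random bytes behave much like already-
# compressed JPEG data, i.e. deflate gains almost nothing on them.
data = os.urandom(512 * 1024)

start = time.perf_counter()
wasted  = zlib.compress(data)       # zip pass #1 (download failed in Firefox)
archive = zlib.compress(data)       # zip pass #2 (retry in Chrome)
photos  = zlib.decompress(archive)  # local unpack
elapsed = time.perf_counter() - start

ratio = len(archive) / len(data)
print(f"archive is {ratio:.1%} of the original size")
print(f"{elapsed:.3f}s of CPU time, half of it for an archive nobody downloaded")
```

With incompressible input like this, the archive even ends up slightly larger than the original. Photos are already compressed, so the zip step burns CPU for essentially zero transfer savings.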
I’m sure Google could improve their power efficiency by using an efficient deflate algorithm written in C. But I deeply believe that even an optimistic 30% power reduction is only a marginal improvement when the whole zip process has to run twice, because it was not working in Firefox.
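
A quick back-of-envelope calculation makes the point; the 30% is the assumed figure from above, everything else follows from the doubled run:

```python
# E = energy of one naive zip pass over the folder.
E = 1.0
optimized_once     = 0.7 * E             # assumed 30% saving from a C deflate
optimized_repeated = 2 * optimized_once  # but the process ran twice

# 1.4 E vs. 1.0 E: the optimized-but-repeated flow still costs 40% more
# than a single naive pass would have.
print(optimized_repeated > E)  # True
```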
On top of this, the whole zip process could have been avoided if Google provided WebDAV or a similar transfer protocol instead of just a clumsy web browser. I’m not talking about some helper-Chrome-addon-extension thingies that break once every two months: if there is no native support for this, it simply doesn’t exist.
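
For illustration, this is roughly what fetching the shared folder could look like over WebDAV. The endpoint, credentials, and folder layout below are made up (Google Drive offers no such thing, which is exactly my complaint); the PROPFIND/GET requests themselves are plain WebDAV over HTTP:

```python
import requests
from xml.etree import ElementTree as ET

# Hypothetical WebDAV endpoint -- purely illustrative.
BASE = "https://drive.example.com/webdav/shared-folder/"
AUTH = ("user", "app-password")

# PROPFIND with Depth: 1 lists the direct children of the folder.
resp = requests.request("PROPFIND", BASE, auth=AUTH, headers={"Depth": "1"})
resp.raise_for_status()

ns = {"d": "DAV:"}
for node in ET.fromstring(resp.content).findall("d:response", ns):
    href = node.find("d:href", ns).text
    if href.endswith("/"):  # skip the folder entry itself
        continue
    name = href.rsplit("/", 1)[-1]
    # One plain GET per file: no server-side zip, no local unzip,
    # streamable straight onto the NAS.
    with requests.get(BASE + name, auth=AUTH, stream=True) as r:
        r.raise_for_status()
        with open(name, "wb") as f:
            for chunk in r.iter_content(64 * 1024):
                f.write(chunk)
```

No compression round trips, no browser requirements: any WebDAV-capable client, including many NAS systems, could pull the files directly.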
To come back to my original point: comparisons on the level of individual programming languages are useful for isolated optimization considerations, but in my opinion their impact is vastly overstated. They simply cannot compensate for a wasteful approach in the first place. Design inefficiency is by a large margin the worse enemy to fight.
I don’t care about a 20% difference in CPU or memory footprint between Go and C. If I have to do the same process three times, those 20% are absolutely negligible compared to the astonishing 200% efficiency loss on the way to my goal. In other words: an obsidian sword might be better against a level 3000 dragon, but unless my objective is to fight this dragon, I might be better off avoiding the dragon instead of optimizing my battle gear (this dragon gives no XP, nor does it hold a treasure).
This is what we need to fix first. Comparisons on the level of programming languages are only a secondary optimization objective, and optimizing there is almost certainly wasted effort as long as the general approach is inefficient by design.