This post expands on a recent conversation from Discord.
As of 2021, I'm the proud owner of a 2008-vintage Dell Optiplex 780 with a Core 2 Quad CPU. People go "tsk tsk, poor you" when I tell them, but my previous machine was even older, probably from around 2005.
On that older machine, I made a voxel renderer that ran in real time in software. In Python. It was kinda slow at 25 FPS, so I also ported it to Lua. That version ran at 40 FPS on the same hardware, or 60% faster.
On my new PC, the Python version runs at the framerate cap of 60 FPS without maxing out the CPU, so I have no idea how fast it actually is.
That's the exact same code. I haven't touched it since the first release. And it was naive code to begin with, because that's how I roll. No fancy tricks. The simplest thing that can possibly work.
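The post doesn't describe the algorithm, but for readers wondering what a "simplest thing that can possibly work" software voxel renderer even looks like: the classic approach is a Comanche-style heightmap column caster, one ray per screen column, no GPU involved. Here is a minimal illustrative sketch in Python; the tiny procedural map and every name in it are made up for the example, not taken from my renderer:

```python
import math

# Hypothetical tiny height map; a real renderer would load terrain data.
MAP_W, MAP_H = 16, 16
heightmap = [[(x * y) % 32 for x in range(MAP_W)] for y in range(MAP_H)]

def render_column_tops(cam_x, cam_y, cam_z, angle,
                       screen_w=64, screen_h=48,
                       fov=math.pi / 3, max_dist=30.0, step=0.5):
    """March one ray per screen column across the heightmap and return,
    for each column, the topmost screen row covered by terrain
    (screen_h means the column is empty sky)."""
    tops = []
    for sx in range(screen_w):
        ray_angle = angle + (sx / screen_w - 0.5) * fov
        dx, dy = math.cos(ray_angle), math.sin(ray_angle)
        top = screen_h  # highest terrain pixel found so far (y grows downward)
        dist = step
        while dist < max_dist:
            mx = int(cam_x + dx * dist) % MAP_W  # wrap the map at the edges
            my = int(cam_y + dy * dist) % MAP_H
            h = heightmap[my][mx]
            # Perspective-project the terrain height onto a screen row:
            # nearer and taller columns cover more of the screen.
            row = int(screen_h / 2 - (h - cam_z) / dist * screen_h)
            if row < top:
                top = max(row, 0)
            dist += step
        tops.append(top)
    return tops
```

A real frame loop would then fill each column from its top row down with the terrain color and blit the buffer. The whole thing is nested loops over plain lists, which is exactly the kind of code where CPython's per-operation overhead dominates and a Lua port picks up an easy speedup.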
Then again, my old computer could emulate the Super FX chip in software. In a web browser.
Do you realize how inefficient an app must be to bog down hardware from at most five years ago? While doing much less? And with the GPU helping, no less?
There's simply no excuse for bloat. None whatsoever.