No Time To Play

Measuring hardware performance

by Felix Pleşoianu on Jul.02, 2015, under Case study

I recently had to swap my refurbished computer for a hand-me-down, to get rid of an annoying hardware defect. Both have the same amount of RAM and storage: 2 gigs and 80 gigs, respectively. The difference is in the CPU and GPU, and that’s where the comparison becomes very interesting.

You see, the old one was an AMD Sempron 64 rated at 3000+ (real clock speed 1.8GHz), with an embedded nVidia 8800 for video. The new one is an Intel Atom 330 at 1.6GHz, dual-core and hyperthreaded, with an Intel GMA 950 accelerator. You’d think multiple cores would help a lot with performance, but each individual core is slow as molasses by modern standards (which is absurd and ridiculous, but there you have it), and most software isn’t multithreaded, so it can’t take advantage of the extra cores. The result? Overall, a more responsive system, as one misbehaving process can’t hog the entire CPU anymore. But individual apps are now over 50% slower…

Good thing the next games I’m planning are all turn-based.
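To put the core-count argument in concrete terms, here is a minimal Python sketch; the busywork function and the task sizes are invented for illustration, but the shape of the comparison is the point: spreading work across processes only helps if the software actually splits the work.

```python
# Sketch: wall time of a CPU-bound workload run serially on one core
# versus spread across worker processes. A single-threaded app sees only
# the (slow) per-core speed, no matter how many cores the chip has.
import time
from multiprocessing import Pool

def burn(n):
    # Arbitrary CPU-bound busywork standing in for a real workload.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    tasks = [200_000] * 8  # invented workload sizes

    start = time.perf_counter()
    serial = [burn(n) for n in tasks]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(4) as pool:  # 4 hardware threads, like the Atom 330
        parallel = pool.map(burn, tasks)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.3f}s, parallel: {t_parallel:.3f}s")
```

On a machine with real cores to spare, the parallel run should finish noticeably faster; run a single-threaded program and you are back to the serial figure.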

As for the GPU? Suffice it to say, Super Tux Kart — a lightweight game by any standard — used to run at roughly 70FPS on the 8800 with default settings (and original nVidia drivers), while on the GMA 950 it crawls at under 10FPS, with quality turned most of the way down. In fact, turning down the settings didn’t seem to make much of a difference at all.

Somehow, the game is still perfectly playable anyway.
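For what it’s worth, FPS figures like the ones above boil down to averaging frame times, the way an in-game counter does. A minimal sketch, with per-frame durations invented to roughly match the two machines:

```python
# Sketch: derive an average FPS figure from a list of per-frame render
# times. The sample durations are made up for illustration.
def fps(frame_seconds):
    # Average FPS = frames rendered / total wall time.
    return len(frame_seconds) / sum(frame_seconds)

fast = [0.0143] * 100  # ~14.3 ms per frame, as on the 8800
slow = [0.110] * 100   # ~110 ms per frame, as on the GMA 950

print(round(fps(fast)))  # -> 70
print(round(fps(slow)))  # -> 9
```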

But, seriously? I knew Intel made crappy graphics chipsets. (Watching Yo, Frankie! struggle to run on an Asus Eee PC 701 was an exercise in morbid fascination.) But that bad? I used to be able to run WebGL demos with software rendering, meaning the single-core AMD could handle both the JavaScript interpretation and the graphics calculations, all by itself. The new machine chokes just trying to load the same demos…

How the fuck do a couple of processor cores AND a graphics accelerator get their virtual asses handed to them by a single-core CPU from the same era and price category?

I’m beginning to think modern operating systems — even Linux — waste a lot more of the machine’s raw hardware power than generally believed. I’m beginning to think SMP doesn’t work nearly as well as hyped.

I’m beginning to think benchmarks are bunk and we have no clue how to measure performance in the real world.
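Part of the trouble is that a single averaged number hides exactly the behavior you feel. A minimal sketch (the timings are invented): two runs with the same mean frame time, where one stutters badly, and percentiles tell them apart.

```python
# Sketch: a run with one long stall can have the same MEAN frame time as
# a perfectly steady run, yet feel far worse. Reporting a high percentile
# (nearest-rank method) exposes the difference. Timings are invented.
def percentile(samples, p):
    s = sorted(samples)
    # Nearest-rank percentile, with p an integer 0-100.
    k = min(len(s) - 1, p * len(s) // 100)
    return s[k]

steady = [0.020] * 100           # 20 ms every frame
stalls = [0.010] * 99 + [1.010]  # mostly 10 ms, one 1-second hitch

mean_steady = sum(steady) / len(steady)
mean_stalls = sum(stalls) / len(stalls)
print(mean_steady, mean_stalls)  # both come out around 0.02
print(percentile(steady, 99), percentile(stalls, 99))  # 0.02 vs 1.01
```

A benchmark that publishes only the first pair of numbers would call these two machines equivalent; anyone actually playing on them would not.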

And you know what the worst thing is in all this mess? My computers aren’t even that old. I have plenty of friends — and I mean in the US, not second-world Romania — who would love to have my old one. To them, it would be a noticeable upgrade. They’d love to buy some PC games, too, but you see, they can’t afford the absurdly expensive machines required for modern titles. So instead they stick to Nintendo’s mobile consoles, which don’t assume their buyers are all rich.

How have your game sales been as of late?

Creative Commons License
Measuring Hardware Performance by Felix Pleşoianu is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


2 Comments for this entry

  • Tom H.

    I’m not sure what you mean by “embedded” 8800, but it sounds like there is absolutely no comparison between the two GPUs. The 950 is a mobile chipset that has NO vertex shader support – things that have been on normal GPUs for a decade or more aren’t supported and are offloaded to the CPU, where they’re running in the driver on the same core as your application. I’m not a driver engineer but I wouldn’t be surprised if the emulate-vertex-shaders-on-the-driver codepaths were not on anybody’s list of things to optimize.

    The 950 draws only 7W of power. The 8800 was a high-end desktop GPU on release, with 30x as many cores as the 950 and a peak power draw of 105W, 15x that of the Intel part.

    • Felix

      It’s possible that I’m misquoting the chipset name. 8000-something? It’s an embedded nVidia GPU that apparently went into some AMD motherboards at one point. Certainly not a big bad GeForce adapter. But yeah, it clearly displays the performance of a real GPU, while the 950… I wonder why they bothered with it at all.
