It also has a lot to do with driver optimizations. E.g., nVidia was never too good at optimizing for AMD's 3DNow!, though they're slowly getting better at it. E.g., 3DFX was always a better match for the K6-2 and K6-3 than the TNT2. E.g., everyone blames the Athlon's L2 cache for the lower frame rates with a GeForce. But the reality is: with most other video cards and drivers, the Athlon spanks a P3 for frame rates. (With a Viper II, an Athlon 650 beats a Coppermine 733 quite comfortably.)
Etc, etc, etc.
Well, I guess it's as they say: lies, damned lies, and benchmarks.
Now seriously, benchmarks were not invented as a marketing tool. (Or shall I say... benchmarketing?) They were supposed to be a tool used by engineers, who know exactly what they're doing, what those scores mean, etc. And most likely towards the goal of finding out what's wrong with a design, and trying to optimize it. And to that end, benchmarks are still very useful. E.g., IBM's very fast Java virtual machines are by and large the result of hundreds of micro-benchmarks designed especially to test every single part of the design, and isolate bottlenecks.
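The isolate-one-piece-at-a-time idea behind those micro-benchmarks can be sketched in a few lines of Python. The two string-building functions here are purely illustrative stand-ins, not anything from IBM's JVM work:

```python
import timeit

# Two candidate implementations of the same small operation -- the point
# of a micro-benchmark is to isolate one piece and compare alternatives.
def concat_with_plus(n=1000):
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_with_join(n=1000):
    return "".join("x" for _ in range(n))

# Time each candidate in isolation, many times, to average out noise.
for fn in (concat_with_plus, concat_with_join):
    t = timeit.timeit(fn, number=2000)
    print(f"{fn.__name__}: {t:.4f}s")
```

The numbers only mean something because each run measures exactly one thing; as soon as you benchmark a whole system at once, you can no longer say which part is the bottleneck.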
The problem arises when said benchmarks are used by marketroids on users who have no clue what those numbers mean. Or an even bigger problem is when unscrupulous companies manipulate bugs in the operating system or the benchmark programs, to make their product look faster in some popular benchmarks -- even though it's slower in everything else.
E.g., see Intel's first UDMA drivers, which basically abused a bug in the Windows 95 caching code: if you confused the cache well enough, it would cache even requests for non-cached reads. Thus, in many benchmarks the hard drive's throughput would appear much higher than was even physically possible for that rotation speed and recording density.
Moraelin -- the proud member of the Idiots' Guild
Nah, Benchmark and BullSh*t are not the same thing. I know, I'm still at school.
Here is my reasoning:
If I went into the headmaster's office and shouted "BULLSH*T" as loud as I could, he would scream at me, put me in detention about 6 times, and possibly take away my prefectship. However, if I went into his office and shouted "BENCHMARK" as loud as possible, he would think I had gone insane, but probably not punish me.
Of course I agree that benchmarks are not exactly reliable for vid cards, CPUs, memory, or anything else. I saw one once that claimed the Cyrix MII was faster than a PII. Never been an Intel fan myself, but even I cannot see that this is in any way true.
Could probably find a bench someplace that would put an S3 Virge PCI ahead of the GeForce2 GTS if you really looked hard enough.
My train of thought is a runaway!!
AHHH, I think my computer's got a Virus!
Oh no, that's Windows XP
Chimaera: Twin Athlon MP 1900+ @ 1824Mhz, GeForce 3 TI500, 512Megs Crucial RegECC, 100GB Total HDD, SuSE Linux 7.3 + Windoze XP Pro
One thing benchmarks are very good for is tweaking your system.
You run the benchmark once to get a baseline to work from. Then you start tweaking, and after each change you make you run the benchmark again and see the results of your changes, good or bad.
It is important to change things one at a time so you can see direct results in the benchmarks that you run.
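That tweak-and-measure loop can be sketched in Python. The `run_benchmark` function and the `work_units` setting here are made-up placeholders for whatever your real benchmark and BIOS/driver knob would be:

```python
import time

def run_benchmark(settings):
    # Stand-in for a real benchmark run; here the "workload" is just a
    # loop whose size depends on a hypothetical setting.
    start = time.perf_counter()
    sum(range(settings["work_units"] * 100_000))
    return time.perf_counter() - start

# 1. Establish the baseline with the current settings.
settings = {"work_units": 10}
baseline = run_benchmark(settings)

# 2. Change exactly ONE setting, then re-run the same benchmark.
settings["work_units"] = 8
tweaked = run_benchmark(settings)

# 3. Compare against the baseline to see whether the change helped.
print(f"baseline: {baseline:.4f}s  tweaked: {tweaked:.4f}s")
print("improved" if tweaked < baseline else "regressed")
```

If you had changed two settings between runs, the comparison at step 3 couldn't tell you which one caused the difference, which is exactly why you change things one at a time.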
Exactly, especially when changing some of the DRAM settings in the BIOS. Also, checking posted benchmarks can tell you something about your system: whether you need tweaking, or whether you've done an excellent job tweaking. Problem is, some of these posted benchmarks do seem to be biased and slanted towards particular hardware manufacturers. You just have to sort through some of the bs to get information you can use. I like to check multiple sites and take an average, throwing out any obvious high or low marks. What bothers me more than benchmarking being used as a marketing tool are features being used as marketing tools which are non-functional in the product you buy. Of course these can be enabled at a later date, right?