This article obviously raises some excellent points, but I would like to pose the following question to the author:
"Do you honestly expect/predict that RDRAM will become the next memory standard, instead of DDR RAM or PC133 RAM?"
I ask this in light of the following issues:
1) Intel is the ONLY company backing RDRAM, yet current Intel RDRAM solutions run slower than Intel BX motherboards (not to mention VIA 133A!) according to nearly every real-world benchmark/application anyone has cared to try (according to the web, anyway!).
2) Intel is bringing PC133 SDRAM support with the upcoming i815 chipset, and is currently recalling many RDRAM-based i820 motherboards.
3) Common opinion *appears* to be that RDRAM is dead. Even if it has potential that few people are informed enough to understand (such as yourself), RDRAM is suffering from BAD PRESS. That can kill a product, regardless of performance.
At the risk of repeating myself: Sander, did you buy shares in Rambus, or what? There are countless benchmarks that show that Rambus is doing worse than even plain SDRAM in real-life applications. Now we see DDR can run circles around it. (As expected.) Even when the RDRAM is running on an 840 chipset, the most expensive one, which already interleaves RDRAM channels for double bandwidth.
On WHAT do you base your statement that "RDRAM is not perfect, but it is currently one of the most promising solutions to bandwidth, latency and propagation delay problems and is scalable, a distinct advantage"? Just about every single real life measurement shows that RDRAM has WORSE latency and is NOT scalable. In fact, it is well documented that adding more RDRAM _increases_ latency, and decreases performance. You call that scaling? Based on WHAT? On pure faith in the Holy Rambus?
You say, and I quote: "As much as I value his hard work and dedication, this has not resolved the RDRAM vs. DDR issue, but has only given some less educated, ill-informed people who bear a grudge against Rambus and Intel another unfairly argued article to use as leverage." I'll ignore for now the insulting implication that all of us who don't buy Rambus's hype and vapourware are uneducated and ill informed. It may interest you that I do have a degree in CS, and so do a lot of others.
But, ok, if that article is unfairly argued, how about showing us YOUR OWN measurements? So far I've only seen you present some selectively incomplete theoretical data that carefully omitted all the factors which DO make RDRAM perform worse in real life. For better or worse, unlike you, he did use some real benchmarks. If you don't like them, how about giving us something else instead?
You also say: "The Intel motherboard he chose is one of the slowest I've ever benchmarked." Umm... OK, so how about telling us what board he should use instead? I doubt it will make any difference, given the magnitude of the difference in those benchmarks. We're not talking 1-2 percent here. But let's hear a suggestion anyway.
Moraelin -- the proud member of the Idiots' Guild
Let me start off by thanking you all for your honest and upfront comments. I'm in the midst of my research for the follow-up, as I'm not about to leave you all to find out the truth of the matter on your own. I'll also be sure to provide answers to the questions you've raised and to any issues that I may have left out of the initial article.
Many of you also suggested that I'm 'bought' by Rambus or Intel; let me just assure you that that's not the case. I'm just as eager to find out what this RDRAM vs. SDRAM discussion is all about as you are. Many of you also state that I'm ill-educated and should leave this kind of stuff to the 'professionals'. Just to satisfy your curiosity, I have a master's degree in computer science and a bachelor's in micro-system technology; you can rest assured that I'm capable and on top of things.
Regarding benchmarks: there are many ways to conduct a benchmark, and there are even more ways to interpret its results. Choosing the wrong benchmark can spoil the outcome and vice versa. Benchmarks are usually written to demonstrate a product's performance, and when issued by the product's manufacturer, they are mostly tailored to the product's features. These kinds of benchmarks generally give an estimate of how the product performs in relation to others, but more importantly they show the performance that the product is capable of delivering.
And that's exactly the problem, because 'capable of delivering' and 'actually delivering' are two entirely different things and must not be interpreted as being the same. Real-world performance is an entirely different story from a controlled benchmark environment. Even the fastest CPU/system can be crippled by an ill-configured system or by software which is not tailored to make use of the CPU's/system's features.
Furthermore, if you know how to 'benchmark' a product, it is very easy to make one come out on top and the other come in second place by focusing on the performance-enhancing features of one and neglecting those of the other. Just to give you an example, Apple's G4 CPU was boasted to offer 'supercomputer performance on the desktop', but after close examination of the benchmarks and SIMD optimizations used, it is very clear that it only does so under certain conditions. If a benchmark had been used which did not include the SIMD optimizations, the outcome would have been entirely different.
So nothing new here; as the saying goes, 'Lies, damn lies and benchmarks'. To do away with some of the obvious pitfalls, I decided not to include benchmarks in the initial article, as that would take the focus off the theoretical discussion. I will, however, include them in the follow-up, after I have found a way to benchmark the different memory types properly and, more importantly, objectively.
I'm currently talking to Rambus, Intel, AMD, Samsung and Micron about how to best go about this and provide results that are accurate and can be easily reproduced by anyone willing to do so.
As much as I would like to participate in this discussion thread and answer everyone's concerns individually, I cannot due to time constraints. I hope you all understand.
Well, ok, I guess we all could use a good and representative benchmark. Only personally I'm a lot more interested in the performance of real-life applications than in synthetic benchmarks. In the end, that's what we all buy computers for, not to run benchmarks. Dunno, for me THAT is representative.
Like, for example: how long does it take to render a scene in some CAD packages or ray tracers? Basically, you can just take POVray and render something with it. Or what are the frame rates in some popular games? Or how many pages per second can the web server of your choice serve, using ASP, CGI or Java servlets? (Static content doesn't really stress even a low-end server nowadays. In real-life situations you'll run out of bandwidth long before you hit any significant CPU usage.) Or how fast does SETI process the same packets on DDR and RDRAM? (SETI is known to be _very_ memory intensive, which is why a 1GHz Athlon won't finish faster than a 500MHz one, but an Ultra 10 will finish in less than half that time.)
BTW... in InQuest's benchmark there was _one_ DDR channel against _two_ RDRAM channels in parallel. Methinks that defeats the whole argument about RDRAM's advantage of fewer lines and less circuitry. Basically, when you put channels in parallel, you're back to large bus widths, and we're told that's bad. (OK, so we all know that's bogus, but just for the sake of believing Rambus.) And, hey, that's unfair to DDR. Maybe the upcoming representative benchmark should pit one channel against one channel? That should show who's on top, fair and square. We wouldn't want "another unfairly argued article" to lead us "less educated, ill-informed people" astray, now would we?
Moraelin -- the proud member of the Idiots' Guild
[This message has been edited by Moraelin (edited 04-21-2000).]