Lies, Damned Lies, and a Different Perspective

Dr. John

New Member
#2
Hi Sander,

I noticed that your performance section was entirely theoretical, without any real-world benchmarks. Your article would have made a much bigger impact if benchmarks had been used to prove your points. Based on the benchmarks we have seen, it would not be possible to show that Rambus actually outperforms SDRAM, so maybe theory is all you have to go with.

Finally, you did not even mention Double Data Rate memory, which has higher bandwidth and lower latency than SDRAM. It is also much cheaper than RDRAM.

Rambus will never catch on in the PC industry for one simple reason: everything in computers is getting less expensive, and consumers won't go for a high-priced alternative unless it offers very distinct advantages, which Rambus does not.

Dr. John
http://www.kickassgear.com
 
biohazard2

New Member
#3
Speed is good when talking about RAM, no argument there. BUT is selling your soul and first-born child worth it, just so you can buy the RAM? It would all depend on what I need the RAM for.

------------------
thinking...

damn does that hurt
 
KaoShen

New Member
#4
I quote:

"The total SDRAM system latency is:

40 + (2 x 10) + (3 x 10) = 90ns for PC100
45 + (2 x 7.5) + (3 x 7.5) = 82.5ns for PC133...

...the RDRAM system latency is:

38.75 + 10 + 18.75 = 67.5ns for PC800...

Measured at either the component or system level, RDRAMs have the fastest latency."

Ok, fair enough. So, I've been misinformed, then, when I'm told that RDRAM has higher latency than SDRAM. Or at least, that view is wrong in the face of the official definition for memory latency. However, the article goes on to say, and again I quote:

"For example, a program that uses random database search using a large chunk of memory will 'thrash' the caches, and the low latency of SDRAM will really shine."

Didn't we just finish learning that RDRAM has lower latency than SDRAM?? How does this make any sense? I don't pretend to fully understand all of the technical information in this article, but I'll hazard a guess... I've heard it said in many places that the good old i440BX chipset has very low overhead when it comes to getting info into and out of memory.

If RDRAM actually has lower latency than SDRAM at the component level, then any difficulties imposed on its latency must appear somewhere else... and I think Intel's new chipsets may be the culprits (i820, i840). At the very least, many have said that they have exceptionally high overhead, especially when compared to that displayed by the 440BX. I have no idea if this is actually the case, if anyone can shed some light here, please do so.

Another confusion... the example task given for SDRAM, a "random database search". What the hell is a "random" search? It's my understanding that when you're searching for something, you a) know what you're searching for, and b) perform the search in the most efficient manner possible!! From the small amount I know about programming and search engines, I don't think anybody anywhere would ever code anything to search anything completely at random. It makes no sense.
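
My best guess, and it is only a guess, is that by "random" the author really means an unpredictable access pattern, with addresses scattered all over a big block of memory, rather than a literally random search. Something like this toy C comparison (my own sketch, nothing from the article) of walking a big array in order versus hopping around it unpredictably; the second loop is the kind of access that thrashes the caches:

/* Toy sketch (mine, not from the article): sequential vs. "random"
 * access over a buffer far bigger than the L1/L2 caches. Both loops
 * touch the same number of elements, but the second one defeats the
 * caches and the burst logic, so raw memory latency is what you feel. */
#include <stdio.h>
#include <stdlib.h>

#define N (8UL * 1024 * 1024)   /* 8M ints = 32 MB, far bigger than any cache */

int main(void)
{
    int *buf = calloc(N, sizeof(int));
    if (buf == NULL) return 1;

    long sum = 0;
    unsigned long i, j = 1;

    /* Sequential: the next element is nearly always in cache already,
       or arrives for free as part of the same burst. */
    for (i = 0; i < N; i++)
        sum += buf[i];

    /* "Random": jump to a pseudo-random index each time, so almost
       every access is a cache miss that has to wait on main memory. */
    for (i = 0; i < N; i++) {
        j = (j * 1103515245UL + 12345UL) % N;
        sum += buf[j];
    }

    printf("%ld\n", sum);   /* keeps the compiler from discarding the loops */
    free(buf);
    return 0;
}

Time those two loops separately on a BX board and on an i820 board and you would have a crude measure of exactly the case the article describes.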

One of the reasons RDRAM is getting a bad rap is its poor benchmarks on games (I'm sure you've all seen the article on Tom's Hardware Guide by now). Let's face it, games drive a significant (perhaps even THE most significant) portion of the high-end home PC market. And games don't access large chunks of data from memory in a sequential fashion; they grab whatever they need, whenever they need it, wherever in memory it happens to be. Which is important to me, as I am a gamer first and foremost.

Now, eventually, game programmers may optimize their use of memory to take advantage of RDRAM's ability to grab large chunks. Until this happens, however, SDRAM systems (and most likely DDR-SDRAM systems, when they appear) will post better frame rates than RDRAM systems. And better frame rates = better sales in the PC enthusiast market. Granted, that's not the be-all and end-all of the computer industry, by far, but I can tell you it's the only part of it that I care about when I'm shopping for parts.

So what's my point? The theoretical implications of this article are all fine and dandy, but until a quality RDRAM chipset appears (IMHO, there hasn't been one nearly up to snuff yet) I won't be buying one, nor would I recommend one to anybody else. As I believe has already been said, theory isn't nearly as important as real-world performance.
 
stribe

New Member
#5
Sander:

I'd like to follow up Dr. John's post with the same idea but a different tack. He points out that you don't include any benchmarks to prove your assertion that RDRAM is better than SDRAM. I would also point out that you should have been a bit more specific as to why you felt the other sites you mention were not being objective when they critiqued and criticized RDRAM. I don't feel like naming other sites on here, since I think that's not very polite to you, but a certain hardware guide run by "Tom" not only posted an initial statement as to why they thought RAMBUS was a bunch of hype (as well as tests), but was also good enough to respond to attacks by RAMBUS supporters on the original article with a follow-up containing still more benchmark tests and a point-by-point reply to their critics. (Van Allen was the writer of both articles, by the way, not Tom.)

Just because sites criticize a technology you may like doesn't make them biased or non-objective. You make generalizing statements about certain sites charging that Intel wants to get our money or that Rambus is owned in part by Intel, but I don't ever recall reading that in any of the articles I have seen. What I've seen are actual, concrete tests showing Rambus isn't all it's been made out to be. Be less vague next time when making this type of claim.
 
Peas-n-Ques

New Member
#6
What the article so pointedly left out was that, due to the current RDRAM architecture, it gets hot and so must power down to a 'standby' state when not in active use. When it is needed, it has to 'power up' again, causing more of a delay. Once someone shrinks RDRAM to, say, a .18-.20 micron die, the power consumption and heat generation will be lower, and perhaps it will no longer need a standby mode. When that is the case, I will be on the RDRAM bandwagon, but not until then.

------------------
-- Mind your P's & Q's

[This message has been edited by Peas-n-Ques (edited 04-06-2000).]
 
malhavok

New Member
#7
I find an excess of theory and an absence of real-world engineering facts in your article.

I would very much like for someone here to study, in a VLSI simulator if you have no benchmarking setup, the effect of a cache miss on the performance of RDRAM. You might reverse your most erroneous conclusions. My own conclusion is that it would take 8-channel (all we have now is dual-channel) PC-1600 (all we have now is PC-800) RDRAM to outperform DDR-SDRAM running at 150 MHz.

Remember that the physical path the electrons must take through RDRAM is many times (8x-10x) as long as for SDRAM and DDR SDRAM. This causes a huge latency problem, despite all the theoretical blather to the contrary.

All performance increases in personal computing have recently come from the concept of doing more work in fewer cycles. I can hardly believe that anyone would argue for a technology that does exactly the opposite.

At a grand per 128 MB RIMM, the case for RAMBUS memory technology is quite thin. Let us revisit this a few years from now, when the price is somewhat affordable.

Karsten Hendrick
[email protected]
 
stribe

New Member
#8
My addendum to this message is: if Hardware Central and you, Sander, have a specific problem with some of the benchmarks that "Tom"'s site used, you should have stated the reasons why their benchmarks were flawed, rather than charge unnamed sites with being biased or not objective. I use that site as the main example because everyone else seems to be basing their information on his benchmarks. His site seems to have been the most thorough in looking at the RDRAM question.
 
JORBATSD

New Member
#9
Dear S.,

I, and maybe the whole world, might just think that buying RAM that costs more than your whole system is not a smart thing to do, especially when it only runs on Intel chips. As for me, I'll get an Alpha or a high-end Athlon (not a costly cumine) for my PC.
 
spencertk

New Member
#10
Sander,

The theory is nice, but where are the real-world numbers? There are zero examples to be found anywhere of RMBS consistently beating PC133 (or even PC100).

I propose that you, or someone else with a little C experience, write up the simplest of test cases and run tests on PC100, PC133, and i820/i840 using PC800 Rambus memory.

Here are the suggested guidelines:

1) Use a Unix or Linux system in single-user mode so that you can be SURE that there are ZERO background processes messing with the test.

2) Write simple C code that builds large memory arrays of 10-100 MBytes that are filled and then accessed.

3) Use various word sizes in a straight linear progression from 1 byte to 64 bytes. The purpose of this is to stress performance on small word sizes, odd word sizes (broken boundaries with the odd-numbered sizes > 4 or 8 bytes), medium word sizes, and large word sizes. (A data plot should show discrete performance steps in it.)

4) Test using a single RIMM, two RIMMs, and three RIMMs. (This will highlight the latency increase in the daisy-chained Rambus design.)

5) Measure both the fill times (memory writes) and access times (memory reads). (Writing to memory is often 30% or more of the workload in a heavily loaded server system with lots and lots of buffer caches for disk I/O, DBMS, etc.)

6) Time the fill and access speeds of the various memory array sizes as:

a) Single dimension - linear, low to high
b) Single dimension - linear, high to low
c) Single dimension - prime-number incremental address change (from a simple, repeatable loop segment, to be sure it gets into cache and stays there). 3-7-11-17-23-etc. for about 500 steps or so would be a nice, tightly coded segment that should fit into 16 KB or less.
d) Double dimension - linear, left to right and top to bottom.
e-f-g) The other three permutations of d).
h) A double-dimension variant of c).

7) Reboot between tests to be sure of clean startup conditions.

The content of what's put into and taken out of memory should be irrelevant. This sort of test can be cooked up by an experienced C coder in about an hour or two depending on how parametric and interactive it is designed to be.
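
To make that concrete, here is a rough draft of the core timing loop. It is a sketch only; the 64 MB working set, the single prime stride of 17, and the gettimeofday() timing are my placeholder choices, not fixed requirements:

/*
 * Rough draft of the kind of timing harness proposed above (a sketch,
 * not a finished test). Build with something like: gcc -O2 memtest.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define ARRAY_BYTES (64UL * 1024 * 1024)          /* 64 MB working set      */
#define N_WORDS     (ARRAY_BYTES / sizeof(long))  /* elements in the array  */
#define STRIDE      17                            /* prime stride, in words */

static double seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    long *a = malloc(ARRAY_BYTES);
    if (a == NULL) { perror("malloc"); return 1; }

    volatile long sink = 0;  /* stops the compiler from deleting the reads */
    unsigned long i;
    double t0;

    /* Fill time (memory writes), low to high -- guideline 5 / 6a. */
    t0 = seconds();
    for (i = 0; i < N_WORDS; i++)
        a[i] = (long)i;
    printf("fill  low->high : %.3f s\n", seconds() - t0);

    /* Read time, low to high (sequential, burst-friendly) -- 6a. */
    t0 = seconds();
    for (i = 0; i < N_WORDS; i++)
        sink += a[i];
    printf("read  low->high : %.3f s\n", seconds() - t0);

    /* Read time, high to low -- 6b. */
    t0 = seconds();
    for (i = N_WORDS; i-- > 0; )
        sink += a[i];
    printf("read  high->low : %.3f s\n", seconds() - t0);

    /* Read time, prime-stride walk -- a simplified stand-in for 6c;
       it defeats sequential bursts without needing any lookup tables. */
    t0 = seconds();
    for (i = 0; i < N_WORDS; i += STRIDE)
        sink += a[i];
    printf("read  stride-%d : %.3f s\n", STRIDE, seconds() - t0);

    free(a);
    return (int)(sink & 1);  /* touch sink so nothing gets optimized away */
}

Run it on each platform and RIMM configuration from the list above and compare the four numbers; the full version would sweep the word sizes and add the two-dimensional cases.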

Watch for serious performance step functions as memory block access size changes and either forces multi-fetch access or exceeds the burst capability of each technology.

I'll forward this idea to Van Smith for his consideration. It would produce very interesting test results.

For criticisms of the above suggested test code please email:

[email protected]

Spencer T. Kittelson
 
cpurdy

New Member
#11
The real problem is that Intel hasn't improved their chipset performance since the BX while forcing both chipset (e.g. 820) and memory prices (RDRAM) up significantly. So you pay more and get ... about the same.

If 820 etc. and RDRAM costs were in line with BX and SDRAM, and performance were significantly better, there would not be complaints. We KNOW that Intel can do better; their chipsets have consistently been good (basically, the best) performers for the PC platform, especially with regard to memory performance. That VIA can now match Intel's latest and greatest with a cheap chipset and regular SDRAM is a travesty (since it means that they are just now catching up with two-year-old Intel performance). Intel has been sitting still performance-wise while making its customers (ultimately us) pay more and more.

And by backing RDRAM so aggressively, when its costs are higher and performance is no better, they open up the question of what their motivation really is. Remember when other companies could compete in this area, before Intel decided that all technologies in a PC had to be licensed from them? (Processor bus and protocols, peripheral bus and protocols, serial bus and protocols, not to even mention the CPU instruction set.)

And Rambus doesn't make itself any more likable by patenting the work of open standards bodies like JEDEC. I believe that is called parasitical, not revolutionary.

RDRAM probably has a place in pre-built (non-expandable) systems like game-stations and PDAs (at least if power usage and heat dissipation drop) but it has no technical merits that qualify its existence in a PC today.

Cameron.
 
Tim King

New Member
#12
"The total SDRAM system latency is:

"40 + (2 x 10) + (3 x 10) = 90ns for PC100 SDRAM
45 + (2 x 7.5) + (3 x 7.5) = 82.5ns for PC133 SDRAM

"Surprisingly, due to the mismatch between its interface and core timing, the PC133 SDRAM latency is significantly higher than the PC100 SDRAM."

82.5 is higher than 90?
 
Moraelin

New Member
#13
This article is flawed in so many aspects, it's not even funny. Those theoretical numbers are good and fine, but they conveniently ignore other factors.

For example, the fact that those chunks of 16-bit data have to be packed back into 64-bit chunks that fit the CPU's bus. The CPU can't actually start receiving _anything_ until the RDRAM has already sent the fourth 16-bit transfer. (Back of the envelope, if PC800 really does move 16 bits per transfer at 800 MT/s, that's 1.25 ns per transfer, so roughly 5 ns just to assemble one 64-bit bus word.) This, too, adds to latency.

A second thing you conveniently ignore would be the increased latency when you have more than one RDRAM chip in your computer. I don't see that anywhere in your theoretical numbers.

The most obvious implication of this is: when you add more RDRAM to your system, your performance goes down. This can actually be seen in benchmarks.

However, there's more to this... to the best of my knowledge, neither SDRAM nor RDRAM has just one chip per stick of memory. Your stick of RDRAM will have more than one chip on it, resulting, from the start, in a latency much greater than those theoretical numbers.

Etc, etc, etc. And, as has been noted, real world benchmarks don't support those theoretical numbers of yours at all.

Briefly: it may be a "promising" technology, but here and now we only see promises from it. Nothing tangible, nothing useful, and definitely nothing worth the price. Who knows, it may or may not become good in some unforeseen future. That is, assuming DDR and other technologies don't keep moving faster than it. But until then, it's just hype and vapourware. No more, no less.
 
rett

New Member
#14
Originally posted by Dr. John:
Hi Sander,

snip

Finally, you did not even mention Double Data Rate memory, which has higher bandwidth and lower latency than SDRAM. It is also much cheaper than RDRAM.

Actually, in the conclusion, DDR was mentioned with a dismissal that was not warranted. What was not mentioned was QDR, or quad data rate, which has already been announced by one company. This also will be far less expensive than RDRAM, without any of the problems related to multiple modules. There is no scenario in which RDRAM can even begin to compete with QDR on price or performance. It is always going to be faster to transfer wider, especially when the width of memory access is not wider than the processor's memory bus. When that width is less, those highly pumped 16-bit pieces must be assembled into 64- or 128-bit pieces before they can be presented to the memory system. This requires both logic and timing (spell that: additional system latency). Now, consider that RDRAM causes motherboards to be completely redesigned, and you have an insuperable barrier to its introduction.

Now let us consider the little matter of your logic or lack thereof. Let me quote you:

"Measured at either the component or system level, RDRAMs have the fastest latency. Surprisingly, due to the mismatch between its interface and core timing, the PC133 SDRAM latency is significantly higher than the PC100 SDRAM. The RDRAM's low latency coupled with its 1.6 gigabyte per second bandwidth provides the highest possible sustained system performance."

"From a performance point of view we must note that L1 and L2 cache hits and misses contribute greatly to memory architecture performance. Also, individual programs vary in memory use and so have different impacts on its performance. For example, a program that uses random database search using a large chunk of memory will 'thrash' the caches, and the low latency of SDRAM will really shine. On the other hand, large sequential memory transfers with little requirement for CPU processing can easily saturate SDRAM bandwidth. RDRAM will have an advantage here with its higher bandwidth. For code that fits nicely within the L1/L2 caches, memory type will have virtually no impact at all."

Paragraph one contradicts paragraph two. You spend a whole article debunking the idea that RDRAM has greater latency, and then you conclude with a paragraph that goes back to the basic industry take that SDRAM has lower latency and RDRAM has higher bandwidth. Until you can come up with some tests that validate your guesses, I'll stick with those who have actually tested the hardware. There is a famous quote from Donald Knuth, from when he sent a piece of code to a friend: "Beware of bugs in the above code; I have only proved it correct, not tried it." It is fine to want to come up with a unique viewpoint. Actually having some facts to back that viewpoint will improve the likelihood that your readers will believe it.

Everett L.(Rett) Williams
[email protected]
 
dil

New Member
#15
Wow. I have been disgusted with this site in the past for publishing erroneous technical information, but this article has got to take the cake. It goes back and forth on which memory has less latency, deals purely with theoretical data, gives us zero real-world benchmark testing; the list goes on and on.

Let's go over the FACTS of Rambus vs. SDRAM:

1. In real world scenarios, Rambus has more latency EVERY TIME. Why else would Intel not utilize Rambus in its upcoming server chipsets? Because they realize servers use lots of memory chips and the more memory chips, the more Rambus latency.

2. Rambus cannot outperform PC133 SDRAM, so how is it going to fare against DDR????

3. Rambus is, and always will be, very expensive. In addition to the royalties, the yields of good chips are so low in production that the price has to be jacked way up to compensate. Especially with PC800 Rambus. This article failed to mention that the vast majority of Rambus chips out there are PC700.

4. The article implied that SDRAM was nearing the end of its useful life and Rambus is the way of the future. So why is Rambus the one that needs a heatsink in order to operate safely??? Something is wrong here, people!!

Hardware Central, if you are not going to back up your statements with real-world benchmarks, please leave the technical discussions to the pros at "that other website".
 
albafhear

New Member
#16
Why on earth would Intel sell a chipset/mainboard that uses a type of RAM that is expensive to buy? To pay $1,200 for PC800 128MB modules is nuts - the ordinary user cannot be expected to pay that amount.

Which probably explains their CC820 mainboard: basically the i820 chipset but with the MTH, which degrades system performance, so that PC100/PC133 RAM can be used.

I don't ever remember SDRAM prices being as high when it initially came out as RDRAM's are now. RDRAM is off to a very bad start: technically it is doubted, and price-wise it has tripped up before it's even got off the starting line.
 
burnsidb

New Member
#17
I was also disappointed in the article. I don't fully understand all the technical discussion posted above, but from the actual benchmarks I've seen, it seems that in SOME cases RDRAM is faster than SDRAM, and in other cases it is not! People mention DDR or QDR RAM. Good point. The article seems to tell me that if I only wait, the price will drop and it will be so much better. I would actually bet otherwise. I think that DDR or QDR or some other format will prove to be the best (speed for the money, if nothing else). I ask you: for 90% of us computer buyers, would we rather invest, say, a couple hundred bucks in a faster processor, or a thousand in [questionably] faster memory? (And I think I know which system would be faster in the real world!)
brian
 
robert inkol

New Member
#18
A few additional points:

The article criticizes DDR on the grounds of requiring 6- or 8-layer printed circuit boards. Doesn't the same objection apply to RDRAM?

I find the latency calculations more than a little suspect (worst-case timing for SDRAM?). In the processor bus and in the array of memory cells on the DRAM chips, data is handled in an essentially parallel fashion. Any time you do back-to-back parallel-to-serial and serial-to-parallel conversions, as RDRAM does, you have overhead, even if the serial transfer is quite fast. Since the timing associated with accessing the DRAM memory cells and associated circuitry on the chips themselves would likely be similar for the two types of memory, I find it difficult to believe that RDRAM could have an advantage. For an alternative viewpoint, I have an article from Hyundai that quotes latency times of 68 to 93 ns for RDRAM and 45 to 50 ns for PC100 SDRAM. The lower of the first two figures matches the one quoted in the article, while the second is barely half. The Hyundai figures seem more consistent with the actual performance.

A further observation is that the i820 and i840 chipsets benefit from a hub architecture that is supposed to eliminate internal bottlenecks, and from AGP 4x. If these "advanced" designs cannot show a significant advantage under favourable circumstances (premium PC800 RDRAM and 800 MHz+ processors), it is difficult to buy the arguments in favour of RDRAM. Conversely, the fact that Intel is selling motherboards with the i820 crippled by the memory translator hub needed for SDRAM interfacing raises questions as to whether the company is serious in its belief that main memory is a bottleneck requiring such a radical solution.

Robert Inkol
 
UncaMilty

New Member
#19
The Author writes:

>But let's be fair here; a couple of years
>ago we still used EDO RAM and SDRAM was
>something new and pretty expensive. In
>hindsight we've seen the benefits of using
>SDRAM and its impact on overall system
>performance. But if we look at the
>performance benefits SDRAM offered in its
>early days on applications that were then
>popular, it also didn't seem to offer huge
>advantages. Still, we've come a long way,
>and SDRAM performance has indeed improved
>although the technology at first didn't
>seem to promise that much of an
>improvement.

This is misleading. SDRAM was knocked because the "10ns" rating made it seem much faster than "60ns" EDO SIMMs, when in fact the speed difference was slight. The big advantage of SDRAM at the time was that you didn't need to install them in pairs; memory had caught up with bus width once again.

Once the early compatibility problems were dealt with, SDRAM became a better choice, and the prices dropped steadily (and they weren't as ridiculously overpriced as RDRAM is now, by comparison).

Thus the comparison is not valid. SDRAM was a good choice to succeed an aging memory standard that was quickly becoming inadequate to the task. SDRAM still has headroom, especially with the DDR variant, and a clear and steady upgrade path that does not involve the cost penalties that RDRAM does.

Also, the fact that Intel has not yet taken those 1 million shares doesn't mean that it isn't a major factor in their support for RDRAM. Those 1 million shares will be worth hundreds of millions of dollars if RAMBUS becomes the memory technology of the future, to say nothing of the additional monies to come from licensing, and the additional power that comes with control of the DRAM market.

Funny that the author accuses "other websites" of lacking objectivity...


------------------
Milton Teruel
www.uncamilty.com
 
RealityCheck

New Member
#20
"However, I tend to disagree, as neither have Intel's track
record of reliability and performance, its resources, or its customer support,"
What were you smoking? Intel chips have a DOCUMENTED history of being unreliable and buggy. NOT MINOR BUGS, even, but major ones, such as the 286 not being able to exit protected mode without a reboot, or the entire (PPro, P2 AND P3) Pentium line having problems ADDING.
Now, why you at first try to disassociate Intel from Rambus and then bring in a FALSE statement about Intel's reliability and customer support (they lied about the problem and then lied about its severity), I cannot fathom.
Rambus showed early promise which has not been borne out (in ACTUAL TESTS) and will probably be superseded shortly.

All pro-Intel/Rambus tests I have seen pit 800 MHz (another lie, as it's DDR 400) RDRAM against PC100 SDRAM instead of VC SDRAM, PC133 SDRAM, or DDR SDRAM (available soon). This is INHERENTLY dishonest. Hey, why didn't they just test it against FPM DRAM?
As to why Intel was pushing it even though it does not work as advertised: well, they like proprietary solutions (PCI bus, anyone?).