Lies, Damned Lies, and a Different Perspective

Thread: Lies, Damned Lies, and a Different Perspective


  1. #1
    Join Date
    Mar 2000
    Location
    HardwareCentral
    Posts
    181

    Lies, Damned Lies, and a Different Perspective

    Intel and Rambus have been taking quite a beating from a couple of other websites, although objectivity did not rank very high in those articles. We took it upon ourselves to take another look at Rambus and some of the other issues in the industry. This report has all the details.

    http://www.hardwarecentral.com/hardw.../reports/1686/

  2. #2
    Join Date
    Apr 2000
    Location
    N. Potomac, MD US
    Posts
    2
    Hi Sander,

    I noticed that your performance section was entirely theoretical, without any real-world benchmarks. Your article would have made a much bigger impact if benchmarks had been used to prove your points. Based on the benchmarks we have seen, it would not be possible to show that Rambus actually outperforms SDRAM, so maybe theory is all you have to go with.

    Finally, you did not even mention Double Data Rate memory, which has higher bandwidth and lower latency than SDRAM. It is also much cheaper than RDRAM.

    Rambus will never catch on in the PC industry for one simple reason: everything in computers is getting less expensive, and consumers won't go for a high-priced alternative unless it offers very distinct advantages, which Rambus does not.

    Dr. John
    http://www.kickassgear.com

  3. #3
    Join Date
    Jul 1999
    Location
    berkley, Mi, USA
    Posts
    748
    Speed is good when talking about RAM, no argument there. BUT is selling your soul and first-born child worth it, just so you can buy the RAM? It would all depend on what I need the RAM for.

    ------------------
    thinking...

    damn does that hurt
    Give me fuel, give me fire, give me that which I desire

  4. #4
    Join Date
    Apr 2000
    Posts
    1
    I quote:

    "The total SDRAM system latency is:

    40+(2x10)+(3x10)=90ns for PC100
    45+(2x7.5)+(3x7.5)=82.5ns for PC133...

    ...the RDRAM system latency is:

    38.75+10+18.75=67.5ns for PC800...

    Measured at either the component or system level, RDRAMs have the fastest latency."

    Ok, fair enough. So, I've been misinformed, then, when I'm told that RDRAM has higher latency than SDRAM. Or at least, that view is wrong in the face of the official definition for memory latency. However, the article goes on to say, and again I quote:

    "For example, a program that uses random database search using a large chunk of memory will 'thrash' the caches, and the low latency of SDRAM will really shine."

    Didn't we just finish learning that RDRAM has lower latency than SDRAM?? How does this make any sense? I don't pretend to fully understand all of the technical information in this article, but I'll hazard a guess... I've heard it said in many places that the good old i440BX chipset has very low overhead when it comes to getting info into and out of memory.

    If RDRAM actually has lower latency than SDRAM at the component level, then any difficulties imposed on its latency must appear somewhere else... and I think Intel's new chipsets may be the culprits (i820, i840). At the very least, many have said that they have exceptionally high overhead, especially when compared to that displayed by the 440BX. I have no idea if this is actually the case, if anyone can shed some light here, please do so.

    Another confusion... the example task given for SDRAM, a "random database search". What the hell is a "random" search? It's my understanding that when you're searching for something, you a) know what you're searching for, and b) perform the search in the most efficient manner possible!! From the small amount I know about programming and search engines, I don't think anybody anywhere would ever code anything to search anything completely at random. It makes no sense.

    One of the reasons RDRAM is getting a bad rap is its poor benchmarks on games (i'm sure you've all seen the article on Tom's Hardware Guide by now). Let's face it, games drive a significant (perhaps even THE significant) portion of the high-end home PC market. And games don't access large chunks of data from memory in a sequential fashion, they grab whatever it is they need whenever they need, wherever in memory it happens to be. Which is important to me, as I am a gamer first and foremost.

    Now, eventually, game programmers may optimize their use of memory to take advantage of RDRAM's ability to grab large chunks. Until this happens, however, SDRAM systems (and most likely DDR-SDRAM systems, when they appear) will post better frame rates than RDRAM systems. And better frame rates = better sales in the PC enthusiast market. Granted, that's not the be-all and end-all of the computer industry, by far, but I can tell you it's the only part of it that I care about when I'm shopping for parts.

    So what's my point? The theoretical implications of this article are all fine and dandy, but until a quality RDRAM chipset appears (IMHO, there hasn't been one nearly up to snuff yet) I won't be buying one, nor would I recommend one to anybody else. As I believe has already been said, theory isn't nearly as important as real-world performance.

  5. #5
    Join Date
    Apr 2000
    Location
    Wallaceburg Ontario Canada
    Posts
    16
    Sander:

    I'd like to follow up on Dr. John's post with the same idea but a different tack. He points out that you don't include any benchmarks to prove your assertion that RDRAM is better than SDRAM. I would also point out that you should have been a bit more specific as to why you felt the other sites you mention were not being objective when they critiqued RDRAM. I don't feel like naming other sites here, since I think that's not very polite to you, but a certain hardware guide run by "Tom" not only posted an initial statement as to why they thought Rambus was a bunch of hype (along with tests), but was also good enough to respond to attacks by Rambus supporters on the original article with a follow-up containing still more benchmark tests and a point-by-point reply to their critics. (Van Allen was the writer of both articles, by the way, not Tom.)

    Just because sites criticize a technology you may like doesn't make them biased or non-objective. You make generalizing statements about certain sites, charging that Intel wants to get our money or that Rambus is owned in part by Intel, but I don't ever recall reading that in any of the articles I have seen. What I've seen are actual concrete tests showing Rambus isn't all it's been made out to be. Be less vague next time when making this type of claim.

  6. #6
    Join Date
    Apr 2000
    Location
    Lansdale, PA
    Posts
    1
    What the article so pointedly left out is that, due to the current RDRAM architecture, it gets hot and so must power down to a 'standby' state when not in active use. When it is needed, it has to 'power up' again, causing more of a delay. Now, when someone shrinks RDRAM to, say, .18-.20 micron dies, then power consumption will be lower, heat generation will be lower, and perhaps it will no longer need a standby mode. When that is the case, I will be on the RDRAM bandwagon, but not until that point.

    ------------------
    -- Mind your P's & Q's

    [This message has been edited by Peas-n-Ques (edited 04-06-2000).]

  7. #7
    Join Date
    Jun 1999
    Location
    Unknown
    Posts
    10

    I find an excess of theory and an absence of real-world engineering facts in your article.

    I would very much like someone here to study, in a VLSI simulator if you have no benchmarking setup, the effects of a cache miss on the performance of RDRAM. You might reverse your most erroneous conclusions. Mine is that it would take 8-channel (all we have now is dual-channel) PC1600 (all we have now is PC800) RDRAM in order to outperform DDR SDRAM running at 150 MHz.

    Remember that the physical path the electrons must take through RDRAM is many times (8x-10x) as long as for SDRAM and DDR SDRAM. This causes a huge latency problem, despite all the theoretical blather to the contrary.

    All performance increases in personal computing have recently come from the concept of doing more work in fewer cycles. I can hardly believe that anyone would argue for a technology that does exactly the opposite.

    At a grand per 128 MB RIMM, the case for Rambus memory technology is quite thin. Let us revisit this a few years from now, when the price is somewhat affordable.

    Karsten Hendrick
    karstenhendrick@yahoo.com

  8. #8
    Join Date
    Apr 2000
    Location
    Wallaceburg Ontario Canada
    Posts
    16
    My addendum to this message is: if Hardware Central and you, Sander, have a specific problem with some of the benchmarks that "Tom's" site used, you should have stated the reasons why their benchmarks were flawed, rather than charging unnamed sites with being biased or not objective. I use that site as the main example because everyone else seems to be basing their information on his benchmarks. His site seems to have been the most thorough in looking at the RDRAM question.



  9. #9
    Join Date
    Apr 2000
    Location
    manila
    Posts
    2
    Dear S.,

    I, and maybe the whole world, might just think that buying RAM that costs more than your whole system is not a smart thing to do, especially when it only runs on Intel chips. For me, I'll get an Alpha or a high-end Athlon (not a costly 'Cumine') for my PC.

  10. #10
    Join Date
    Apr 2000
    Location
    Sioux Falls, SD USA
    Posts
    3
    Sander,

    The theory is nice, but where are the real-world numbers? There are zero examples to be found anywhere of RMBS consistently beating PC133 (or even PC100).

    I propose that you, or someone else with a little C experience, write up the simplest of test cases and run tests on PC100, PC133 and i820/i840 using PC800 Rambus memory.

    Here are the suggested guidelines (a minimal code sketch follows the list):

    1) Use a Unix or Linux system in single-user mode so that you can be SURE there are ZERO background processes messing with the test.

    2) Write simple C code that builds large memory arrays of 10-100 MBytes that are filled and then accessed.

    3) Use various access (word) sizes in a straight linear progression from 1 byte to 64 bytes. The purpose is to stress performance on small word sizes, odd word sizes (broken boundaries with the odd-numbered sizes > 4 or 8 bytes), medium word sizes and large word sizes. (A data plot should show discrete performance steps in it.)

    4) Test using a single RIMM, two RIMMs and three RIMMs. (This will highlight the latency increase in the daisy-chain Rambus design.)

    5) Measure both the fill times (memory writes) and access times (memory reads). (Writing to memory is often 30% or more of the workload in a heavily loaded server system with lots and lots of buffer caches for disk I/O, DBMS, etc.)

    6) Time the fill and access speeds of the various memory array sizes as:

    a) Single dimension - linear, low to high
    b) Single dimension - linear, high to low
    c) Single dimension - prime-number incremental address change (from a simple repeatable loop segment, to be sure it gets into cache and stays there). 3-7-11-17-23-etc. for about 500 steps or so would be a nice tightly coded segment that should fit into 16 KB or less.
    d) Double dimension - linear, left to right and top to bottom
    e-f-g) The other three permutations of d)
    h) Double dimension variant of c)

    7) Reboot between tests to be sure of clean startup conditions.

    The content of what's put into and taken out of memory should be irrelevant. This sort of test can be cooked up by an experienced C coder in an hour or two, depending on how parametric and interactive it is designed to be.

    Watch for serious performance step functions as the memory block access size changes and either forces multi-fetch access or exceeds the burst capability of each technology.
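    To make this concrete, here is a minimal sketch of the fill/read timing for a few of the patterns above. It assumes a POSIX system (gettimeofday() for timing); the 64 MB block size and the stride of 7 are placeholder values, and it covers only a subset of the cases listed. Treat it as a starting point, not a finished benchmark.

    /*
     * memtest.c - minimal memory fill/read timing sketch.
     * Assumes a POSIX system; block size and stride are placeholders.
     * Build with, e.g.:  cc -O2 memtest.c -o memtest
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    #define ARRAY_BYTES (64UL * 1024UL * 1024UL)  /* 64 MB test block (placeholder) */
    #define STRIDE      7                         /* prime stride for the non-linear pass */

    static double seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return (double)tv.tv_sec + (double)tv.tv_usec / 1e6;
    }

    int main(void)
    {
        size_t n = ARRAY_BYTES / sizeof(unsigned long);
        unsigned long *a = malloc(n * sizeof(unsigned long));
        volatile unsigned long sink = 0;  /* keeps reads from being optimized away */
        double t0, t1;
        size_t i;

        if (a == NULL) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        /* Fill pass: linear, low to high (memory writes). */
        t0 = seconds();
        for (i = 0; i < n; i++)
            a[i] = (unsigned long)i;
        t1 = seconds();
        printf("linear fill : %.3f s (%.1f MB/s)\n", t1 - t0,
               (double)ARRAY_BYTES / (t1 - t0) / 1e6);

        /* Read pass: linear, low to high. */
        t0 = seconds();
        for (i = 0; i < n; i++)
            sink += a[i];
        t1 = seconds();
        printf("linear read : %.3f s (%.1f MB/s)\n", t1 - t0,
               (double)ARRAY_BYTES / (t1 - t0) / 1e6);

        /* Read pass: linear, high to low. */
        t0 = seconds();
        for (i = n; i-- > 0; )
            sink += a[i];
        t1 = seconds();
        printf("reverse read: %.3f s (%.1f MB/s)\n", t1 - t0,
               (double)ARRAY_BYTES / (t1 - t0) / 1e6);

        /* Read pass: prime stride, to defeat simple sequential bursting. */
        t0 = seconds();
        for (i = 0; i < n; i += STRIDE)
            sink += a[i];
        t1 = seconds();
        printf("stride read : %.3f s\n", t1 - t0);

        (void)sink;
        free(a);
        return 0;
    }

    Run it once per memory configuration (single RIMM, two RIMMs, three RIMMs, or the SDRAM equivalents) and compare the timings; the word-size sweep and the two-dimensional patterns from the list above would be added on top of this skeleton.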

    I'll forward this idea to Van Smith for his consideration. It would produce very interesting test results.

    For criticisms of the above suggested test code, please email:

    spencertk@abasys.com

    Spencer T. Kittelson

  11. #11
    Join Date
    Apr 2000
    Posts
    2
    The real problem is that Intel hasn't improved their chipset performance since the BX while forcing both chipset (e.g. 820) and memory prices (RDRAM) up significantly. So you pay more and get ... about the same.

    If 820 etc. and RDRAM costs were in line with BX and SDRAM and performance were significantly better, there would not be complaints. We KNOW that Intel can do better; their chipsets have consistently been good (basically, the best) performers for the PC platform, especially with regard to memory performance. That VIA can now match Intel's latest and greatest with a cheap chipset and regular SDRAM is a travesty (since it means that they are just now catching up with two-year-old Intel performance). Intel has been sitting still performance-wise while making its customers (ultimately us) pay more and more.

    And by backing RDRAM so aggressively, when its costs are higher and its performance is no better, they open up the question of what their motivation really is. Remember when other companies could compete in this area, before Intel decided that all technologies in a PC had to be licensed from them? (Processor bus and protocols, peripheral bus and protocols, serial bus and protocols, not to even mention the CPU instruction set.)

    And Rambus doesn't make itself any more likable by patenting the work of open standards bodies like JEDEC. I believe that is called parasitical, not revolutionary.

    RDRAM probably has a place in pre-built (non-expandable) systems like game-stations and PDAs (at least if power usage and heat dissipation drop) but it has no technical merits that qualify its existence in a PC today.

    Cameron.

  12. #12
    Join Date
    Apr 2000
    Posts
    1
    "The total SDRAM system latency is:

    "40 + (2 x 10) + (3 x 10) = 90ns for PC100 SDRAM
    45 + (2 x 7.5) + (3 x 7.5) = 82.5ns for PC133 SDRAM

    "Surprisingly, due to the mismatch between its interface and core timing, the PC133 SDRAM latency is significantly higher than the PC100 SDRAM."

    82.5 is higher than 90?

  13. #13
    Join Date
    Feb 2000
    Location
    Deutschland
    Posts
    936
    This article is flawed in so many aspects, it's not even funny. Those theoretical numbers are good and fine, but they conveniently ignore other factors.

    For example, the fact that those chunks of 16-bit data have to be packed back into 64-bit chunks that fit the CPU's bus. The CPU can't actually start receiving _anything_ until the RDRAM has already sent the fourth packet. This, too, adds to latency.
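    To put a rough number on it (assuming a single 16-bit Rambus channel at 800 MT/s, i.e. 1.25ns per transfer): assembling one 64-bit word takes four transfers, or 4 x 1.25 = 5ns, before the CPU sees the first complete word.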

    A second thing you conveniently ignore would be the increased latency when you have more than one RDRAM chip in your computer. I don't see that anywhere in your theoretical numbers.

    The most obvious implication of this is: when you add more RDRAM to your system, your performance goes down. This can actually be seen in benchmarks.

    However, there's more to this... to the best of my knowledge, neither SDRAM nor RDRAM has just one chip per stick of memory. Your stick of RDRAM will have more than one chip on it, resulting from the start in a latency much greater than those theoretical numbers.

    Etc, etc, etc. And, as has been noted, real world benchmarks don't support those theoretical numbers of yours at all.

    Briefly: it may be a "promising" technology, but here and now we only see promises from it. Nothing tangible, nothing useful, and definitely nothing worth the price. Who knows, it may or may not become good in some unforeseen future. That is, assuming DDR and other technologies don't keep moving faster than it. But until then, it's just hype and vapourware. No more, no less.
    Moraelin -- the proud member of the Idiots' Guild

  14. #14
    Join Date
    Apr 2000
    Posts
    1
    Originally posted by Dr. John:
    Hi Sander,

    snip

    Finally, you did not even mention Double Data Rate memory, which has higher bandwidth and lower latency than SDRAM. It is also much cheaper than RDRAM.

    Actually, in the conclusion, DDR was mentioned with a dismissal that was not warranted. What was not mentioned was QDR, or quad data rate, which has already been announced by one company. This, too, will be far less expensive than RDRAM, without any of the problems related to multiple modules. There is no scenario in which RDRAM can even begin to compete with QDR on price or performance. It is always going to be faster to transfer wider, especially when the width of memory access is no wider than the processor's memory bus. When that width is less, those highly pumped 16-bit pieces must be assembled into 64- or 128-bit pieces before they can be presented to the memory system. This requires both logic and timing (spell that: additional system latency). Now, consider that RDRAM requires motherboards to be completely redesigned and you have an insuperable barrier to its introduction.

    Now let us consider the little matter of your logic or lack thereof. Let me quote you:

    "Measured at either the component or system level, RDRAMs have the fastest latency. Surprisingly, due to the mismatch between its interface and core timing, the PC133 SDRAM latency is significantly higher than the PC100 SDRAM. The RDRAM's low latency coupled with its 1.6 gigabyte per second bandwidth provides the highest possible sustained system performance."

    "From a performance point of view we must note that L1 and L2 cache hits and misses contribute greatly to memory architecture performance. Also, individual programs vary in memory use and so have different impacts on its performance. For example, a program that uses random database search using a large chunk of memory will 'thrash' the caches, and the low latency of SDRAM will really shine. On the other hand, large sequential memory transfers with little requirement for CPU processing can easily saturate SDRAM bandwidth. RDRAM will have an advantage here with its higher bandwidth. For code that fits nicely within the L1/L2 caches, memory type will have virtually no impact at all."

    Paragraph one contradicts paragraph two. You spend a whole article debunking the idea that RDRAM has greater latency, and then you conclude with a paragraph that goes back to the basic industry take that SDRAM has lower latency and RDRAM has higher bandwidth. Until you can come up with some tests that validate your guesses, I'll stick with those who have actually tested the hardware. There is a famous quote from Donald Knuth, from when he sent a piece of code to a friend: "I have only proved it correct, not actually tested it." It is fine to want to come up with a unique viewpoint. Actually having some facts to back that viewpoint up will improve the likelihood that your readers will believe it.

    Everett L.(Rett) Williams
    rett@gvtc.com

  15. #15
    Join Date
    Apr 2000
    Posts
    4
    Wow. I have been disgusted with this site in the past for publishing erroneous technical information, but this article has got to take the cake. It goes back and forth on which memory has less latency, deals purely with theoretical data, gives us zero real world benchmark testing, the list goes on and on.

    Let's go over the FACTS of Rambus vs. SDRAM:

    1. In real world scenarios, Rambus has more latency EVERY TIME. Why else would Intel not utilize Rambus in its upcoming server chipsets? Because they realize servers use lots of memory chips and the more memory chips, the more Rambus latency.

    2. Rambus cannot outperform PC133 SDRAM, so how is it going to fare against DDR????

    3. Rambus is, and always will be, very expensive. In addition to the royalties, the yields of good chips are so low in production that the price has to be jacked way up to compensate. Especially with PC800 Rambus. This article failed to mention that the vast majority of Rambus chips out there are PC700.

    4. The article implied that SDRAM was nearing the end of its useful life and Rambus is the way of the future. So why is Rambus the one that needs a heatsink in order to operate safely??? Something is wrong here, people!!

    Hardware Central, if you are not going to back up your statements with real-world benchmarks, please leave the technical discussions to the pros at "that other website".
