What an anti-climax. Even with the driver increases it looks like it will only be enough to level the playing field against the R350. I can't believe this FX thingy has the gall to hog two card slots and not even raise its game. For the same price you could have two Radeon 9700NPs hogging the slots in SLI mode.
If you've got your money for nothing, who cares if the chicks are free!
Talk about a disappointment. Without FSAA it barely outperforms the already old 9700 Pro, and with FSAA it's actually slower. And even if they squeezed another 10% or so with driver optimizations, it would _still_ be slower in 4x FSAA mode.
I'm most definitely not impressed, especially at that price. I'd probably buy that kind of previous-generation performance for 200 bucks, but most certainly not for 659.
Let's see if ATI can get the 350 out the door in time.
Hell, this gives ATI even more time to release whatever it is they want to release and make it twice as good. Why put out another product when the current one kills Nvidia's new card? To me that just hurts yourself. And if they do put anything else out, I don't think it should be anything but a core/memory speed increase.
Maybe soon Nvidia will make these 128-bit DDR-II cards their high-end value line, then put 256-bit DDR-II on the cards above them and make those their high-end high-end cards. What idiots Nvidia were for putting in 128-bit DDR-II. Surely they realized it? Hell, anyone here could have told them not to do that; it's common sense. Is there just a shortage of it, or did they just like the 128-bit better?
I suggest you check out the [H] review first - good stuff from Brent as usual.
Interesting that despite the noise, the toasty operating temperature, mediocre overclocking potential and ridiculous cost, it probably still doesn't take the performance crown from the 400MHz+ "custom" 9700 Pros that can be had for a lot less and come with passive cooling.
This is pretty funny (from B3D):
Originally posted by Dave H: We knew Tom Pabst was good for something: FXFlow sound clips :!:
Originally posted by Ruffian: What idiots Nvidia were for putting in 128-bit DDR-II. Surely they realized it? Hell, anyone here could have told them not to do that; it's common sense. Is there just a shortage of it, or did they just like the 128-bit better?
You can keep trace density on the PCB the same and double the bandwidth by switching from DDR-I to DDR-II on a 128-bit bus, hence making PCB design logistics slightly more manageable.
The problem is that despite that, the NV30 uses a twelve-layer PCB with its 128-bit bus, and it retails at a price much higher than most people are willing to pay. The 9700 Pro uses an 8-layer board, has a 256-bit memory bus and costs a lot less. Not to mention the fact that it's been available for months now.
One thing worth noting is that 1 GHz memory may sound impressive -- which I suppose is why Nvidia's marketroids throw that number around -- but 1 GHz on a 128-bit bus is the same bandwidth as 500 MHz on a 256-bit bus. Remembering that DDR means double data rate, a 256-bit card only needs 250 MHz memory to match the GF FX's memory bandwidth. Needless to say, even the non-pro 9700 has more memory bandwidth than that. The Pro actually has almost 25% more.
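The bandwidth comparison works out like this (a back-of-the-envelope sketch; the memory clocks used are the commonly quoted figures for these cards, so treat them as assumptions):

```python
# Peak memory bandwidth: (bus width in bytes) x (effective transfer rate).
# Clock figures below are the commonly quoted ones, not verified specs.

def bandwidth_gb_s(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s (GB = 1e9 bytes)."""
    return (bus_bits / 8) * (effective_mhz * 1e6) / 1e9

gffx      = bandwidth_gb_s(128, 1000)  # 500 MHz DDR-II -> 1 GHz effective
r9700     = bandwidth_gb_s(256, 540)   # 270 MHz DDR -> 540 MHz effective
r9700_pro = bandwidth_gb_s(256, 620)   # 310 MHz DDR -> 620 MHz effective

print(f"GF FX:    {gffx:.2f} GB/s")       # 16.00 GB/s
print(f"9700:     {r9700:.2f} GB/s")      # 17.28 GB/s
print(f"9700 Pro: {r9700_pro:.2f} GB/s")  # 19.84 GB/s
print(f"Pro advantage over GF FX: {r9700_pro / gffx - 1:.0%}")  # 24%
```

Which is where the "almost 25% more" figure comes from: even the plain 9700 edges out the 1 GHz number on half the bus width.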
Another thing I find sort of interesting is that the 500 MHz GPU barely outperforms a 325 MHz GPU. Heck, even the increase over a 4600 doesn't seem to match the GPU speed increase. A 500 MHz GPU should have performed _much_ faster, but somehow it doesn't. At a very wild guess, either Nvidia took lessons from Intel and made a GPU that's actually slower per clock than a GF4, or we're seeing precisely that it's held back by memory bandwidth.
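The per-clock argument can be illustrated with a quick normalization -- note the frame rates below are made-up placeholders to show the method, not benchmark results:

```python
# Per-clock "efficiency": fps divided by core clock.
# The fps values here are HYPOTHETICAL placeholders, not measurements.

def fps_per_mhz(fps, core_mhz):
    return fps / core_mhz

gffx_eff  = fps_per_mhz(100, 500)  # hypothetical GF FX score at 500 MHz
r9700_eff = fps_per_mhz(95, 325)   # hypothetical 9700 Pro score at 325 MHz

# If the 500 MHz part scaled with clock, its per-MHz figure would roughly
# match the 325 MHz part's; a much lower figure points at a bandwidth
# (or architectural) bottleneck rather than raw GPU speed.
print(f"GF FX:    {gffx_eff:.3f} fps/MHz")  # 0.200 fps/MHz
print(f"9700 Pro: {r9700_eff:.3f} fps/MHz")  # 0.292 fps/MHz
```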
I.e., I wouldn't expect a whole lot of black magick from driver optimizations. Unless some miracle happens -- e.g., it really has a Kyro-style tiled renderer inside that someone forgot to enable -- it's very likely that most (or even all) of the memory access optimization tricks are already in effect. If they weren't, it would already be trailing behind the 9700 Pro.
And that's not even taking into account that Nvidia's FSAA, as I expected, still looks like crap. Judging by those pics, I'm guessing it would take at least 16x AA to get comparable quality out of a GF FX to what I'm getting on the 9700 Pro. At which point, I doubt any kind of driver optimization would keep the frame rate anywhere near playable.
To keep it fair, though, at another wild guess, where it could still be competitive is 16-bit rendering, especially since a 9700 Pro can't do any AA at all in 16-bit modes. Either way, _if_ that 500 MHz GPU really is held back by memory bandwidth, then it should perform one helluva lot faster in 16-bit colour. (Though why anyone would buy a $659 graphics card to play in 16-bit is another good question.)