Bladerunner: Under what circumstances does any CPU beat out hardware T&L? Every test I've seen shows the Geforce (the Geforce 1, let alone the GF2) beating even 700 and 800 MHz CPUs, and that's in artificial tests where the CPU doesn't have to do anything else, like sound mixing, artificial intelligence, or normal game physics. Take 3DMark 2000, for instance: the Geforce's hardware T&L scores lay down serious smack against 700-800 MHz Coppermines running the same test set to software T&L, and that's with no sound, no player input to parse, no normal game physics, etc. It would be even faster still if MadOnion had done a correct implementation of the hardware T&L calls. Even Q3's T&L support was supposedly a quick hack thrown together, yet the Geforce absolutely dominates Q3 benchmarks even on megafast systems. The Voodoo 5 can't catch a Geforce SDR in tests that aren't fillrate or bandwidth limited ("fastest" settings), even with an ungodly fast CPU. Run r_subdivisions 0.01 on a Geforce and you lose only a tiny percentage of performance, where even a megafast CPU would lose 20%.
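To be clear about what's being offloaded, here's a rough sketch in C of the kind of per-vertex grinding software T&L dumps on the CPU. The struct layout, the single directional light, and the function names are just made up for illustration, not anybody's actual engine code:

    /* Rough sketch of the per-vertex work a software T&L path does on the
       CPU, every vertex, every frame: a 4x4 transform plus lighting math.
       Everything here is an illustrative assumption, not real engine code. */
    #include <stdio.h>

    typedef struct { float x, y, z, w; } Vec4;
    typedef struct { float m[4][4]; } Mat4;   /* row-major 4x4 */

    /* Transform one vertex by the combined modelview-projection matrix:
       16 multiplies and 12 adds right there. */
    static Vec4 transform(const Mat4 *mvp, Vec4 v)
    {
        Vec4 r;
        r.x = mvp->m[0][0]*v.x + mvp->m[0][1]*v.y + mvp->m[0][2]*v.z + mvp->m[0][3]*v.w;
        r.y = mvp->m[1][0]*v.x + mvp->m[1][1]*v.y + mvp->m[1][2]*v.z + mvp->m[1][3]*v.w;
        r.z = mvp->m[2][0]*v.x + mvp->m[2][1]*v.y + mvp->m[2][2]*v.z + mvp->m[2][3]*v.w;
        r.w = mvp->m[3][0]*v.x + mvp->m[3][1]*v.y + mvp->m[3][2]*v.z + mvp->m[3][3]*v.w;
        return r;
    }

    /* Clamped diffuse term against one directional light (a dot product per
       vertex), and that's before clipping and the perspective divide. A
       hardware T&L card does all of this in its geometry engine instead. */
    static float diffuse(Vec4 n, Vec4 light)
    {
        float d = n.x*light.x + n.y*light.y + n.z*light.z;
        return d > 0.0f ? d : 0.0f;
    }

    int main(void)
    {
        Mat4 identity = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
        Vec4 v = { 1.0f, 2.0f, 3.0f, 1.0f };
        Vec4 n = { 0.0f, 1.0f, 0.0f, 0.0f };
        Vec4 light = { 0.0f, 1.0f, 0.0f, 0.0f };
        Vec4 t = transform(&identity, v);
        printf("transformed (%.1f, %.1f, %.1f), diffuse %.1f\n",
               t.x, t.y, t.z, diffuse(n, light));
        return 0;
    }

Multiply that by tens of thousands of vertices a frame and you can see why a dedicated geometry engine pulls ahead of even a fast Coppermine.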
Here's a page showing a 1.1 GHz Tbird running FASTER than a 1.0 GHz Tbird in Q3 at fastest settings with a GF2. If it were limited by T&L, fillrate, or bandwidth, it would not go any faster, but it did. Can you say "CPU limited at 1 GHz"? Almost sickening, isn't it.
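Just to spell out that logic with made-up numbers (a sketch, not the actual figures from that page):

    /* Back-of-the-envelope check for "CPU limited": if frame rate scales with
       CPU clock while the card and settings stay the same, the GPU isn't the
       bottleneck. The fps values below are placeholders, not real results. */
    #include <stdio.h>

    int main(void)
    {
        double clock_a = 1000.0, fps_a = 110.0;   /* hypothetical 1.0 GHz run */
        double clock_b = 1100.0, fps_b = 118.0;   /* hypothetical 1.1 GHz run */

        double clock_gain = clock_b / clock_a - 1.0;   /* +10% clock */
        double fps_gain   = fps_b   / fps_a   - 1.0;

        /* If most of the clock gain shows up as extra frames, the CPU is
           still the limit; if the test were fillrate, bandwidth, or T&L
           limited, fps_gain would sit near zero. */
        printf("clock +%.0f%%, fps +%.1f%% -> %.0f%% of the clock gain showed up\n",
               clock_gain * 100.0, fps_gain * 100.0,
               100.0 * fps_gain / clock_gain);
        return 0;
    }

If the extra 10% of clock mostly turns into extra frames, the card still has headroom, i.e. CPU limited at 1 GHz.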
And it's also important to realize that if, by some odd freak of nature, it turns out faster to run with T&L turned off, Nvidia can always release drivers with a check box to turn it off. It is unlikely we'll see any extremely high polygon games that don't support hardware T&L, though. And since the Geforce is currently the only hardware T&L card, who is going to design a game that runs slow on the Geforce's T&L? No one will surpass the Geforce's T&L capability until the Geforce is long gone and we're all running value cards that do 50 million polygons/sec. The same reason we lack T&L support now (developers design for low-grade systems) is the reason even the original Geforce's T&L will be plenty fast for a few years. Developers right now design for P2-350s and Voodoo 2s.
The MAXX has some serious issues that you may want to hear about. Its AFR (alternate frame rendering) can cause stuttering at certain resolutions and settings; read a few reviews, because it won't show up in average-fps benchmarks. And if you want a speed comparison with the newest drivers and all, check out Anandtech's review of the Geforce2 MX. It shows the Geforce SDR, DDR, GTS, MX, Rage MAXX, TNT2 Ultra, and Voodoo 5 5500 (and some others, depending on the test) side by side in tons of Q3 settings, plus UT with a variety of CPUs. Unfortunately they didn't include the MAXX in the UT tests on the Athlon 750, only the P3-550E.
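Here's the gist of why average fps hides AFR stutter, with made-up frame times (not measured MAXX numbers):

    /* Why AFR stutter hides in benchmarks: average fps only looks at total
       time over total frames, so a card alternating 10 ms / 30 ms frames
       scores the same as one delivering a steady 20 ms. Illustrative
       numbers only. */
    #include <stdio.h>

    static void report(const char *name, const double *ms, int n)
    {
        double total = 0.0, worst = 0.0;
        for (int i = 0; i < n; i++) {
            total += ms[i];
            if (ms[i] > worst) worst = ms[i];
        }
        printf("%s: avg %.1f fps, worst frame %.0f ms\n",
               name, 1000.0 * n / total, worst);
    }

    int main(void)
    {
        double steady[4]    = { 20, 20, 20, 20 };   /* single-chip card */
        double alternate[4] = { 10, 30, 10, 30 };   /* AFR ping-ponging */
        report("steady   ", steady, 4);
        report("alternate", alternate, 4);
        return 0;
    }

Both "cards" average the same 50 fps, but the second one hitches to 30 ms every other frame, which you feel in the game even though the benchmark number looks identical.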
Found another comparison of the Fury MAXX and the Geforce chips in UT.
http://www.anandtech.com/showdoc.html?i=1249&p=8
Seems like the MAXX runs UT slightly better once you get to a 700 MHz or so CPU, and the gap probably widens to 5-10 fps once you get to a 900 MHz CPU. But it still loses big time in Q3, averaging around 10-15 fps behind the DDR and sometimes even the SDR.
Anyway... there is a ton of info out there. Check out Firingsquad, Anandtech, Tom's Hardware, etc.
------------------
BP6, Celermine
[email protected], 320meg, CL TNT2U 175/220