Bladerunner: Under what circumstances does any CPU beat out hardware T&L? Every test I've seen shows the Geforce (Geforce 1, let alone the GF2) beating out even 700 and 800 MHz CPUs, even in artificial tests where the CPU doesn't have to do other things like sound mixing, artificial intelligence, normal game physics, etc. Take 3DMark 2000 for instance (set to SOFTWARE T&L for the Geforce, NOT hardware T&L): it shows the Geforce laying down serious smack against 700-800 MHz Coppermines. And of course there's no sound, no player input to parse, no normal game physics, etc. It would be even faster still if MadOnion created a correct implementation of the hardware T&L calls. Even Q3's T&L support was supposedly a quick hack thrown together, but the Geforce absolutely dominates Q3 benchmarks even on megafast systems. The Voodoo 5 can't catch a Geforce SDR in tests that aren't fillrate or bandwidth limited ("fastest" settings), even with an ungodly fast CPU. Run r_subdivisions 0.01 on a Geforce and you lose only a tiny percentage of performance, where even a megafast CPU would lose 20%.
Here's a page showing a 1.1 GHz Tbird running FASTER than a 1.0 GHz Tbird at Q3 fastest with a GF2. If it were limited by the T&L, fillrate, or bandwidth, it would not go any faster, but it did. Can you say "CPU limited at 1 gigahertz"? Almost sickening, isn't it?
And it's also important to realize that if by some odd freak of nature it is faster to run with T&L turned off, Nvidia can always release drivers with a check box to turn it off. It is unlikely we'll see any extremely high polygon games that don't support hardware T&L, though. And since the Geforce is currently the only hardware T&L card, who is going to design a game that will run slow on the Geforce's T&L? No one will surpass the Geforce's T&L capability until the Geforce is long gone and we're all running value cards that do 50 million polygons/sec. For the same reason that we lack T&L support now (i.e. developers design for low-grade systems), even the original Geforce's T&L will be plenty fast for a few years. Developers right now design for P2-350s and Voodoo 2s.
The MAXX has some serious issues that you may want to hear about. Its AFR can cause stuttering at certain resolutions and settings. Read a few reviews, because it won't show up in benchmarks. And if you want a speed comparison with the newest drivers and all, check out Anandtech's review of the Geforce2 MX. It shows the Geforce SDR, DDR, GTS, MX, Rage MAXX, TNT2 Ultra, and Voodoo 5 5500 (and some others, depending on the test) side by side in tons of Q3 settings, and UT with a variety of CPUs. Unfortunately they didn't include the MAXX in the UT tests on the Athlon 750, only the P3-550E.
Found another comparison of the Fury MAXX and the Geforce chips in UT. http://www.anandtech.com/showdoc.html?i=1249&p=8
Seems like the MAXX runs UT slightly better once you get to a 700 MHz or so CPU, and the gap probably widens to 5-10 fps once you get to a 900 MHz CPU. But it still loses big time in Q3, averaging around 10-15 fps behind the DDR and sometimes even the SDR.
Anyway... there is a ton of info out there. Check out Firingsquad, Anandtech, Tom's Hardware, etc.
I'm not sure you understood me, but that was probably my fault; I could have put it better.
T&L still has very little real-world support, and we are nearing the 3rd generation of T&L GPUs. I'm sorry, but I still feel it was overhyped. I know you can argue the chicken-and-egg thing, that you need the hardware before you can make software for it, but I haven't seen the flood of new T&L titles yet. This is especially true while most recent games are based on the old Q2 engine or the almost equally old Unreal engine.
I was trying to say that the GTS is fast even without T&L support, but doesn't always beat a system with a fast CPU. I swapped a Geforce 1 into a friend's AMD K6-2 400 system, against my system, a PIII 700 @ 988 running an O/C'd V3 2000. In Q3 (which I seem to remember only supports the lighting part of T&L anyway), the PIII system outperformed the AMD with the Geforce.
What I was saying is that the Geforce will scale with the CPU more than people give it credit for. You can read as many benchmarks as you like, but it is how the game performs for you that matters in the end.
With my Geforce 1 in the early days, when the drivers sucked the big one, I was getting high FPS in benchmarks and games but the real-world performance was much less impressive; it was still jerky and stuttery, not smooth like the V3 it replaced.
IMHO T&L was implemented into the cards too early, in much the same way 32-bit colour was with the TNT. It's a useful feature, but did we need it then? More memory bandwidth is what the card needs now.
I'm pretty sure future games will have a choice between using T&L or DirectX or something like that, and the MAXX is the only card under 100 dollars after rebate that has 64 MB. I know there's a Geforce 2 that has 64 MB, but damn, it costs a lot.
Also, if you really want T&L you can get an S3 Savage 2000 (if and when S3 gets its driver act together), which apparently supports T&L. In my opinion the MAXX is still a great card for $84, since it does indeed have 64 MB, great for games at high resolutions. Also, from what I've heard (if I'm wrong Freon will correct me) the AFR stutter problems mainly occur at low resolutions and in very few games. UT, Q3 and Half-Life work great with the card.
my computer sucks but my car stereo's better than yours!
Yes, but the MAXX has two processors, so it is 32 MB per chip: 64 MB, yes, but split. And the Geforce 64 MB versions use DDR SDRAM as opposed to the faster DDR SGRAM. They say it's because Infineon, the only maker of DDR SGRAM for video cards, can't make the chips in high enough density to pack 64 MB onto one card.
Ummmmmm, Ok but I wonder if it has more to do with supply issues and cost factors.
Either way, there is little real-world gain to having 64 MB on a Geforce; bandwidth is the issue.
Bladerunner: "I swapped a Geforce 1 into a friend's AMD K6-2 400 system, against my system, a PIII 700 @ 988 running an O/C'd V3 2000. In Q3 (which I seem to remember only supports the lighting part of T&L anyway), the PIII system outperformed the AMD with the Geforce."
Well, duh. It's called being CPU limited. If you put a Geforce 2 in an AMD K6 system, then swap it out for a Geforce 1, you'll see no difference at low resolutions because the CPU is still limiting you. It can be further proven that the T&L is NOT the limit in this instance by upping the geometry detail. Upping the geometry detail by, say, 20% will cause almost no difference in speed, showing that it's NOT the T&L that is holding you back.
It's transformation and lighting, not transformation, lighting, artificial intelligence, sound mixing, physics, input parsing and filtering, memory fetching, and multitasking. The CPU is the bottleneck.
There is a huge difference between being T&L limited and being CPU limited. You're mistaking one for the other. Also, your example of K6-2 + Geforce and P3 + Voodoo 3 is very poor because you're comparing two variables at once. And even if you take your example for what it is, that doesn't prove T&L is getting stressed at all.
And actually, I'm pretty sure Quake 3 uses only the transformation and not the lighting. The lighting part of the Geforce's T&L is actually not very fast. I doubt we'll ever see it used unless the NV20 makes a major stride toward making the lighting much faster.
It is deceiving when the MAXX claims 64 MB of processing power when only one chip is working at any given time, going back and forth but never together. A marketing ploy that someone should sue them for. Who, me? Maybe.
Huhhhh, boy, you guys are losing me here, so I'll throw my 2 cents in. Seems the L part of T&L can be done faster with the CPU; 3DMark2000 proves this. Won't go into specifics, www.hardocp.com did, but the polygon throughput is much, much higher with a fast CPU than with the GeForce. Of course this is using the GeForce in 3DMark, so who knows. The GTS starts catching up, maybe surpassing, but the SDR and DDR are just left behind when using the CPU for T&L. www.riva3d.com has some of these scores posted showing the polygon tests.
And as far as T&L utilization goes, OpenGL seems to always get some benefit from it. Look at a Voodoo and a GeForce at lower res and color depth: the GeForce is on the back stretch before the Voodoo leaves the gate. That's T&L in action; no fillrate expectations, so no limitations. T&L in D3D seems non-existent except for 3DMark, so big whup-de-do. UT can't be using it efficiently, and older games seem dead to it. Unless the rendering in D3D is somehow still bottlenecking it at lower res, which may be possible. Compare OpenGL to D3D games, what's the difference? OGL has always run faster for some reason (not just the Quake series). Where does Glide fall?