Well, I can see why some people take you the wrong way; you do come over as arrogant at times.
What I was saying is that the GeForce was touted as the card that does it all and doesn't need a fast CPU. All I originally said was that it scales far more with CPU speed than it was given credit for at first. T&L is still not that important; list me all the upcoming games that require it.
I admire that you research a lot of facts, but you need to have and use these things in a real-world environment to find out how they really perform. As I said, on paper my GeForce 1 looked super, but that was far from the physical truth.
I get this all the time in my job with paper engineers: on paper it works fine, put it on the car and it goes slower! Oh.
I was answering the original question: the GeForce is fast without T&L support, will be better with a really fast CPU, and is faster than the MAXX. Simple as that.
Also bear in mind that T&L, even in the rare cases where it's supported, only really matters at low resolutions and colour depths. E.g., if you want to play Q3 at 800x600x16, with all the eye candy turned off (which is what "fastest" means), OK, T&L is the way to go.
However, at 1280x768x32, or the 1600x1200 I've heard some people brag about, fill rate becomes the limit. If you can play at that res, thank your GTS's memory bandwidth, not its T&L. At that point almost any P3 or Athlon will be just as good as the hardware T&L. It doesn't have to be, strictly speaking, faster than the hardware T&L; it just needs to finish before the renderer does, which isn't that hard.
And for the MX, T&L will likely make even less of a difference, since its memory will hit the ceiling at half the bandwidth. Hence, you don't even have to go to insane resolutions before T&L stops making any actual difference.
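The fill-rate argument above can be sketched as back-of-envelope arithmetic. All figures here (effective fill rate, overdraw factor) are illustrative assumptions, not measured specs for any particular card:

```python
# Back-of-envelope sketch of the "fill rate becomes the limit" argument.
# The 600 Mpixel/s figure and the overdraw factor are assumptions for
# illustration only, not measured numbers for a GTS or any other card.

def fps_ceiling(width, height, fill_rate_mpix, overdraw=3.0):
    """Rough fps limit imposed by fill rate alone (ignores CPU, T&L, bandwidth)."""
    pixels_per_frame = width * height * overdraw
    return fill_rate_mpix * 1_000_000 / pixels_per_frame

# Assumed effective fill rate of ~600 Mpixels/s for a GTS-class card.
for w, h in [(800, 600), (1280, 1024), (1600, 1200)]:
    print(f"{w}x{h}: ~{fps_ceiling(w, h, 600):.0f} fps ceiling")
```

Whatever the exact numbers, the shape of the result is the point: the pixel-pushing ceiling drops fast as resolution climbs, long before triangle throughput becomes the bottleneck.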
Moraelin -- the proud member of the Idiots' Guild
Yeah, I heard from Nvidia that the MX will cost around $119, which is a really good price, but the only company I know of so far that's making an MX is Guillemot, and it's thinking about selling it in the $150 price range.
Maybe I took this the wrong way, but it sounded like you were saying that a fast enough CPU (say 800-1000 MHz) is faster at T&L than a GeForce. That is different from saying that the GeForce "doesn't need a fast CPU" or that the GeForce doesn't scale with CPU speed. Sorry if I misunderstood you.
"Seems the L part of T&L can be done faster with the CPU, 3DMark2000 proves this. "
In this case using the so-called "software T&L" setting, the Geforce does the transformation and the CPU does the lighting. The comparison is to use "hardware T&L" where the Geforce does both. BUT, you must understand that 3D Mark 2000's poor hardware T&L calls are at least partly responsible for the difference. Nvidia responded to this article and explained that 3D Mark 2000's hardware T&L code was horrible. With proper T&L calls, the GF can annihilate the fastest CPUs at T&L.
That said, even without proper T&L code and just set to "software T&L", the GeForce flies through the high-poly test, beating out every other video card regardless of CPU by a large margin. So what does 3D Mark 2000 REALLY prove? 1) If you're going to write a program to use hardware T&L, you have to do it right or just leave the option out; 2) even without special hardware T&L calls, the GeForce picks up the transformation duties and does them INCREDIBLY fast.
For a good example of PROPER hardware T&L, check out the Reverend's new GF2 review. In MDK2, using proper T&L calls (using both the T and the L of the GF2), it totally blows the snot out of a P3-800. And this is a REAL game with physics, AI, input parsing, etc. to process.
So basically what one can conclude is that 1) 3D Mark 2000's so-called "hardware T&L" is very poorly implemented, 2) at least the GF2 (possibly the GF1) is faster at T&L than a P3-800 in a real game, and 3) you must take synthetic benchmarks for what they are and take their flaws into account.
Also, before someone tries to say otherwise, it would be a complete fallacy to say that 3D Mark 2000's poor hardware T&L calls are going to be representative of upcoming games. Who is going to write hardware T&L that DOESN'T run well on a GeForce? The GeForce is the only card with T&L to program for. If a game developer couldn't get it to run well, they'd remove the option and just do the lighting in software. Kind of a "do it right or don't do it at all."
On the MX: I think the $119 quoted by Nvidia was supposed to be the street price and not the MSRP. And once a few more manufacturers release their own versions of the MX, the price will drop fast, as witnessed by the < $250 Guillemot and Leadtek GF2 cards. Every time another manufacturer jumped in, the price dropped 5%.
Still, at $150 it's a pretty good deal. The first overclocking review I've seen shows it to be a great overclocker, but obviously that depends mostly on how the RAM overclocks, and they may have gotten lucky with fast RAM. Maybe we'll see Creative put the same RAM as their TNT2 Ultra on their MX, which overclocks to 220+ MHz easily. Even for a value card you'd think they could run 183 or 200 MHz SDR. I'd pay an extra $20 for 5ns RAM...
Omicron777: Yes, good T&L support is starting to show, and it is growing rapidly. I gather from the Reverend's review that MDK2's T&L is supposed to be a huge leap even with a P3-800.
Apology accepted, and it was half my fault for the way I worded my first post. I still think it was implemented too early, as a bucketload of T&L isn't going to help the main problem with the GeForce, and that's memory bandwidth. Let's be serious: we buy these fast graphics cards to be able to run high resolutions at acceptable frame rates.
Binning the T&L and giving me some better solution to the bandwidth problem would impress me more at present.
My other point is that you can quote facts and figures till you are blue in the face, but using the stuff is the best way to see how it works. How often have I seen one site give a piece of hardware rave reviews while another says the opposite? At least one of them must be wrong.
OmiCRon777: Just because you happen to have two games that support T&L is hardly reason to get excited. I did say full implementation, so that removes Q3, and do the others support it fully to a GeForce standard? I do believe it will be a useful feature eventually.
Who cares if it has full implementation?? I'm sure we all know the T&L processor is a really big T and a little L. With a really complex game with much more going on than just level geometry, even the SLIGHTEST implementation of T&L would make a difference. Face it: it's not useless and it's here to stay, and in a few months you'll have an @ss load of games that support it and run way faster than any V5-5500 will be able to run them.
You really should read my replies before answering, as nowhere did I say it was useless. I said it was implemented in the cards too early. The SDR GeForce is what, 8 to 9 months old, and would have been better off with a bandwidth solution than T&L at that stage. The GeForce will have been through three generations of GPU before T&L is of any real consequence.
If 3dfx ever gets a V5 6000 out, it will walk all over a GeForce because it will not be bandwidth limited. Yes, it may not have hardware T&L or as high a fill rate, but it will have the power to run high res without the memory problems of the GeForce design, which is the restricting factor of the single-processor design.
[This message has been edited by BladeRunner (edited 07-04-2000).]
Well, the SDR is already an obsolete card. How could Nvidia possibly have fixed the bandwidth issue?? They already use the fastest memory on the commercial market that's affordable to everyone. 3dfx's method sucks: so you have two chips each accessing its own 32MB buffer of RAM? On paper it sounds impressive, with two data channels and all, but it really doesn't work nearly as fast as the GTS and is only marginally faster than DDR at really high resolutions (12x10 and up).

The 6000, I will bet my house, will not come close to walking all over the GTS. It may be a little faster, but it has four 32MB buffers; the card will seriously choke when you try to load more than 32MB of textures, whereas the 64MB GTS (even with DDR SDRAM) *SHOULD* be able to beat the 6000. And let's not even get into the price. By the time 3dfx gets their act together and releases the damn card, the NV20 will be only a little way off. If 3dfx had done a simultaneous release of all the VSA-100 cards, it would have seriously dominated the market, but they didn't.
...just my two bytes
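For what it's worth, the peak-bandwidth arithmetic behind the SDR/DDR/VSA-100 comparison can be sketched like this. The clock speeds are the commonly quoted figures for these cards; treat them (and the naive per-chip summing) as assumptions:

```python
# Rough peak-bandwidth arithmetic for the cards discussed above.
# Clocks are the commonly quoted figures; treat them as assumptions,
# and note the VSA-100 sums are naive (textures are duplicated per pool).

def bandwidth_gb_s(clock_mhz, bus_bits, pumps=1):
    """Peak memory bandwidth: clock * bus width * transfers per clock."""
    return clock_mhz * 1e6 * (bus_bits / 8) * pumps / 1e9

sdr_geforce = bandwidth_gb_s(166, 128)           # GeForce 256 SDR
ddr_geforce = bandwidth_gb_s(150, 128, pumps=2)  # GeForce 256 DDR
gts         = bandwidth_gb_s(166, 128, pumps=2)  # GeForce2 GTS
vsa100_chip = bandwidth_gb_s(166, 128)           # one VSA-100 with its own pool
v5_5500     = 2 * vsa100_chip                    # naive sum over 2 chips
v5_6000     = 4 * vsa100_chip                    # naive sum over 4 chips

for name, bw in [("GeForce SDR", sdr_geforce), ("GeForce DDR", ddr_geforce),
                 ("GeForce2 GTS", gts), ("V5 5500 (2 chips)", v5_5500),
                 ("V5 6000 (4 chips)", v5_6000)]:
    print(f"{name}: ~{bw:.1f} GB/s peak")
```

The naive sum is exactly where the argument bites: each VSA-100 only sees its own 32MB pool, so the aggregate number overstates what a single frame's worth of textures can actually use.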
My GeForce DDR and PIII/933 provide some awesome graphics - should keep me happy for a few years! Now, I grew up on Atari 2600 and "Kings Quest" on a 2-floppy IBM-PC, so I may be easily satisfied, but enjoy the moment while you enjoy the debate!
The real feature of the Voodoo 5 6000 will be nearly free 2X FSAA. Yes, I think you'd be right to say that at 1024x768x32bit, the 6000 will perform very similarly to the GTS in a majority of games. But you'll be able to turn on 2X FSAA with virtually no performance loss at that resolution. And 4X FSAA at 1024x768x16 will still fly right along at 60-70 fps.
But 1024x768x32bit with 4X FSAA will still be bandwidth limited, just like the Voodoo 5 5500 is at 1024x768x32bit with 2X FSAA.
I think in the end the extra $300 blown on the 6000 only buys you an extra notch of FSAA. Bump up from no FSAA to 2X, or from 2X to 4X, and you'll see the same performance as the 5500. Anyway, getting way off topic...
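The supersampling cost behind those FSAA numbers is easy to sketch: each FSAA notch multiplies the pixels actually rendered, which is why 4X at 1024x768x32 runs back into the bandwidth wall. The arithmetic below is illustrative, assuming straightforward supersampling:

```python
# Sketch of the supersampling cost of FSAA: each sample multiplies the
# pixels actually rendered per frame. Assumes plain supersampling FSAA.

def effective_pixels(width, height, fsaa=1):
    """Pixels rendered per frame under fsaa-times supersampling."""
    return width * height * fsaa

base = effective_pixels(1024, 768)          # no FSAA
x2   = effective_pixels(1024, 768, fsaa=2)  # 2X FSAA: twice the rendering work
x4   = effective_pixels(1024, 768, fsaa=4)  # 4X FSAA: same pixel load as 2048x1536

print(f"no FSAA: {base}, 2X: {x2}, 4X: {x4}")
```

So 4X FSAA at 1024x768 costs as much fill and framebuffer traffic as rendering 2048x1536 without FSAA, which is why an extra FSAA notch eats roughly the same headroom as a resolution jump.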