
Thread: T&L


  1. #1
    Join Date
    May 2000
    Posts
    87

    T&L

    Are more games really going to use T&L? I'm about to get a Rage Fury MAXX and I heard it was better than the GeForce.

  2. #2
    Join Date
    Aug 1999
    Location
    Norwich, UK
    Posts
    3,790
    Where did you hear that? The GTS is king at the moment, but it won't last long. T&L was all hyped up, but the GeForce still performs well without it, and better in some cases if you have a really fast CPU.

  3. #3
    Join Date
    Jun 2000
    Location
    mclean, va, USA
    Posts
    298
    The MAXX beats a GeForce SDR and matches a DDR at very high resolutions (1280x1024 plus) on very powerful systems (700MHz+). Otherwise the GeForce beats the MAXX.
    my computer sucks but my car stereo's better than yours!

  4. #4
    Join Date
    Dec 1999
    Location
    whitby/ cobourg, ont, canada
    Posts
    1,263
    Isn't T&L just a software thing that you activate when you play games with OpenGL? Also, the GeForce DDR with TV-out and all that stuff costs as much as the MAXX.
    duron 800@900 - asus a7v - 640mb pc133 - radeon 8500le 128mb - sb live - 16x dvd-rom - 48x24x48 burner

  5. #5
    Join Date
    May 2000
    Posts
    87
    No, you can get the MAXX at www.compuplus.com for only 84 bucks after rebates. Great price.

  6. #6
    Join Date
    Jan 2000
    Location
    USA
    Posts
    500
    Bladerunner: Under what circumstances does any CPU beat out hardware T&L? Every test I've seen shows the GeForce (GeForce 1, let alone the GF2) beating even 700 and 800MHz CPUs, and that's in artificial tests where the CPU does not have to do other things like sound mixing, artificial intelligence, and normal game physics. Take 3DMark 2000, set to SOFTWARE T&L for the GeForce, NOT hardware T&L: the GeForce lays down serious smack against 700-800MHz Coppermines, and of course there's no sound, no player input to parse, and no normal game physics. It would be even faster if MadOnion had done a correct implementation of the hardware T&L calls. Even Q3's T&L support was supposedly a quick hack thrown together, yet the GeForce absolutely dominates Q3 benchmarks even on megafast systems.

    The Voodoo 5 can't catch a GeForce SDR in tests that aren't fillrate or bandwidth limited ("fastest" settings), even with an ungodly fast CPU. Run r_subdivisions 0.01 on a GeForce and you lose only a tiny percentage of performance, where even a megafast CPU would lose 20%.

    Here's a page showing a 1.1GHz Tbird running FASTER than a 1.0GHz Tbird at Q3 fastest with a GF2. If it were limited by the T&L, fillrate, or bandwidth it would not go any faster, but it did. Can you say "CPU limited at 1 gigahertz"? Almost sickening, isn't it.
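
    For anyone wondering what work we're actually arguing about, here's a minimal sketch in plain C (my own illustration with made-up data structures, not code from any of these games or drivers) of the per-vertex transform-and-lighting loop. This is the part a hardware T&L card lifts off the CPU; the AI, sound, and physics stay on the CPU either way.

    [code]
    /* Hypothetical software T&L loop: transform each vertex by a 4x4
     * matrix, then compute simple diffuse lighting. Simplified: the
     * perspective divide is omitted and w is assumed to be 1. */

    typedef struct { float x, y, z; } Vec3;

    /* Transform a point by a column-major 4x4 matrix. */
    static Vec3 transform(const float m[16], Vec3 v)
    {
        Vec3 out;
        out.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12];
        out.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13];
        out.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14];
        return out;
    }

    /* N dot L diffuse term, clamped at zero. */
    static float diffuse(Vec3 normal, Vec3 light_dir)
    {
        float d = normal.x*light_dir.x + normal.y*light_dir.y
                + normal.z*light_dir.z;
        return d > 0.0f ? d : 0.0f;
    }

    /* Software T&L pass over a vertex array. At tens of thousands of
     * triangles per frame and 60fps, this loop body runs millions of
     * times a second, which is why offloading it matters. */
    void software_tnl(const float mvp[16], const Vec3 *pos,
                      const Vec3 *nrm, Vec3 light_dir,
                      Vec3 *out_pos, float *out_lit, int count)
    {
        for (int i = 0; i < count; i++) {
            out_pos[i] = transform(mvp, pos[i]);
            out_lit[i] = diffuse(nrm[i], light_dir);
        }
    }
    [/code]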

    And it's also important to realize that if, by some odd freak of nature, it is faster to run with T&L turned off, Nvidia can always release drivers with a checkbox to disable it. It is unlikely we'll see any extremely high-polygon games that don't support hardware T&L, though. And since the GeForce is currently the only hardware T&L card, who is going to design a game that runs slow on the GeForce's T&L? No one will surpass the GeForce's T&L capability until the GeForce is long gone and we're all running value cards that do 50 million polygons/sec. The same reason we lack T&L support now (developers design for low-grade systems) is why even the original GeForce's T&L will be plenty fast for a few years. Developers right now design for P2-350s and Voodoo 2s.

    The MAXX has some serious issues that you may want to hear about. Its AFR can cause stuttering at certain resolutions and settings; read a few reviews, because it won't show up in benchmarks. And if you want a speed comparison with the newest drivers and all, check out Anandtech's review of the GeForce2 MX. It shows the GeForce SDR, DDR, GTS, MX, Rage MAXX, TNT2 Ultra, and Voodoo 5 5500 (and some others, depending on the test) side by side in tons of Q3 settings, plus UT with a variety of CPUs. Unfortunately they didn't include the MAXX in the UT tests on the Athlon 750, only the P3-550E.

    Found another comparison of the Fury MAXX and the Geforce chips in UT. http://www.anandtech.com/showdoc.html?i=1249&p=8
    Seems like the MAXX runs UT slightly better once you get to a 700MHz or so CPU, and the gap probably widens to 5-10fps once you get to a 900MHz CPU. But it still loses big time in Q3, averaging around 10-15fps behind the DDR and sometimes even the SDR.

    Anyway... there is a ton of info out there. Check out Firingsquad, Anandtech, Tom's Hardware, etc.

    ------------------
    BP6, Celermine 566@850, 320meg, CL TNT2U 175/220
    BP6, P3-700@770 (for now), 320mb, eVGA MX Plus (dual CRT), waiting for VIVO module

  7. #7
    Join Date
    Dec 1999
    Location
    whitby/ cobourg, ont, canada
    Posts
    1,263
    Wow, the MAXX has come down a lot. The last time I checked it was $400 Canadian.
    duron 800@900 - asus a7v - 640mb pc133 - radeon 8500le 128mb - sb live - 16x dvd-rom - 48x24x48 burner

  8. #8
    Join Date
    Aug 1999
    Location
    Norwich, UK
    Posts
    3,790
    I am getting really, really sick of this damn posts-not-showing thing.

  9. #9
    Join Date
    Aug 1999
    Location
    Norwich, UK
    Posts
    3,790
    Freon

    I'm not sure you understood me, but that was probably my fault; I could have put it better.

    T&L still has very little real-world support and we are nearing the 3rd generation of T&L GPUs. I'm sorry, but I still feel it was over-hyped. I know you can argue the chicken-and-egg thing, that you need the hardware before anyone can make software for it, but I haven't seen the flood of new T&L titles yet. This is especially true while most recent games are based on the old Q2 engine or the almost equally old Unreal engine.

    I was trying to say the GTS is fast even without using T&L support, but doesn't always beat a system with a fast CPU. I put a GeForce 1 into a friend's AMD K6-2 400 system while my own system, a PIII 700 @ 988 running an overclocked V3 2000, went up against it. In Q3 (which I seem to remember only supports the lighting part of T&L anyway) the PIII system outperformed the AMD with the GeForce.

    What I was saying is that the GeForce will scale with the CPU more than people give it credit for. You can read as many benchmarks as you like, but it's how the game performs for you that matters in the end.

    With my GeForce 1 in the early days, when the drivers sucked the big one, I was getting high FPS in benchmarks and games, but the real-world performance was much less impressive: it was still jerky and stuttery, not smooth like the V3 it replaced.

    IMHO T&L was put into the cards too early, in much the same way 32-bit colour was with the TNT. It's a useful feature, but did we need it then? More memory bandwidth is what the card needs now.


  10. #10
    Join Date
    May 2000
    Posts
    87
    I'm pretty sure future games will give you a choice between using T&L or DirectX or something like that. And the MAXX is the only card under 100 dollars after rebate that has 64MB. I know there's a GeForce 2 with 64MB, but damn, it costs a lot.

  11. #11
    Join Date
    Jun 2000
    Location
    mclean, va, USA
    Posts
    298
    Also, if you really want T&L, you can get an S3 Savage 2000 (if and when S3 gets its driver act together), which apparently supports T&L. In my opinion the MAXX is still a great card for $84, since it does indeed have 64MB, great for games at high resolutions. Also, from what I've heard (if I'm wrong, Freon will correct me) the AFR stutter problems mainly occur at low resolutions and in very few games. UT, Q3 and Half-Life work great with the card.
    my computer sucks but my car stereo's better than yours!

  12. #12
    Join Date
    Aug 1999
    Location
    Norwich, UK
    Posts
    3,790
    Yes, but the MAXX has two processors, so it is 32MB per chip: 64MB total, yes, but split between the chips. And the GeForce 64MB versions use DDR SDRAM as opposed to the faster DDR SGRAM. They say it's because Infineon, the only maker of DDR SGRAM for video cards, can't make the chips in high enough density to pack 64MB onto one card.

    Ummmmm, OK, but I wonder if it has more to do with supply issues and cost factors.

    Either way there is little real-world gain to having 64MB on a GeForce; bandwidth is the issue.
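
    To put rough numbers on the bandwidth point, here's a back-of-the-envelope sketch. The clocks below are the commonly quoted ones for these cards, so treat them as assumptions rather than spec-sheet fact; peak bandwidth is just bus width times memory clock times transfers per clock.

    [code]
    #include <stdio.h>

    /* Peak memory bandwidth in GB/s: bus_bits wide, mhz memory clock,
     * pumps transfers per clock (1 = SDR, 2 = DDR). */
    static double peak_gbps(int bus_bits, double mhz, int pumps)
    {
        return (bus_bits / 8.0) * mhz * 1e6 * pumps / 1e9;
    }

    int main(void)
    {
        /* Commonly quoted clocks (assumptions; check your card): */
        printf("GeForce SDR  (128-bit, 166MHz SDR): %.1f GB/s\n",
               peak_gbps(128, 166, 1));   /* ~2.7 GB/s */
        printf("GeForce DDR  (128-bit, 150MHz DDR): %.1f GB/s\n",
               peak_gbps(128, 150, 2));   /* ~4.8 GB/s */
        printf("GeForce2 GTS (128-bit, 166MHz DDR): %.1f GB/s\n",
               peak_gbps(128, 166, 2));   /* ~5.3 GB/s */
        return 0;
    }
    [/code]

    Whatever the exact clocks, note that doubling the memory size changes none of these numbers, which is the point: the card is starved for bandwidth long before it runs out of 32MB.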

  13. #13
    Join Date
    Jan 2000
    Location
    USA
    Posts
    500
    Bladerunner: "I swapped out a Geforce 1 into a friends AMD K6 2 400 system with my system a PIII 700 @ 988 running an O/c'd V3 2000, In Q3 (which only supports the lighting part of T&L anyway I seem to remember)the PIII system out performed the AMD with the geforce. "

    Well, duh. It's called being CPU limited. If you put a GeForce 2 in an AMD K6, then swap it out for a GeForce 1, you'll see no difference at low resolutions because the CPU is still limiting you. It can be further proven that T&L is NOT the limit in this instance by upping the geometry detail: raise it by, say, 20% and you'll see almost no difference in speed, showing that it's NOT the T&L that is holding you back.

    It's transformation and lighting, not transformation, lighting, artificial intelligence, sound mixing, physics, input parsing and filtering, memory fetching, and multitasking. The CPU is the bottleneck.

    There is a huge difference between being T&L limited and being CPU limited, and you're mistaking one for the other. Also, your example of K6-2 + GeForce versus P3 + Voodoo 3 is a poor one because you're changing two variables at once. And even taking your example for what it is, it doesn't prove the T&L is being stressed at all.
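
    To make the distinction concrete, here's a toy model, not a measurement; every millisecond figure below is invented purely for illustration. Treat the frame time as roughly whichever pipeline stage is slowest (real pipelines overlap stages, so this is an approximation), and watch what 20% more geometry does when the CPU stage dominates.

    [code]
    #include <stdio.h>

    /* Toy frame-time model: the frame takes as long as its slowest
     * stage. All millisecond figures are invented for illustration. */
    static double frame_ms(double cpu_ms, double tnl_ms, double fill_ms)
    {
        double worst = cpu_ms;
        if (tnl_ms  > worst) worst = tnl_ms;
        if (fill_ms > worst) worst = fill_ms;
        return worst;
    }

    int main(void)
    {
        /* CPU-limited case: game logic costs 20ms, T&L only 4ms. */
        double base  = frame_ms(20.0, 4.0, 6.0);
        /* 20% more geometry grows only the T&L stage... */
        double dense = frame_ms(20.0, 4.0 * 1.2, 6.0);

        printf("baseline:      %.0f ms (%.0f fps)\n", base,  1000.0 / base);
        printf("+20%% geometry: %.0f ms (%.0f fps)\n", dense, 1000.0 / dense);
        /* ...so the frame rate doesn't budge: the CPU stage still
         * dominates, i.e. you were CPU limited, not T&L limited. */
        return 0;
    }
    [/code]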

    And actually I'm pretty sure Quake 3 uses only the transformation part and not the lighting. The lighting side of the GeForce's T&L is not very fast; I doubt we'll ever see it used unless the NV20 makes a major stride towards much faster lighting.

    ------------------
    BP6, Celermine 566@850, 320meg, CL TNT2U 175/220
    BP6, P3-700@770 (for now), 320mb, eVGA MX Plus (dual CRT), waiting for VIVO module

  14. #14
    Join Date
    Sep 1999
    Location
    Staten Is.,NY
    Posts
    12,153
    It is deceiving for the MAXX to claim 64MB when only one chip is working at any given time, going back and forth but never together. A marketing ploy that someone should sue them for. Who, me? Maybe.
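
    For reference, a simplified sketch of how AFR (alternate frame rendering) hands out work on a two-chip board. The texture-duplication detail is my understanding of why 2 x 32MB doesn't behave like one 64MB pool, so treat it as an assumption rather than ATI's documented design.

    [code]
    #include <stdio.h>

    /* Simplified sketch of AFR on a two-chip board like the Rage Fury
     * MAXX: chips take turns on whole frames, and (as I understand it)
     * textures are duplicated into each chip's local 32MB, so capacity
     * doesn't add up to a single 64MB pool. */
    #define NUM_CHIPS 2

    static void render_on_chip(int chip, int frame)
    {
        printf("frame %d -> chip %d (local 32MB)\n", frame, chip);
    }

    int main(void)
    {
        for (int frame = 0; frame < 6; frame++) {
            render_on_chip(frame % NUM_CHIPS, frame);
        }
        /* Because consecutive frames come from different chips, they
         * can be presented at uneven intervals: that is the "AFR
         * stutter" mentioned earlier in the thread. */
        return 0;
    }
    [/code]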

  15. #15
    Join Date
    Nov 1998
    Location
    Winston-Salem,NC, US
    Posts
    289
    Huhhhh, boy, you guys are losing me here, so I'll throw my 2 cents in. It seems the L part of T&L can be done faster with the CPU; 3DMark2000 shows this. I won't go into specifics, www.hardocp.com did, but the polygon throughput is much, much higher with a fast CPU than with the GeForce. Of course this is using a GeForce in 3DMark, so who knows. The GTS starts catching up, maybe surpasses it, but the SDR and DDR are just left behind when using the CPU for T&L. www.riva3d.com has some of these scores posted showing the polygon tests.

    And as far as T&L utilization goes, OpenGL always seems to get some benefit from it. Look at a Voodoo and a GeForce at lower resolution and color depth: the GeForce is on the back stretch before the Voodoo leaves the gate. That's T&L in action, no fillrate expectations so no limitations. T&L in D3D seems non-existent except for 3DMark, so big whoop-de-do. UT can't be using it efficiently, and older games seem dead to it. Unless the rendering in D3D is somehow still bottlenecking it at lower res, which may be possible. Compare OpenGL to D3D games, what's the difference? OGL has always run faster for some reason (not just the Quake series). Where does Glide fall?
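
    My guess at why OpenGL picks up T&L "for free" (my reading, not something from the reviews above): in OpenGL's fixed-function pipeline the game just hands the driver its matrices and lights, so the driver can route that work to a hardware T&L unit without the game changing a line. A minimal sketch, assuming the application has already created a GL context:

    [code]
    #include <GL/gl.h>

    /* Minimal fixed-function OpenGL setup. The application only
     * declares matrices and lights; whether they are evaluated on the
     * CPU or by a hardware T&L unit is the driver's decision, which is
     * why OpenGL titles can benefit from T&L without being rewritten. */
    void setup_tnl_state(const float modelview[16],
                         const float projection[16])
    {
        /* w = 0 makes this a directional light. */
        static const float light_dir[4] = { 0.0f, 0.0f, 1.0f, 0.0f };

        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(projection);

        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(modelview);

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_POSITION, light_dir);
    }
    [/code]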
