BFG PhysX and Performance Updates

by Derek Wilson on 5/17/2006 5:00 AM EST

67 Comments

  • yanyorga - Monday, May 22, 2006 - link

    Firstly, I think it's very likely that there is a slowdown due to the increased number of objects that need to be rendered, giving credence to the apples/oranges argument.

    However, I think it is possible to test where the bottlenecks are. As someone already suggested, testing in SLI would show (to some extent) whether there is an increased GPU load. Also, if you test using a board whose second GPU slot runs at only 8x and put a single GPU in that slot, you will be left with at least 8x of spare bus bandwidth. You could also experiment with various overclocking options, focusing on the multipliers and the bus.

    Is there any info anywhere on how to use the PPU for physics, or any development software that makes use of it?
  • Chadder007 - Friday, May 26, 2006 - link

    That makes me wonder why City of Villains was tested with the PPU at 1500 debris objects and compared to software at 422 debris objects. AnandTech needs to go back and test WITH a PPU at 422 debris objects to compare it to software-only mode and see if there is any difference.
  • rADo2 - Saturday, May 20, 2006 - link

    Well, people now have a pretty hard time justifying spending $300 on a decelerator.

    I am afraid, however, that Ageia will be more than willing to "slow down a bit" their future software drivers to show some real-world "benefits" of their decelerator. By adding more features to their SW (CPU) emulation, they may very well slow it down, so that new reviews will finally put their HW in first place.

    But those reviews will still mean nothing, as they will compare Ageia SW drivers, intentionally made to perform badly, with their HW.

    Ageia PhysX is a totally wrong concept. Havok FX can do the same via SSE/SSE2/SSE3 and/or SM 3.0 shaders, and it can also use dual-core CPUs. That is the future and the right approach, not an additional slow, noisy card.

    Ageia's approach is just nonsense and stupid marketing.
  • Nighteye2 - Saturday, May 20, 2006 - link

    Do not take your fears to be facts. I think Ageia's approach is the right one, but it'll need to mature - and to actually get used. The concept is good, but the execution so far is still a bit lacking.
  • rADo2 - Sunday, May 21, 2006 - link

    Well, I think Ageia's approach is the worst possible one. If game developers are able to distribute threads between a single-core CPU and the PhysX decelerator, they should be able to use dual-core CPUs and/or SM3.0 shaders for just the same thing. That is the right approach. With quad-core CPUs they will be able to use 4 cores, and within 5-6 years about 8 cores, etc. The PhysX decelerator is a wrong direction; it is useful only for a very limited portfolio of calculations, while the CPU can do them as well (probably even faster).

    I definitely do NOT want to see Ageia succeed.
  • Nighteye2 - Sunday, May 21, 2006 - link

    That's wrong. I tested it myself running CellFactor without the PPU on my dual-core PC. Even without the liquid and cloth physics, large explosions with a lot of debris still caused large slowdowns, and it stayed slow until most of the flying debris stopped moving.

    On videos I saw of people playing with a PPU, slowdowns also occurred but lasted only a fraction of a second.

    Besides that, the CPU is also needed for AI, and it does not have enough memory bandwidth to do proper physics. If you want to get really detailed, hardware physics on a dedicated PPU is the best way to go.
  • DigitalFreak - Thursday, May 18, 2006 - link

    Don't know how accurate this is, but it might give the AT guys some ideas...

    HardForum: http://www.hardforum.com/showthread.php?t=1056037
  • Nighteye2 - Saturday, May 20, 2006 - link

    I tried it without the PPU - and there are very noticeable slowdowns when things explode and lots of crates are moving around. And that's starting from 25 FPS with no moving objects. I imagine the performance hits at higher framerates will be even bigger - at least without the PPU.
  • Clauzii - Thursday, May 18, 2006 - link

    The German site Hartware.de showed this in their test:

    Processor Type: AGEIA PhysX
    Bus Technology: 32-bit PCI 3.0 Interface
    Memory Interface: 128-bit GDDR3 memory architecture
    Memory Capacity: 128 MByte
    Memory Bandwidth: 12 GBytes/sec.
    Effective Memory Data Rate: 733 MHz
    Peak Instruction Bandwidth: 20 Billion Instructions/sec
    Sphere-Sphere collision/sec: 530 Million max
    Convex-Convex(Complex) collisions/sec.: 533,000 max

    If graphics are moved to the card, the 12 GB/s of memory bandwidth will be limiting, I think :)
    Would be nice to see the PhysX RAM @ the specced 500MHz, just to see if it has anything to do with that issue.
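
    For what it's worth, the 12 GB/s figure is just the 128-bit bus times the effective data rate from the list above - a quick back-of-envelope check (spec-sheet numbers only, nothing measured):

        #include <cstdio>

        int main() {
            // Back-of-envelope check of the spec sheet numbers quoted above.
            const double bus_width_bytes   = 128.0 / 8.0;  // 128-bit GDDR3 interface
            const double effective_rate_hz = 733e6;        // 733 MHz effective data rate
            const double bandwidth_gb_s    = bus_width_bytes * effective_rate_hz / 1e9;
            printf("Peak memory bandwidth: %.1f GB/s\n", bandwidth_gb_s);  // ~11.7, i.e. the quoted 12 GB/s
            return 0;
        }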
  • Clauzii - Thursday, May 18, 2006 - link

    Not test - preview, sorry.
  • hatsurfer - Thursday, May 18, 2006 - link

    I just got my card today from Newegg. I only had about an hour to play before I left for work. I wanted to see the effects when I destroyed a building. I played through the first mission at 1600x1200 and my frames stayed a solid 30 with v-sync enabled, since I'm playing on a large LCD. It was pretty nice to see the MANY building parts flying in every direction with the smoke effect. All in all it looked pretty cool and realistic. I am currently gaming on an EVGA 7800 GTX while awaiting my 7900 GTX from EVGA's step-up program. I would like to take full advantage of my optimal resolution of 1920x1200, but my frames dropped into the 20s. I can only imagine my 7900 GTX will get me my full HD resolution, which I can't sustain on a single 7800 GTX anyway.

    I think anyone with a high-end system isn't going to have any hangups when it comes to frame rates. Is this for low end budget gaming systems? Probably not just yet, but neither is the price tag of $300+. So, right now it's a nice little extra piece of eye candy for me to enjoy and in the end that's what is important anyway.

    I hope this technology takes off and drives the number of supported titles up and if it gets incorporated into other components so be it (I really hope so too!). We'll just do another upgrade that we're all used to doing every few months anyway. Such it has always been at the cutting edge of PC'ing and such it will always be.
  • Mabus - Wednesday, May 17, 2006 - link

    Well, I wonder how long before ATI or Nvidia buys this company and integrates the logic into its GPU. Not only would this allow better performance, it would also allow optimisations with drivers, communications, and instruction exclusivity. It would make for a very nice smaller card, one less slot used on the mobo, cooler running, etc.

    Remember the separate math co-pro? Well, this tech will definitely go the same way; it is only a matter of time. And face it, the current benchies show that time is what we need to get that performance where us gamers want it.

    Mabus signing off.....
  • Trisped - Wednesday, May 17, 2006 - link

    Why didn't you compare the stats of the Asus card with those of the BFG card? Sound, power - if they are the same then say so; otherwise, what is the point of reviewing two different cards for the same thing?

    CoV is an unfair test - the Max Physics Debris Count should be the same so you can see if there is a performance boost at the same level. I know that if I give my GPU 1000 more objects it is going to go much slower, with or without the PhysX card. What I want to see is the game running with both 1500 and 422 "Debris" objects, each with and without the card.

    I would like to see the tests run with more than just 2 different processors - what about an Intel dual core, and what about different video card configurations? If you have a lower-end video card, should you expect less of an FPS impact, or more?

    How long before there is a PCIe card? Would there be a performance boost using PCIe? I would think that if the physics and GPU were on the same bus then there would be less latency as well as faster communication. I think the fact that the PhysX card wouldn’t have to wait for the processor to stop sending it info before it started transmitting would also be important to speed.

    Spelling/Grammar:
    “If spawning lots of effects on the PhysX card makes the system stutter, then it is defeating its won purpose.” Should be one, not won.
  • VooDooAddict - Wednesday, May 17, 2006 - link

    I'm waiting to pass judgment on this tech until I see reviews with high-end dual-core CPUs and a PhysX board connected via PCI Express.

    I just can't see how connecting something like this via PCI was ever going to work. This isn't mainly one-way communication like an audio board during gaming, where you send the information that needs to be output and the sound board outputs it. My understanding is that the PhysX board also needs to communicate the calculated information back to the CPU and GPU.

    Now would be a great time for Nvidia to step up with some new demos of their physics hardware acceleration via SLI. For an extra couple of bills... SLI right now is much more justifiable than this physics hardware. AND SLI is really one of those things that isn't a necessity. With SLI-based physics you could run at low physics and max frames for multiplayer twitch games, and just enable hardware physics for single player and MMOs. I'm hoping that will give you the best of both worlds. I'll be waiting for reviews like that.
  • DerekWilson - Wednesday, May 17, 2006 - link

    You are correct in that PhysX requires two way communication during gameplay in many cases. This hits on why I think the demos run smoother than the games out right now. In GRAW and CoV, data needs to be sent to the PhysX card, the PPU needs to do some work, and data needs to be sent back to the CPU.

    In the demos that use the PhysX hardware for everything, data is loaded into the PhysX on-board memory as the scene is being set up. Interacting with the world then requires much less data to be sent to the PPU per frame, since all the objects have already been created, set up, and stored locally. This should significantly cut down on traffic and cause less of a performance impact, while also enabling more complicated effects and physics.
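
    To put some very rough numbers on that theory (everything below is a made-up placeholder - not a measurement and not actual AGEIA SDK code), the difference in per-frame bus traffic between the two usage patterns might look something like this:

        #include <cstdio>

        int main() {
            // Hypothetical per-object sizes, purely for illustration.
            const int  bytes_per_object = 64;    // assumed spawn data per debris object
            const int  bytes_per_result = 32;    // assumed transform read back for rendering
            const long objects          = 1500;  // debris from one large effect

            // Effects-only pattern (GRAW/CoV style): spawn data goes out, results come back.
            long effects_traffic   = objects * (bytes_per_object + bytes_per_result);

            // Preloaded pattern (demo style): objects already live in PPU memory, only results return.
            long preloaded_traffic = objects * bytes_per_result;

            printf("Per-frame bus traffic, effects-only: %ld bytes\n", effects_traffic);
            printf("Per-frame bus traffic, preloaded:    %ld bytes\n", preloaded_traffic);
            return 0;
        }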
  • DerekWilson - Wednesday, May 17, 2006 - link

    just going to say again that the above is just a theory
  • poohbear - Wednesday, May 17, 2006 - link

    yea yea yea, so how do we overclock these things? :)
  • Tephlon - Wednesday, May 17, 2006 - link

    I know this doesn't have EXACT bearing on the article, but I thought some readers would be interested.
    I now own one of these cards (BFG PhysX PPU), and it doesn't seem as bad to me as the benchmarks make it out to be. I installed the card and played the game with the exact settings I had previously, and didn't FEEL a difference. I haven't done a straight benchmark, though, so the numbers might very well end up similar to AnandTech's findings.
    Although it has no real bearing on gameplay, I have noticed more with the addition of the PPU than anyone ever credits. Surrounding cars rock more accurately when nearby explosions erupt, and their suspension and doors react more accurately to bullets and explosions. Even lightpoles bend and react better than without the card. I even had one explosion that caused the 'cross bars' (not sure what to call them) on top of a lightpole to break loose and swing back and forth for a good 20 seconds until they broke free and hit the ground. It was really neat. I know it's not important stuff, but I can SEE a positive difference with the card, and I don't really FEEL the lost performance it's claimed to have. Often I do see the fraps meter on my G15 LCD drop dramatically for a VERY quick instant, but I don't feel this in actual gameplay, and I believe this also happened on my rig before I had the PPU. Maybe it's just that I'm not a 'BEST PERFORMANCE EVER' kind of guy. I've heard guys on forums complain about their FPS dropping from 56 to 51. So what. This isn't a run & gun game. My roommate plays co-op with me and fraps tells him he's running 20fps average. It doesn't look it to me, and he doesn't feel it either. An issue with fraps? Maybe. But again, my point is that at this point the loss of fps (for me) doesn't equal a loss of performance, so I'm pleased with the card. I'm also curious for others who own a card to throw up their comments as well.

    Again, my findings aren't exactly SCIENTIFIC, but they're real life. I enjoy it, and I REALLY hope they see enough support to get better implementation in future games. I think improved support and tweaked drivers will convince the masses. I hope it happens soon, or it might be a lost cause.

    For those interested. My rig.

    Asus A8N32-SLI
    Corsair XMS 2GB (2 x 1GB) TWINX2048-3500LLPRO
    BFG Geforce 7900GTOC x2
    BFG Ageia Physx PPU
    BFG 650watt PSU
    Creative Audigy 2 ZS
    NEC Black 16X DVD+R
    WD 36gb Raptor
    WD 320gb SATA
  • Tephlon - Wednesday, May 17, 2006 - link

    Oops. I guess I probably have a CPU to go with that rig. Tehe.

    AMD Athlon X2 4400+ Toledo



    Toodles.
  • poohbear - Wednesday, May 17, 2006 - link

    Are you sure those physics effects aren't already there without the PhysX card? Havok states GRAW already uses the Havok engine to begin with; PhysX only adds extra particles for grenades and stuff.
  • Tephlon - Wednesday, May 17, 2006 - link

    Yeah, no. I know that Havok is doing the generic physics. Light poles DO normally bend without the card. Cars do shake and explode. Cans can be kicked. All that stuff is normally there.
    I'm just saying the card seems to accentuate all of it. Not just more particles, but better explosions. Better ragdoll. Pots break a bit differently, etc.
    It was definitely there before, but I think it all looks better with the PhysX card. My roommate said he noticed the difference as well. I let him borrow it for a while while I was at work.
    Again, I know I have no proof, at least not to show you atm... but to me it all seems better than before.
    If I get a chance I'll fraps a run through a level once with and once without the card, and throw the links up here. I personally have seen several sites' comparison vids, but I don't feel they show everything very well.
    Again, I'd heard it only adds particles to explosions, like you did, but I swear I can see the difference in everything.
    Anyone ever heard Ageia say EXACTLY what difference there is for GRAW with their card?
  • DerekWilson - Wednesday, May 17, 2006 - link

    Perception of an experience can be greatly affected by expectations. None of us is ever able to be 100% objective in all cases.

    That being said, in your first post you mention not "feeling" the performance impact. If you take a look at our first article on PhysX (and the comments), you will notice that I reported the same thing. There aren't really huge slowdowns in GRAW, only one or two frames that suffer. If the errant frame(s) took a quarter of a second to render, we would definitely notice it. But while an AVERAGE of 15 frames per second can look choppy, having a frame or two take 0.066 seconds to render is not going to significantly impact the experience.

    Minimum framerates are important in analysing performance, but they are much more difficult to properly understand than averages. We want to see high minimum framerates because we see that as meaning less slow-down or stutter. But generally (in GPU-limited situations) minimum framerates aren't outliers in the data set -- they mark a low point where the framerate dips down for a good handful of frames. In the case of GRAW with PhysX, the minimum is really not contiguous with the performance of the rest of the frames.
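
    A quick toy calculation shows the difference (made-up frame counts, just to illustrate why an isolated slow frame barely registers while a sustained 15 fps average looks choppy):

        #include <cstdio>

        int main() {
            // One second of gameplay: two frames stall at ~66 ms (an instantaneous
            // "15 fps") while the rest render at 60 fps. Numbers are illustrative only.
            const int    total_frames = 60;
            const int    slow_frames  = 2;
            const double fast_time    = 1.0 / 60.0;   // ~16.7 ms per frame
            const double slow_time    = 1.0 / 15.0;   // ~66 ms per frame, the reported minimum

            double elapsed = (total_frames - slow_frames) * fast_time + slow_frames * slow_time;

            printf("Average fps: %.1f\n", total_frames / elapsed);   // still ~55 fps
            printf("Minimum fps: %.1f\n", 1.0 / slow_time);          // 15 fps on the chart
            printf("Extra time added by the slow frames: %.0f ms\n",
                   slow_frames * (slow_time - fast_time) * 1000.0);  // ~100 ms over the whole second
            return 0;
        }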

    CoV is another story. The framerate drops several times and we see stuttering. It's definitely something easily "felt" during gameplay. But CoV Issue 7 is still in beta, so we might see some performance improvements when the code goes live.
  • Tephlon - Wednesday, May 17, 2006 - link

    Derek, I totally agree. I wasn't arguing about anything technical in the article, or about the relationship minimum fps has to the 'feel' or 'playability'. It just doesn't seem like most readers (here and elsewhere) understand it. I also won't hide the fact that I DO WANT this tech to succeed, partly because I heard them speak at QuakeCon and I liked what I heard/saw, and partly because I've dropped $300 in good faith that my early adoption will help the cause and push us forward in the area of true physics in gaming. And even though my perception is undoubtedly affected by my expectations, it's not entirely misled either. Even with my bias I can be realistic and objective. If this thing did nothing for visuals/gameplay and killed my experience with crappy performance, I'd of course have a different opinion on the matter.

    I was simply saying that readers seem to lose sight of the big picture. Yeah, it's in the rough stages. Yeah, it only works with a few games. I'm not here to pitch slogans and rants to make you buy it; I just wanted people to understand that the device 'as it is now' isn't without its charm. It seems the only defense ever brought up for the card is that the future could be bright. It DOES have some value now, if you're objective about it and not out to flame it immediately. I like what it does for my game, even if it's not revolutionary. I just hope there are enough people objective enough to give this company/card a chance to get off the ground. I DO think it's better for the industry if the idea of a separate physics card can get off the ground.
    I dunno, maybe I see too much of 3dfx in them, and it gets me nostalgic.
    Again, Derek, I wasn't knocking the report at all, and I hope it wasn't taken that way. I think it said just what it was supposed to, or even could, say. I was more trying to give readers a balanced look at the card from the usage side, since straight numbers send people into frenzies.

    Did all that get to what I was trying to convey? I dunno, I confuse myself sometimes. I wasn't meant to be an author of ANYTHING. In any case, good luck to you.
    Good luck to all.
  • DerekWilson - Wednesday, May 17, 2006 - link

    lol, I certainly didn't take it as a negative commentary on anything I said. I was trying to say that I appreciate what you were saying. :-)

    At a basic level, I very much agree with your perspective. The situation does resemble the 3dfx era with 3d graphics. Hardware physics is a good idea, and it would be cool if it ends up working out.

    But is this part the right part to get behind to push the industry in that direction?

    AnandTech's first and foremost responsibility is to the consumer, not the industry. If the AGEIA PhysX card is really capable of adding significant value to games, then its success is beneficial to the consumer. But if the AGEIA PhysX card falls short, we don't want to see anyone jump on a bandwagon that is headed over a cliff.

    AGEIA has the engine and developer support to have a good chance at success. If we can verify the hardware's capabilities, then we can have confidence in recommending the PhysX card to people who want to push the agenda of physics hardware. There is a large group of people out there who feel the same way you do about hardware and will buy parts in order to benefit a company or industry segment. If you've got the ability and inclination, that's cool.

    Honestly, most people who go out and spend $300 on a card right now will need to find value in something beyond what has been added in GRAW, CoV, and the near-term games. If we downplayed the impact of the added effects in GRAW and CoV, it's because the added effects are nowhere near worth the $300 they cost. It is certainly a valid perspective to look towards the future. You have the ability to enjoy the current benefits of the hardware, and you'll already have the part when future games that make more compelling use of the technology come out.

    We just want to make sure that there is a future with PhysX before we start jumping up and down screaming its praises.

    So ... I'm not trying to say that anything is wrong with what you are saying :-)

    I'm just saying that AnandTech has a heavy responsibility to its readers to be more cautious when approaching new markets like this. Even if we would like to see it work out.
  • Tephlon - Thursday, May 18, 2006 - link

    true. I do get your point.

    And again, you're right. With a more balanced perspective on the matter, I sure can't see you suggesting a 300 dollar piece of hardware on a hunch either. I do respect how your articles are based on what's best for the little guy. I think I'd honestly have to say that if you were to suggest this product now AS IF it were as good as sliced bread... I would be unhappy with my purchase based on your excitement for it.
    teheh. Yeah, you made the right call with your article.
    Touche', Derek. TOUCHE'



    Tehe. I guess not everyone can gamble the $300, and that's understandable. :-(

    Like I said... here's hopin'. :-D
  • RogueSpear - Wednesday, May 17, 2006 - link

    I'm not an expert on CPUs, but all of this has me wondering - isn't physics code exactly the kind of mathematical code that MMX and/or SSE and their follow-ons were supposed to accelerate? I'm sure physics was never mentioned way back then, but I do remember things like encryption/decryption and media encoding/decoding being targets for those technologies. Are game developers currently taking advantage of them? I know that to a certain point there is parity between AMD and Intel CPUs as far as compatibility with those instruction sets goes.
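
    For context, this is roughly the sort of loop I mean - a toy sketch of SIMD-friendly physics math using SSE intrinsics (made-up numbers, nothing from any actual game engine):

        #include <xmmintrin.h>  // SSE intrinsics
        #include <cstdio>

        int main() {
            // Simple Euler position update (pos += vel * dt), four floats at a time with SSE.
            float pos[8] = {0, 1, 2, 3, 4, 5, 6, 7};
            float vel[8] = {1, 1, 1, 1, 2, 2, 2, 2};
            const __m128 dt = _mm_set1_ps(0.016f);  // roughly one 60 fps frame

            for (int i = 0; i < 8; i += 4) {
                __m128 p = _mm_loadu_ps(pos + i);
                __m128 v = _mm_loadu_ps(vel + i);
                p = _mm_add_ps(p, _mm_mul_ps(v, dt));   // pos += vel * dt
                _mm_storeu_ps(pos + i, p);
            }

            for (int i = 0; i < 8; ++i)
                printf("pos[%d] = %.3f\n", i, pos[i]);
            return 0;
        }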
  • apesoccer - Wednesday, May 17, 2006 - link

    Seems like this was a pretty limited review... Were you guys working with a timetable? Like 4 hours to use this card or something?

    I think I would have tried more than just single-core CPUs, since we're heading mostly towards multi-core CPUs. I also would have run tests at the same level (if possible; it feels like we're intentionally being kept in the dark here) to compare software and hardware with the same number of effects, at different levels and resolutions. At low res you're maxing the CPU out, right? Well, then if the PPU uses 15% of the CPU but outputs 30% more effects, you're being limited by the CPU even more. You should see greater returns the higher the resolution you go, since you're maxing your GPUs out more (rather than the CPUs) the higher the res. All of this is moot if the overhead CPU usage from the PPU can be run on a second CPU core, since that's where the industry is headed anyway. Making software/hardware runs on a dual core should give us a better idea of whether or not this card is worth it.
  • peternelson - Wednesday, May 17, 2006 - link

    To the people who say it's a decelerator: it is a little slower, but it is NOT doing the same amount of work. The visual feast is better in the hardware-accelerated game than without the card. We need a way to quantify that extra, because "fps" alone ignores it.

    Second, AnandTech, PLEASE get yourselves a PCI bus analyser; it need not be expensive. I want to know the % utilisation on the PCI bus. At 32-bit/33MHz the potential max is 133 MByte/sec.
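
    As a rough sanity check on what that headroom means (the per-frame payload below is a pure guess on my part, not a measured figure):

        #include <cstdio>

        int main() {
            // Theoretical peak of plain 32-bit/33 MHz PCI, and how much of it a
            // hypothetical per-frame physics payload would consume at 60 fps.
            const double bus_bytes_per_sec = 4.0 * 33.33e6;     // 32 bits * 33.33 MHz ~= 133 MB/s peak
            const double payload_per_frame = 512.0 * 1024.0;    // assumed 512 KB to/from the PPU per frame
            const double frames_per_sec    = 60.0;

            double utilisation = payload_per_frame * frames_per_sec / bus_bytes_per_sec * 100.0;

            printf("PCI peak:        %.0f MB/s\n", bus_bytes_per_sec / 1e6);
            printf("Bus utilisation: %.0f%% (before sharing with any other PCI devices)\n", utilisation);
            return 0;
        }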

    How much of that is being used to talk to and from the PhysX card, and is it a bottleneck that would be solved by moving to PCI Express? Also, in your demo setups, considering what peripherals you are using, are you hogging some of the PCI bandwidth with (say) a PCI-based sound card, etc., which would be unfair to the PhysX card?

    ALSO, one of the main purposes of THIS review, I would say, is to COMPARE the ASUS card with the BFG card. You don't seem to do that. So assuming I want a PhysX card, I still don't know which of the two to buy. Please compare/contrast Asus vs BFG.
  • DerekWilson - Wednesday, May 17, 2006 - link

    Honestly, the ASUS and BFG cards perform identically, pull about the same amount of power, and produce similar levels of noise.

    If you are trying to decide, buy the cheaper one. There aren't enough differences to make one better than the other (unless blue LEDs behind the fan really do it for you).

    We didn't do a more direct comparison because we have an engineering sample ASUS part, while our BFG is full retail. We generally don't like to make direct comparisons with preproduction hardware in anything other than stock performance. Heat, noise, power, pcb layout, and custom drivers can all change dramatically before a part hits retail.

    We will look into the pci bus utilization.
  • peternelson - Wednesday, May 17, 2006 - link


    Thanks. So I will have to choose based on features, like the nice triangle box on the BFG ;-)

    When gaming on older machines where the sound, network, and possibly other things are all on the same PCI bus, either the PhysX card or the other devices could suffer from bus contention.

    I hope you can either ask or do some analysing to watch the amount of traffic there is.
  • apesoccer - Wednesday, May 17, 2006 - link

    That's a good question as well...especially for those of us using other additional pci cards...
  • mbhame - Wednesday, May 17, 2006 - link

    You guys are giving Ageia WAY too much slack. :(
    Call a spade a spade and save face.
  • apesoccer - Wednesday, May 17, 2006 - link

    There's no use in throwing in the towel before we get in the ring...
  • mbhame - Wednesday, May 17, 2006 - link

    Throwing in the towel...? How do you infer that from what I said?

    I said "Call a spade a spade". Whether Anandtech.com chooses an easy-out path of "Currently the PPU sucks..." (not in so many words) or not, there is tremendous grace extended to Ageia around here, and frankly, it stinks.

    Obviously there is a fine line between journalism with respect (which 99% of other websites are ignorant of) and brown-nosing, or needing to get a pair. All I'm saying is that it's not very clear where this site stands amongst these possibilities.
  • Trisped - Wednesday, May 17, 2006 - link

    I think everyone is being cautiously optimistic that the tech will improve. I wasn't on the 3D accelerator scene when that first happened, but from what I hear those cards were expensive and were actually worse than not having them. But now they are required for every game and for Windows Vista.

    We want to wait and see if they can work out the bugs, give us better comparisons, and compare it to the GPU-only systems that are supposed to be coming. Once we have all the facts we can pass a final verdict; until then everything is guesswork.
  • apesoccer - Wednesday, May 17, 2006 - link

    There's a lot of grace given to it everywhere... I have yet to see an article bash them. There has been a lot of interest in this product, and frankly, the general consensus is that we want to see it succeed. That aside, I don't think they can make a precise statement saying "this product is going to suck balls" or "this is going to be the next sliced bread".

    My problem with it is the lack of depth to the findings (and your statement "Call a spade a spade"...). I wish they had tried more kinds of CPUs with different kinds of GPUs, at several resolutions, with hardware and software at both the same settings and different ones. Without those tests, you can't really say you've tested the product.

    Basically, because they haven't done enough work with it yet [imo] (due to time constraints or whatever), we can't make any real statements about this product. Other than: at the one hardware setting they ran it at, compared to a different software setting ( >< ), the software setting scored better in fps. Which tells us what? The PPU uses overhead CPU cycles when doing at least 3x the amount of work the CPU would be doing at the lower software settings. So let's see some different settings (and some hardware/software runs at the same settings), so we can get a better idea of the big picture.
  • mbhame - Wednesday, May 17, 2006 - link

    I don't agree with your assessment on the general consensus. My circles vehemently want it to fail as it's an additional cost to our PCs, an additional heat source, an additional power requirement... and for what?

    I think you're kidding yourself if you think some other CPU:GPU combination would yield appreciably-different results.
  • DerekWilson - Wednesday, May 17, 2006 - link

    We're working very hard to find ways to test the hardware more extensively. You've hit the nail on the head with why people haven't been tearing this part up. We are certainly suspicious of its capabilities at this point, but we just don't have the facts to draw a hard line on its real value (except in the context of "right now").
  • mbhame - Wednesday, May 17, 2006 - link

    Well then make stronger statements in the present tense. Just because someone makes a product designed to do "X", it doesn't mean they'll succeed in doing so. You guys come across as if it's a given that the PPU *will be* a success, and in doing so you generate a level of expectation of success. As it stands now this is a total flop - treat it as such. Then IF and when they DO make it worthwhile for some appreciable reason, we can marvel at their about-face collectively.

    It's not cynicism, it's reality.
  • AnnonymousCoward - Friday, May 19, 2006 - link

    Why do you need stronger criticism? You've been able to determine what's going on, and that's because the performance charts speak for themselves. I'd rather read what's currently on the conclusion page, instead of the obvious "This product's performance sucks with current games."
  • AndreasM - Wednesday, May 17, 2006 - link

    In some cases the PPU does increase performance (http://www.xtremesystems.org/forums/showthread.php...). The next version of Ageia's SDK (ETA July) is supposed to support all physics effects in software; at the moment liquid and cloth effects are hardware-only, which is why some games like CellFactor can't really run properly in software mode (yet). Hopefully Immersion releases a new version of their demo with official software support after Ageia releases their 2.4 SDK.
  • UberL33tJarad - Wednesday, May 17, 2006 - link

    How come there's never a direct comparison between CPU and PPU using the same physics? Making the PPU do 3x the work and not losing 3x the performance doesn't seem so bad. It puts the card in a bad light, because 90% of the people who read this article will skip the text and go straight to the graphs. I know it can't be done in GRAW without different sets of physics (Havok for everything, then Ageia for the explosions), but why not use the same Max Physics Debris Count?
  • Genx87 - Wednesday, May 17, 2006 - link

    I still contend it is a GPU limitation from having to render the higher number of objects.

    One way to test this is to set up identical systems, one with SLI and the other with a single GPU.

    1. Test the difference between the two systems without physics applied so we get an idea of how much the game scales.
    2. Then test using identical setups with hardware physics and note whether we see any difference. My theory is that the number of objects that need to be rendered is killing the GPUs.

    There is definitely a bottleneck, and it would be great if an article really tried to get to the bottom of it. Is it CPU, PPU, or GPU? It doesn't appear that the CPU is "that" big an issue, as the difference between the FX-57 and Opteron 144 isn't that big.

  • UberL33tJarad - Wednesday, May 17, 2006 - link

    Well, that's why I would be very interested if some benchmarks could come out of this demo: http://pp.kpnet.fi/andreasm/physx/. The low res and lack of effects and textures make it a great example for testing CPU-vs-PPU strain. One guy said he went from <5fps to 20fps (http://www.xtremesystems.org/forums/showpost.php?p...), which is phenomenal.

    You can run the test in software or hardware mode, and it has 3k objects interacting with each other.

    Also, if you want to REALLY strain a system, try this demo: http://www.novodex.com/rocket/NovodexRocket_V1_1.e... Some guy on XS tried a 3GHz Conroe and got <3fps.
  • DigitalFreak - Wednesday, May 17, 2006 - link

    Good idea.
  • maevinj - Wednesday, May 17, 2006 - link

    "then it is defeating its won purpose"
    should be one

  • JarredWalton - Wednesday, May 17, 2006 - link

    Actually, I think it was supposed to be "own", but I reworded it anyway. Thanks.
  • Nighteye2 - Wednesday, May 17, 2006 - link

    2 things:

    I'd like to see a comparison done with an equal level of physics, even if it's the low level. Such a comparison could be informative about the bottlenecks. In CoV you can set the number of particles - do tests at 200 and 400 without the PhysX card, and tests at 400, 800, and 1500 with the PhysX card. Show how the physics scale with and without the card.

    Secondly, do those slowdowns also occur in Cellfactor and UT2007 when objects are created? It seems to me like the slowdown is caused by suddenly having to route part of the data over the PPU, instead of using the PPU for object locations all the time.
  • DerekWilson - Wednesday, May 17, 2006 - link

    The real issue here is that the type of debris is different. Lowering the debris count on the PhysX card still gives me things like packing peanuts, while software mode never does.

    It is still an apples to oranges comparison. But I will play around with this.
  • darkdemyze - Wednesday, May 17, 2006 - link

    Seems there is a lot of "theory" and "ideal advantages" surrounding this card.

    Just as the chicken-and-egg comparison suggests, it's going to be a tough battle for AGEIA to get this new product going with a lack of support from developers. I seriously doubt many people, even the ones who have the money, will want a product they don't get anything out of besides a few extra boxes flying through the air or a couple of extra grenade shards coming out of an explosion, when there is such a decrement in performance.

    At any rate, seems like just one more hardware component to buy for gamers. Meh.
  • phusg - Wednesday, May 17, 2006 - link

    > Performance issues must not exist, as stuttering framerates have nothing to do with why people spend thousands of dollars on a gaming rig.

    What does this sentence mean? No, really. It seems to try to say more than just, "stuttering framerates on a multi-thousand dollar rig is ridiculous", or is that it?
  • nullpointerus - Wednesday, May 17, 2006 - link

    I believe he means that the card can't survive in the market if it dramatically lowers framerates on even high end rigs.
  • DerekWilson - Wednesday, May 17, 2006 - link

    check plus ... sorry if my wording was a little cumbersome.
  • QChronoD - Wednesday, May 17, 2006 - link

    It seems to me like you guys forgot to set a baseline for the system with the PPU card installed. From the picture you posted in the CoV test, the number of physics objects looks like it can be adjusted when AGEIA support is enabled. You should have run a benchmark with the card installed but with the level of physics kept the same. That would eliminate the loading on the GPU as a variable; the GPU load would remain nearly the same, with the only difference being the CPU and PPU taking time sending info back and forth.
  • Brunnis - Wednesday, May 17, 2006 - link

    I bet a game like GRAW actually would run faster if the same physics effects were run directly on the CPU instead of this "decelerator". You could add a lot of physics before the game would start running nearly as bad as with the PhysX card. What a great product...
  • DigitalFreak - Wednesday, May 17, 2006 - link

    I'm wondering the same thing.

    "We still need hard and fast ways to properly compare the same physics algorithm running on a CPU, a GPU, and a PPU -- or at the very least, on a (dual/multi-core) CPU and PPU."

    Maybe it's a requirement that the developers have to intentionally limit (via the sliders, etc.) how many "objects" can be generated without the PPU in order to keep people from finding out that a dual core CPU could provide the same effects more efficiently than their PPU.
  • nullpointerus - Wednesday, May 17, 2006 - link

    Why would ASUS or BFG want to get mixed up in a performance scam?
  • DerekWilson - Wednesday, May 17, 2006 - link

    Or EPIC with UnrealEngine 3?

    Makes you wonder what we aren't seeing here doesn't it?
  • Visual - Wednesday, May 17, 2006 - link

    So what you're showing in all the graphs is lower performance with the hardware than without it. WTF?
    Yes, I understand that testing without the hardware is only faster because it's running lower detail, but that's not clearly visible from a few glances over the article... and you do know how important the first impression really is.

    Now I just gotta ask: why can't you test both software and hardware with the same level of detail? That's what a real benchmark should show, at least. Can't you request some complete software emulation from AGEIA that can fool the game into thinking the card is present, and turn on all the extra effects? If not from AGEIA, maybe from ATI or nVidia, who seem to have worked on such emulations that even use their graphics cards. In the worst case, if you can't get software mode to have all the same effects, why not at least turn off those effects when testing the hardware implementation? In City of Villains, for example, why is the software test run with a lower "Max Physics Debris Count"? (Though I assume there are other effects that get automatically enabled with the hardware present and aren't configurable.)

    I just don't get the point of this article... if you're not able to compare apples to apples yet, then don't even bother with an article.
  • Griswold - Wednesday, May 17, 2006 - link

    I think they clearly stated in the first article that GRAW, for example, doesn't allow higher debris settings in software mode.

    But even if it did, a $300 part that is supposed to be lightning fast and whatnot should be at least as fast as ordinary software calculations - even at a higher debris count.

    I really don't care much about apples and oranges here. The message seems clear: right now it isn't performing up to snuff, for whatever reason.
  • segagenesis - Wednesday, May 17, 2006 - link

    I feel so tempted to bring up the old cliche "The message is clear..." when you word it like that :)

    Really, why is there not more "WTF" here? A better analogy to what you describe is the old "hardware decelerators" like, say, the S3 ViRGE. And for $300? Damn, next thing we know they will be sub-licensing Patty-On-Patty technology from Burger King with a dual-core physics processor for only $600! *groan*

    They have the right idea here but this is some of the poorest execution possible in convincing people you need this product.
  • Magnadoodle - Wednesday, May 17, 2006 - link

    Calling this a physics decelerator seems just perfect. I wish anandtech would use some biting humour now and then. But that would mean degraded relations with Asus and BFG.

    Oh well, let's just get nostalgic about the days of unconstrained journalism and reread those old 6% PC Gamer reviews.
  • abhaxus - Friday, May 19, 2006 - link

    When I got my original voodoo 1 card, the first thing I did was plug it in and run a few timedemos in GLquake... surprise surprise, it was actually a few FPS slower than I was running in software mode. Of course, I was running software mode at 320x240 and GL at 640x480 and the game looked incredible.

    I haven't seen a PhysX card in person but the trailers for cellfactor look very impressive. With PhysX being taken advantage of throughout the design and coding process I can't wait to see what the final results are for new games... of course, new drivers and a PCIe version will help too.

    That said... I really think that this card will eventually turn out to be only for people that don't have a dual core CPU. Seems like most everything could be done by properly multithreading the physics calculations.
  • Nighteye2 - Wednesday, May 17, 2006 - link

    It's perfectly possible to be critical while remaining polite. Biting humour is unnecessarily degrading and does not add any value. Even 6% ratings can be given in perfectly polite wording.
  • DerekWilson - Wednesday, May 17, 2006 - link

    We certainly aren't pulling punches, and we wouldn't do anything to preserve a relationship with any company. If we make someone angry, we've still got plenty of ways to get a hold of their product.

    I hope we were able to make it clear that CoV giving similar results to GRAW gave us pause about the value of PhysX when applied to games that just stick in some effects here and there. We also (I hope clearly) stressed that there isn't enough value in the product for consumers to justify a purchase at this time.

    But we weren't as hard on AGEIA as we could have been, for a couple of reasons. First, CellFactor and HangarofDoom are pretty interesting demos. Their performance, and the possibilities presented by games like them, indicate that PhysX could be more useful in the future (especially with its integration into UE3 and other game engines). Second, without more tools or games we just can't determine the actual potential of this hardware. Sure, right now developers aren't making practical use of the technology and it isn't worth its price tag. But it is very premature for us to stamp a "decelerator" label on it and close the case.

    Maybe we will end up calling this thing a lemon, but we just need more hard data before we will do so.
  • Magnadoodle - Wednesday, May 17, 2006 - link

    Yes, I understand your point of view, and I don't think you're pulling any punches or being biased. In fact, a biting review would be more biased than anything. I was just remarking that this would have made a perfect occasion to have a bit of fun with AGEIA and drag them through the dregs. I nostalgically recalled the quite biting and humorous style PC Gamer put into their 6% reviews. PC Gamer never was a pantheon of game reviewing, but they didn't have to be nice to anybody (actually, to "nobodies", because they had to be nice to big corporations). My point was more about the lack of wit and style in web publications these days than about AnandTech being biased. Not that AnandTech has bad writers, just that it's more scientific than sarcastic.

    Anyway, good review Mr. Wilson and keep up the good work.
  • Seer - Wednesday, May 17, 2006 - link

    I'm also wondering about the claim that the driver update increased framerates. In all but two of the tests, the average fps was either the same or a decrease. The largest increase was 1 fps, totally within the margin of error (I'm talking about the GRAW tests). So, um, yeah, no increase there.
