I have read all your posts with great interest and feel that some very good points are being made, so here's my 2 cents' worth ;-)
I believe the 'IDEA' of having a dedicated PPU in your increasingly expensive monster rig is highly appealing, even intoxicating, and I believe this 'IDEA', coupled with some clever marketing, will ensure a good number of highly overpriced, or at least expensive, sales of this mystical technology in its current (inefficient) form.
For some, the fact that it's expensive and also holds such high promises will ensure its place as a 'must have' component for the legions of early adopters. The brilliant idea of launching them through Alienware, Falcon Northwest and the top-of-the-line Dell XPS600 systems was a stroke of marketing genius, as this adds to the allure of owning one when they finally launch to the retail market... If it's good enough for a system most of us can never afford but covet nonetheless, it's damn well good enough for my 'monster RIG'. This arrangement will all but guarantee sales of the first wave of cards on the market. I have noticed that some UK online retailers have already started taking pre-launch orders for the £218 OEM 128MB version; I just have to wonder how many of these pre-orders have actually been sold?
The concept of a dedicated PPU is quite simply phenomenal. We spend plenty of money upgrading our GPUs and CPUs, and quite recently Creative brought us the first true APU (the X-Fi series), so it makes sense for there to be a dedicated PPU and perhaps even an AIPU to follow.
The question is, will these products actually benefit us to the value of their cost?
I would say that a GPU, or in fact up to 4 GPUs running over PCIe x32 (2x PCIe x16 channels), becomes increasingly less value for money the more GPUs are added to the equation; i.e. a 7900GTX 512MB at £440 is great bang for the buck compared to Quad SLI 7900GTX 512MB at over £1000, since the framerates in the Quad machine are not 4x those of the single GPU. Perhaps this is where GPUs could truly be considered worthy of nVidia's or ATI's SLI physics load-balancing concept. SLI GPUs are not working flat out 100% of the time... Due to the extremely high bandwidth of dual PCIe x16 ports there should be a reasonable amount of bandwidth to spare for physics calculations, perhaps more if dual PCIe x32 (or even quad x16) motherboards inevitably turn up. I am not saying that GPUs are more efficient than a DEDICATED, purpose-designed PPU, just that if ATI and nVidia decided the market showed enough potential, they could simply design in or add PPU functionality to their GPU cores or graphics cards. This would allow them to tap into the extra bandwidth PCIe x16 affords.
The Ageia PhysX PPU in its current form runs over the PCI bus, a comparatively narrow-bandwidth bus, and MUST communicate with the GPU in order for it to render the extra particles and objects in any scene. This in my mind would create a bottleneck, as it would only be able to communicate at the bandwidth and speed afforded by the narrower, slower PCI bus. The slowest path governs the speed of even the fastest... This would mean that even a very fast and efficient dedicated PPU would be severely limited by the bus it was running over. This phenomenon shows up in all the real-world benchmarks I have seen of the Ageia PhysX PPU to date: the framerates actually DROP when the PPU is enabled.
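To put rough numbers on that bottleneck, here's a quick back-of-the-envelope sketch. The object count and per-object payload are purely my own assumptions for illustration, not Ageia's figures:

```cpp
#include <cstdio>

// Rough sketch: how quickly per-frame physics results could eat up classic
// 32-bit/33MHz PCI (~133 MB/s theoretical peak, shared with other devices).
// The object count and payload below are illustrative assumptions only.
int main() {
    const double pciPeak     = 133.0e6; // bytes/sec, theoretical PCI peak
    const int    objects     = 30000;   // debris/particles updated per frame (assumed)
    const int    bytesPerObj = 64;      // position + orientation + velocity (assumed)
    const int    fps         = 60;

    const double needed = double(objects) * bytesPerObj * fps;
    std::printf("Result traffic: %.1f MB/s of %.1f MB/s PCI peak (%.0f%%)\n",
                needed / 1e6, pciPeak / 1e6, 100.0 * needed / pciPeak);
    // ~115 MB/s here, i.e. most of the bus before protocol overhead, command
    // traffic, or any other PCI device gets a look in.
    return 0;
}
```

Even with fairly generous assumptions, just shipping the results back for rendering eats most of the PCI budget, which fits the behaviour seen in the benchmarks.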
To counter this, I believe Ageia, through ASUS, BFG and any other manufacturing partner they sign up, will have to release products designed for the PCIe bus. I believe Ageia knows this, as the early manufacturing samples could be installed in a PCI slot as well as a PCIe slot (although not at the same time ;-) ). I believe the PCI bus was chosen for launch due to the very high installed base of PCI motherboards; practically every standard PC I know of that would want a PPU has one. I believe this is a mistake, as the users most likely to purchase this part in the 'premium price' period would likely have PCIe in their system, or at least would be willing to shell out an extra £50-£140 for the privilege. Although I could be completely wrong in this, as it may allow for some 'double selling': when they release the new and improved PCIe version, the early adopters will be forced to buy into it again at a premium price.
This leads me neatly onto the price. I understand that Ageia, quite rightly, are handing out the PhysX SDK freely; this is to allow maximum compatibility and support in the shortest period of time. It does however mean that the end user who purchases the card in the beginning will have to pay the full price for it... £218 for the 128MB OEM version. As time goes by and more units are sold, the installed user base of the PPU will grow and the balance will shift: Ageia will be able to start charging developers to use their 'must have' hardware physics support in their games/software, and this will subsidise the cost of the card to the end user, making it more affordable to the masses and therefore making it much more of a 'must have' for developers. It will take several generations of the PPU before we feel the full impact of this, I believe.
If ATI and nVidia are smart, they can capitalise on their large existing installed base and properly market the idea of hardware physics for free with their SLI physics; they may be able to throw a spanner in the works for Ageia while it attempts to attain market share. This may benefit the consumer, although it may also knock Ageia out of the running, depending on how effective ATI's and nVidia's driver-based solutions first appear. It could also prompt a swift buyout by either ATI or nVidia, like nVidia did with 3dfx.
Using the CPU for physics, even a multicore CPU, is in my opinion not the way forward. The CPU is not designed for physics calculations, and from what I hear they are not (comparatively) very efficient at performing them. A dedicated solution will always be better in the long run. This will free up the CPU to run the OS as well as AI calculations, antivirus, firewall, background applications and generally keeping the entire system secure and stable. Multicore will be a blessing for PCs and consoles, but not for such a specific and difficult (for a CPU) task.
"Deep breath" ;-)
So there you have it: my thoughts on the PPU situation as it stands now and into the future. Right now I will not be buying into the dream, but simply keeping the dream alive by closely watching how it develops until such a time as I believe the 'right time' comes. £218 for an unproven, generally unsupported, and possibly seriously flawed incarnation of the PPU dream is not, in my opinion, the right time. Yet ;-)
The CellFactor demo is available on Ageia's website now. If you try to play it without a PhysX card you get an error message. However, the demo includes a game editor. If you open the editor, you can open up the demo level and it allows you to play the game without the PhysX card. You can't play with bots in the editor, nor can you see cloth or fluid dynamics. Everything else is present; otherwise it's like playing the game normally. I was able to play the game inside the editor with no performance problems. I have a dual-core AMD with a single X1900XT and 2GB of RAM. CPU usage does go up to 80% when playing the game and blowing things up and whatnot, but it's a smooth experience with no noticeable slowdown. Graphics-wise, everything is present inside the editor, including dynamic lighting and normal mapping.
If Ageia wanted to show people how well the PPU works and how much it's needed in a game like CellFactor, they should have allowed you to play the game normally without a PhysX card. Since they didn't do that, it makes me think it's not actually needed.
People don't want another separate card mainly because of the slot problem. But if the card used PCIe x1 (a slot I bet most people with a new motherboard have sitting empty), it wouldn't be so unfavourable.
Especially since, as mentioned, the heat and noise of the card are low.
If Ageia really wants to spur additional support for their PPU card, they should develop a lower-level API than Novodex, something similar to what OpenGL or DirectX are for graphics cards. This would encourage other middleware developers to support the PPU for the "heavy lifting". Then Havok and other physics middleware developers would be working with Ageia and not against them. The current situation is as if Nvidia were to provide a proprietary game engine as the only way to access the power of the GPU; then only the game companies that agreed to support this engine would get graphics acceleration.
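Something along these lines, perhaps. This is a purely hypothetical sketch of what a thin, vendor-neutral "physics acceleration layer" could look like; every name in it is invented for illustration and is not Ageia's (or anyone's) real API:

```cpp
// Hypothetical low-level physics-acceleration interface (all names invented).
// Middleware such as Havok or Novodex would sit on top of something like this,
// the way game engines sit on OpenGL/Direct3D rather than directly on GPU hardware.
struct plBodyDesc { float mass; float position[3]; float velocity[3]; };

class IPhysicsDevice {
public:
    virtual ~IPhysicsDevice() = default;
    virtual int  createRigidBody(const plBodyDesc& desc) = 0;        // returns a handle
    virtual void applyImpulse(int body, const float impulse[3]) = 0;
    virtual void simulate(float dtSeconds) = 0;                      // broad phase, narrow phase, solve
    virtual void readBackTransforms(float* out, int maxBodies) = 0;  // results for the renderer
};

// A vendor driver would implement IPhysicsDevice for its PPU, and a reference
// software implementation would cover machines without one.
```

With a split like that, Havok, Novodex and anything else could all target the same hardware without having to licence each other's engines.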
Additionally, AGEIA is working with Havok to try to get them to include support for the PhysX hardware in their product.
When we spoke with AGEIA about it, we learned that it's more important to them to get software support than to create an SDK business. They want PhysX acceleration in Havok very much, but Havok is doing pretty well on its own at the moment and is being a little more cautious about getting involved.
As is generally the case, small companies don't like to have their success tied up in the success of other small companies. Havok needs to decide if the ROI is worth it for them at this point, while there wouldn't really be a downside for AGEIA to let them include support.
A software SDK (Software Development Kit) is a set of libraries that people can use. So rather than write their own physics routines, they can use PhysX. Just like they can use Havok. The way we understand it, if done properly the PhysX libraries can run on either the CPU or the PPU, though the CPU should be slower. Right now, we don't have any 100% identical comparison other than AGEIA's test app, which doesn't really appear complex enough to be truly indicative of maximum performance potential.
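For what it's worth, the SDK does expose that CPU/PPU choice fairly directly. Here's a minimal sketch going from memory of the NovodeX/PhysX 2.x headers; the exact identifiers may be off, so treat it as illustrative rather than copy-paste ready:

```cpp
#include <NxPhysics.h>   // PhysX 2.x SDK umbrella header, as I recall it

// Create the SDK, then ask for a hardware scene with a software fallback.
NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

NxSceneDesc sceneDesc;
sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
sceneDesc.simType = NX_SIMULATION_HW;        // run on the PPU if one is present

NxScene* scene = sdk->createScene(sceneDesc);
if (!scene) {
    sceneDesc.simType = NX_SIMULATION_SW;    // same code path, CPU instead
    scene = sdk->createScene(sceneDesc);
}
// From here the game adds actors and calls scene->simulate(dt) either way,
// which is what should make a like-for-like CPU vs. PPU comparison possible.
```

That's why a proper apples-to-apples benchmark really ought to be doable; it's mostly a question of developers (or AGEIA) exposing the switch.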
Hmmm - seems like the modern equivalent of the old-school maths co-processors.
Yes, these are a good idea, and correct me if I'm wrong, but isn't that $250 (Aus $) CPU I forked out for supposedly quite good at doing these sorts of calculations, what with its massive FPU capabilities and all? I KNOW that current CPUs have more than enough ability to perform the calculations for the physics engines used in today's video games. I can see why companies are interested in pushing physics add-on cards though...
"Are your games running crap due to inefficient programming and resource hungry operating systems? Then buy a physics processing unit add-in card! Guaranteed to give you an unimpressive performance benefit for about 2 months!" If these PPU's are to become mainstream and we take another backwards step in programming, please oh please let it be NVidia who takes the reigns... They've done more for the multimedia industry in the last 7 years than any other company...
CPUs are quite well suited for handling physics calculations for a single object, or even a handful of objects ... physics (especially game physics) calculations are quite simple.
When you have a few hundred thousand objects all bumping into each other every scene, there is no way on earth a current CPU will be able to keep up with PhysX. There are just too many data dependencies and too much of a bandwidth and parallel-processing advantage on AGEIA's side.
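For the single-object case the per-step math really is trivial. A hedged sketch of the sort of integration step a game loop performs (simplified, gravity only, no collision response):

```cpp
struct Body { float pos[3], vel[3]; };

// Semi-implicit Euler step for one body under gravity: a handful of
// multiply-adds per object, which any modern CPU eats for breakfast.
void integrate(Body& b, float dt) {
    const float g[3] = {0.0f, -9.81f, 0.0f};
    for (int i = 0; i < 3; ++i) {
        b.vel[i] += g[i] * dt;       // accumulate acceleration
        b.pos[i] += b.vel[i] * dt;   // advance position
    }
}
// The pain is interaction, not integration: naive collision detection between
// N bodies is O(N^2) pair tests, and even a good broad phase leaves a huge,
// highly parallel workload once N reaches the hundreds of thousands.
```

That interaction step is exactly the part that scales with bandwidth and parallelism rather than raw clock speed.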
As for where we will end up if another add-in card business takes off ... well that's a little too much to speculate on for the moment :-)
Maybe there need to be minimum ratios of PPU to CPU and GPU speeds that Ageia and others can use to make sure they hit the upper performance market.
The software looks OK; it looks like I might be going with software PhysX if available, along with software RAID, even though I would prefer to go with the hardware... if the bridged PCI bus didn't screw up my sound card with noise and wasn't so slow... maybe... but my thinking is this product needs to be faster and wider... PCIe x4 or something like it, like I read in earlier articles it was supposed to be.
PCI... forget it... 773 MHz... forget it... For me, 1.2 GHz and PCIe x4 would have this product rocking.
Any way to short this company?
They really screwed the pooch on speed for the upper end... they should rename their product a graphics decelerator for faster CPUs, and a poor man's accelerator... but what person who owns a $50 CPU and a $50 video card will be willing to spend the $200 or more Ageia wants for this?
Great idea, but like the blockheads who gave us RAID hardware... not fast enough.
AFAIK, RAID hardware becomes useful for heavy error-checking configurations like RAID 5. With RAID 0 and RAID 1 (or 10 or 0+1) there is no error-correction processing overhead. In the days of slow CPUs, this overhead could suck the life out of a system with RAID 5. Today it's not as big an impact in most situations (especially at the consumer level).
RAID hardware was quite a good thing, but only in situations where it is necessary.
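To illustrate where the RAID 5 overhead comes from: every write means recomputing parity, which in its simplest form is just an XOR across the stripe. This is a toy sketch of the idea, not how any real controller is implemented:

```cpp
#include <cstddef>
#include <cstdint>

// Toy RAID 5 parity: P = D0 ^ D1 ^ ... ^ D(n-1) over each stripe.
// RAID 0/1 merely split or mirror data, so there is nothing comparable to
// compute, which is why they cost the host CPU so little.
void computeParity(const uint8_t* const* dataDisks, std::size_t diskCount,
                   uint8_t* parity, std::size_t stripeBytes) {
    for (std::size_t i = 0; i < stripeBytes; ++i) {
        uint8_t p = 0;
        for (std::size_t d = 0; d < diskCount; ++d)
            p ^= dataDisks[d][i];    // XOR the same offset on every data disk
        parity[i] = p;
    }
}
// Doing this (plus the read-modify-write of old data and old parity) for every
// write is exactly the load a hardware RAID card takes off a slow host CPU.
```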
I was reading the article on FiringSquad (http://www.firingsquad.com/features/ageia_physx_re...) where Ageia responded to Havok's claims about where the credit is due, performance hits, etc., and on performance hits they said:
quote: We appreciate feedback from the gamer community and based partly on comments like the one above, we have identified an area in our driver where fine tuning positively impacts frame rate.
So they essentially responded immediately with a driver update that supposedly improves performance.
Driver support is certainly a concern with any new hardware, but if Ageia keeps up this kind of timely response to issues and performance with frequent driver updates, in my mind they'll have taken care of one of the major factors determining their success, swinging the balance of advantages to outweigh the obstacles to making it in the market.
I don't get it. The Ghost Recon videos WITHOUT PhysX look much more natural; the videos I have seen with it look pretty stupid. Everything that blows up or gets shot has the same little black pieces flying around. I have shot up quite a few things in my life and seen plenty of videos of real explosions, and that's not what it looks like.
The PC version of the new Ghost Recon game was supposed to be released alongside the Xbox 360 version but was delayed at the last minute for a couple of months. My guess is that the PhysX implementation was an afterthought while developing the game, and the delay came from the developers trying to figure out what to do with it.
I gotta tell you, as a scientist, this whole topic of putting 'physics' into games makes for an intensely amusing read. Of course I understand what's meant here, but when I first look at people's text in these articles/discussions, I'm always taken aback: "Wait? We need to *add* physics to stuff? Man, THAT's why my experiments have been failing!"
Anyway...
I wonder if the types of computations employed by our controversial little PhysX accelerator could be harvested *outside* of the gaming environment. As someone who both loves to game and would also love to telecommute to my lab, I'd ideally like to be able to handle both tasks using one machine (I'm talking about in-house molecular modeling, crystallographic analysis, etc.). Right now I have to rely on a more appropriate 'gaming' GPU at home, but hustle on in to work to use an essentially identical computer which has been outfitted with a Quadro graphics card to do my crazy experiments. I guess I'm curious whether it's plausible to make, say, a 7900GT + PhysX perform calculations comparable to a Quadro/FireGL-style workstation graphics setup. 'Cause seriously, trying to play BF2 on your $1500 Quadro card is seriously disappointing. But then, so is trying to perform realtime molecular electron density rendering on your $320 7900GT.
SO - anybody got any ideas? Some intimate knowledge of the difference between these types of calculations? Or some intimate knowledge of where I can get free pizza and beer? Ah, Grad School.
quote: I wonder if the types of computations employed by our controversial little PhysX accelerator could be harvested *outside* of the gaming environment.
It will be great for military use as well as the automobile industry.
Yes, dual-core CPUs aren't being properly utilised by games. Only a handful of games like COD2, Quake 4 etc. have big improvements with the dual-core CPU patches. I would rather game companies spent time trying to utilise dual core so the 2nd core gets to do a lot of the physics work, rather than sitting mostly idle. Plus the cost of the AGEIA card is too high. I'm already having trouble justifying buying a 7900GT or X1900 when I have a decent graphics card; I can't upgrade graphics every year. $300 for a physics card that only a handful of games support is too much. And the AT benchies don't show a lot of improvement.
The PhysX API claims to be multi-core compatible. What will probably happen is that the API and engine will load-balance the calculations between whatever resources are available, whether that's the PPU, the CPU, or better yet both.
Isn't it difficult to reprogram a game to make use of hardware-accelerated physics as well? If the GRAW tests perform poorly because the support is "tacked on", wouldn't that suggest that doing PhysX properly is at least somewhat difficult? Given that PhysX has a software and a hardware solution, I really want to be able to flip a switch and watch the performance of the same calculations running off the CPU. Also, if their PhysX API can be programmed to take advantage of multiple threads on the PhysX card (we don't have low-level details, so who knows?), it ought to be able to multithread calculations on SMP systems as well.
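On that SMP point: if the work is already expressed as independent batches for the card's internal parallelism, spreading the same batches across CPU cores is conceptually straightforward. A rough sketch of the idea, entirely my own illustration and nothing to do with how the PhysX software path actually schedules work:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Body { float pos[3], vel[3]; };

// Trivial placeholder update so the sketch is self-contained.
void integrate(Body& b, float dt) {
    for (int i = 0; i < 3; ++i) b.pos[i] += b.vel[i] * dt;
}

// Split the body array into one contiguous chunk per hardware thread.
void stepAllBodies(std::vector<Body>& bodies, float dt) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + cores - 1) / cores;
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&bodies, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i) integrate(bodies[i], dt);
        });
    }
    for (auto& w : workers) w.join();
    // Collision handling between chunks still needs synchronization, which is
    // where the PPU's dedicated hardware is claimed to have the edge.
}
```

So a software fallback that scales with cores seems entirely plausible; whether it keeps up with the card is the open question.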
I'd like to see the CellFactor people add the option of *trying* to run everything without the PhysX card. Give us an apples-to-apples comparison. Until we can see that sort of comparison, we're basically in the dark, and things can be hidden quite easily in the dark....
PPU support is simply achieved by using the Novodex physics engine or any game engine that uses the Novodex engine (e.g. Unreal Engine 3.0). The developers of GRAW decided to take a non-basic approach to adding PPU support, adding additional graphical effects for users of the PPU - this is similar to how Far Cry 64 advertises better graphics because it is 64-bit, as an advertising gimmick. GRAW seems to have issues in general and is not a very reliable test.
At QuakeCon '05, I had the opportunity to listen to the CEO of Ageia speak and then meet with Ageia representatives. They had a test system that was an Athlon 64 X2, a top-of-the-line video card, and a PPU. The demo that I was able to play looked to be an early version of Unreal Engine 3.0 (maybe Huxley?), and during the demo they could turn PPU support on and off. Every time we switched between the two, we would take notice of the CPU usage meter and the FPS, and there was a huge difference.
It will be really interesting to see what happens when Microsoft releases their physics API (think DirectX but for physics) - this should make everyone's lives better.
Having downloaded and viewed the videos, my reaction is "so what?". I guess the PhysX sequence has a bit more crap flying around, but it's also quite a lot slower (slower, probably, than just letting the CPU process the extra crap). It seems obvious that this game doesn't make proper use of the PhysX card, as I can't otherwise imagine that Ageia would have wasted so much time and money on it.
Well, looking at the videos, I know what I prefer. The framerate hit with PhysX is definitely too noticeable. I'm curious to see how this turns out in other games, and with driver revisions and newer versions of the hardware (and PCIe would probably be a good idea as well).
In any case, I read somewhere they weren't expecting these cards to evolve as fast as GPU's. Rather, it'd have a life cycle about the same as for soundcards. That seemed a bit encouraging to me. Having to fork out $300+ for yet another card every year or two didn't sound too attractive. But if I just have to buy it once, I guess it might catch on.
Just because something is true about the hardware doesn't mean it will ever come to fruition in the software. It isn't jumping to a conclusion to say that the PPU is *capable* of outperforming a quad-core CPU when it comes to physics calculations -- that is a fact of the architecture, not an opinion.
Had the first quote said something about games that use physics performing better on one rather than the other, that would have been jumping to conclusions.
The key here is the developers and how well the problem of video game physics maps to hardware that is good at doing physics calculations. There are a lot of factors.
quote: It isn't jumping to a conclusion to say that the PPU is *capable* of outperforming a quad-core CPU when it comes to physics calculations -- that is a fact of the architecture, not an opinion.
It's clearly an opinion. For it to be a fact, it would have to be verifiable. However, no one has made a quad-core x86 processor, and no game engine has been written to use one.
The poster simply stated his opinion and then blasted other people for having their own opinions, all without realizing how stupid it sounded, which is why it was such a funny post.
It is a simple fact that a dedicated processor for X will always outperform a general-purpose processor when doing X, from a hardware perspective.
Whether or not the software yields the same results is another question. Assuming that the PCI bus is not holding back performance of the PPU, it is incredibly unlikely that quad core CPUs will be able to outperform the PPU.
quote: It is a simple fact that a dedicated processor for X will always outperform a general-purpose processor when doing X, from a hardware perspective.
Clearly false. General-purpose processors sometimes beat specialized units. It depends on the resources available to each device and the specifics of the problem. Specialization is a trade-off. If your calculation has some very specific and predictable quality, you might design a custom processor that exploits some property of your problem effectively enough to overcome the billions Intel and AMD poured into developing a general-purpose core. But you may also end up with an expensive processor that's left behind by off-the-shelf components :)
Furthermore, this statement is hopelessly general. What if X is running Linux? Or any other application that x86 CPUs are already specialized for. Can you really conceive of a specialized processor for this task that didn't resemble a general-purpose CPU? Doubtful.
quote: Assuming that the PCI bus is not holding back performance of the PPU, it is incredibly unlikely that quad core CPUs will be able to outperform the PPU.
You're backpedaling. You said:
"Too bad not even quad cores will be able to outperform the PPU when it comes to physics calculations."
Now you're saying they might be able to do it. So much for jumping to conclusions?
People keep mentioning Cell Factor. Well and good that it uses more physics calculations as well as the PhysX card. Unfortunately, right now it requires the PhysX card and it's looking like 18 MONTHS (!) before the game ships - if it ever gets done. We might as well discuss how much better Havok FX is going to be in The Elder Scrolls V. :p
For the first generation, we're far more likely to see a lot of the "tacked on" approach as companies add rudimentary support to existing designs. We also don't have a way to even compare Cell Factor with and without PhysX. Are they hiding something? I mean, 15% faster under the AGEIA test demo using a high-end CPU isn't looking like much. If they allow CellFactor to run on software (CPU) PhysX calculations, get that to support SMP systems for the calculations, and we get 2 FPS in Cell Factor, that's great. It shows the PhysX card does something. If they allow all that and the dual-core chips end up coming very close to the same performance, we've got a problem.
Basically, right now we're missing real world (i.e. gaming) apples-to-apples comparisons. It's like comparing X800 to 6800 cards under games that only supported SM3.0 or SM1.1 - better shaders or faster performance, but X800 could have come *much* closer with proper SM2.0 support.
AMD & Intel could license the PhysX technology and include a dedicated PhysX (or generic multi-API) core on their processors and market them as game processors. Although some science and technology applications could make use of it as well. Being on-die would reduce latency and provide a huge amount of bandwidth between cores. Accessing system memory could slow things down but still be much faster than data transfers across a PCI bus.
The reason that framerates drop with the PhysX card installed is simply that the graphics card is given more complex effects to render.
At some point in the future, games will be coded with a physics API in mind. Interactions between the player and the game environment will be through this API, regardless of whether there is dedicated hardware available.
It's a truth universally acknowledged that graphics are better left to the graphics card - I don't hear anyone suggesting that the second core in a duallie system should perform all the graphics calculations. I think that in time, this will be true of physics too.
Once the first generation of games built from the ground up with a physics API in mind come out, this will sell like hot cakes.
The reason frame rates drop is that with the physics engine, the video card has more to render - in the grenade explosion images, the "with physics" image has tens of dumpster bits flying, while in the "non physics" one there are hardly a couple.
If the scenes had been of the same complexity, I wonder how much faster the Ageia card would be.
They also arrive at the conclusion that it is not a GPU bottleneck.
Furthermore, the only thing the PPU seems to do in GRAW is render a couple of particles, while not improving or accelerating *at all* the processing of physics. This particle effect could have been processed very well by a GPU.
I guess AnandTech didn't notice that the physics were exactly the same, which points to the somewhat illusory nature of the 'better physics'.
The Havok guys did miss a few things pointed out earlier in the comments. Some destructible objects do break off into real persistent objects under PhysX -- like the dumpster lid and car doors. Also, the debris in the explosions is physically simulated rather than scripted. While I certainly agree that the end effect in these cases has no impact on "goodness", it is actually doing something.
I'll certainly agree that "better physics" is kind of strange to think about. But it is really similar to how older games used to do 3D with canned animations. More realtime simulation opened up opportunities to do so many amazing things that just couldn't be done otherwise. This should extend well to physics.
Also, regardless of how (or how efficiently) the developers did it, there's no denying that the game feels better with the hardware accelerated aspects. Whether they could have done the same thing on the CPU or GPU, they didn't.
I'd still love to find a way to test the performance of this thing running the hardware physics on the CPU.
When I play it without the PhysX hardware, doors just seem to pop open -- not fly off ... though I haven't exhaustively blown up every object -- there could be some cases where these types of things happen in software as well.
Shoot up the tires, car doors, etc enough and they come off. Same with the garbage can lid, throw a nade, it'll blow right off the container, all without a PPU.
how is the game going to "feel better" with a PPU when it slams your framerate down from buttery smooth to choppy? sorry, i'll take the FPS over any degree of better simulated physics, ESPECIALLY on a budget PC. i mean, look at the numbers! opteron minimum fps at 8x6 was 46, and with the PPU hardware it dropped to 12 - over a 75% decrease!!
Note that the minimum framerate is much lower than the average -- with the majority of frames rolling along at average framerates, one or two frames that drop to an instantaneous 12-17 fps aren't going to make the game feel choppy. The benchmark was fairly short, so even outliers have an impact on the average -- further going to show that these minimum fps numbers are not anything to worry about. At the same time, they aren't desirable either.
Also, I would certainly not recommend this part to anyone but the hardcore enthusiast right now. People with slow graphics cards and processors would benefit much more by upgrading one or the other. In these early stages with little software support, the PPU will really only look attractive to people who already have very powerful systems and want something else to expand the capabilities.
If you watch the videos, there's no noticeable choppiness in the motion of the explosion, and I can say from gameplay experience that there's no noticeable mouse lag when things are exploding either. Thus, with the added visual effects, it feels better. Certainly a subjective analysis, but I hope that explains how I could get that impression.
You must be joking. Watching the videos, the PhysX one is WAY choppier compared to the software one. The PhysX video even halts for a split second, in a way that's more than noticeable; it's downright terrible.
And the graphics/effect of the extra debris? Negligible. I've seen more videos from this game (for example: http://www.pcper.com/article.php?aid=245 ) and the extra stuff with PhysX in this game is just not impressive or a big deal, and in some cases it's actually worse (like the URINATING walls and ground when shooting them). It's not realistic, it's not fun, not particularly cool, and it's slow.
I was hoping we made this clear in the article ...
While there is certainly more for the GPU to do, under a CPU-limited configuration (800x600 with lower quality settings) we see a very low minimum framerate and a much lower average when compared to software physics.
The drop in performance is much, much less significant when we look at a GPU-limited configuration -- if all those objects were bottlenecking on the graphics card, then giving them high quality textures and rendering them at a much higher resolution would show a bigger impact on performance.
Tie that in with the fact that both the CPU and GPU limited configurations turn out the same minimum framerate, and we really can conclude that the bottleneck is somewhere other than the GPU.
It becomes more difficult to determine whether the bottleneck is at the CPU (game/driver/API overhead) or the PPU (PCI bus/object generation/actual physics).
They should have waited for CellFactor's release as a launch lineup for Ageia. First impressions are important, and the glued-on approach of GRAW is nothing but negative publicity for Ageia. Not everyone knows that PhysX was nothing more than a patched-on addition to Havok in GRAW, and many will think that this is how the cards will turn out.
If you can't do it right then don't do it at all. GRAW's implementation was a complete failure, even for a first generation product.
As much as I would want to give them the benefit of the doubt... with what you say, it's that simple really. Most gamers (at least the ones I know) want worry-free performance, and if spending extra money on hardware results in worse performance then this product will be short-lived.
I watched the non-game PhysX demos and they looked really damn cool, but they really should have worked on making a PCI-E version from the start... boards with PCI slots are already becoming dated, and those that have 16x slots for graphics have at least the small 1x slot!
I wouldn't say it was a complete failure. IMO the benchmarks were quite disappointing, and this was compounded by the lack of effects configurability in the game, but the videos were quite compelling. If you look at the dumpster (?), you can see that not only does the lid blow off but it bends and crumples. If we see more than just canned animations in better games (Cell Factor?), then this $300 should be worth its cost to high-end gamers. I'd say Ageia is off to a rough start, not an implosion.
There are quite a few other things PhysX does in the game as well -- though they are all kinda "tacked on" too. You noticed the lid of the dumpster, which only pops open under software. In hardware it is a separate object that can go flying around depending on the explosion. The same is true of car doors and other similar parts -- under software they'll just pop open, but with hardware they go flying if hit right.
It is also interesting to add that the explosions and such are scripted under software, but much of it becomes physically simulated under hardware. While this fact is kinda interesting, it really doesn't matter to the gamer in this title. But for other games that make more extensive use of the hardware, it could be quite useful.
That should be very cool. I was playing F.E.A.R. recently and decided to rake my SMG over a bunch of glass panes in an "office" level. Initially, it looked and sounded good because I have the settings cranked up reasonably high, but then I noticed all the glass panes were breaking in exactly the same way. Rather disappointing...
In your introduction you might have mentioned there are TWO partners for bringing this technology to market: Asus and BFG Tech.
Both have shipping boards. I wonder if they perform identically or if there is some difference.
I agree that PCI could be a bottleneck, but I'm more concerned that putting a lot of traffic on the PCI bus will impair my OTHER PCI devices.
PCIe x1 would have been much more sensible. I hope that they won't need a respin to add PCIe functionality, but fear this may be the case.
I agree CellFactor looks to make heavier use of physics, so it may make the difference with/without the PPU more noticeable/measurable.
I also wonder how much the memory size on the PhysX board matters? Maybe a second-gen board could double that to 256MB. I'm also interested in whether the PPU could be given some abstraction layer and programmed to do non-physics useful calculations, as is being done on graphics cards now. This might speed its adoption.
I agree with the post that in volume, this kind of chip could find its way onto 3d graphics cards for gaming. As BFG make GPU cards, they might move that direction.
I want the physics card to be equivalent to a sound card in terms of standardization and how often I feel compelled to upgrade it. In other words, it would be upgraded far less often than graphics cards are. Putting the physics hardware on a graphics card means you would throw away (or sell at a loss) perfectly good physics capability just to get a faster GPU, or get a second card to go to SLI/Crossfire. This is a bad idea for all the same reasons you'd say putting sound card functionality on a graphics card is a bad idea.
Yes, you could do all kinds of nice calculations on the physics board. However, moving geometry data from the video card to the physics board to be calculated and then moving it back to the video card would be shooting yourself in both feet.
I think this could run well as an accelerator for rendering images or for 3D applications... how soon until 3DStudio, PhotoShop and so on take advantage?
quote: I hope that they won't need a respin to add pcie functionality but fear this may be the case.
The pre-production cards had both PCI and PCIe support at the same time. You simply flipped the card depending on which interface you wanted to use. So I believe that the PPU offers native PCIe support and that BFG and ASUS could produce PCIe boards today if Ageia would give them permission to.
quote: I agree with the post that in volume, this kind of chip could find its way onto 3d graphics cards for gaming.
Bad idea. Putting the PPU on board with a GPU means higher costs all around (longer PCBs, possibly more layers, more RAM). Also, the two chips would be fighting for bandwidth, which is never a good thing.
Actually, putting this on the GPU core would be much cheaper. You'd save by getting rid of all the duplicated hardware: DRAMs, memory controller, power circuitry, PCI bridge, cooling, PCB, etc.
Not to mention you'd likely gain a lot of performance by having a PCI-E 16x slot and an on-die link to the GPU.
I wonder how much of the 2TB/s internal bandwidth will be used on the Ageia card... if enough of it, then the video card will have very little bandwidth remaining for its operations (graphic rendering). However, if the cooling really needs that heat sink/fan combo, and the card really needs that power connector, you won't be able to put one on the highest end video cards (for power and heat reasons).
Cellfactor to be released in Q4 2007.. maybe... Your PhysX is going to be a little old by the time the full game is released...Should be up to quad-core CPUs and lots of cycles available for physics calculations by that time.
I have recently been playing Oblivion a lot, like several million others. The Havok software physics are just great --- and you NEED the highest-end graphics for optimum visual experience in that game --- see the current Anandtech article. Sorry, I care little about (er) "better particle effects" or "more realistic explosions", even when I play Far Cry. In fact, from my experiences with BF2 and BF1942 I find them more than adequately immersive with their great scenery graphics and their CURRENT physics effects -- even the old and noble BF1942.
On single-player games, I would far prefer to see additional hardware, or compute cycles, being directed at advanced AI rather than physics. What's the point of fancy physics effects if the AI enemy has about as much built-in intelligence as a lump of Swiss cheese? It sure does not help the game's immersive experience at all. And tightly-scripted AI just does not work in open-area scenarios (cf. Unreal 2 and the dumb enemies easily sneaked up on from behind -- somebody forgot to script that eventuality, amongst many others that can occur in an open play area). The successful tightly-scripted single-player shooters like Doom 3, HL2, FEAR etc. all have overt or disguised "corridors". So the developers of open-area games like Far Cry or Oblivion chose an algorithmic *intelligent-agent AI* approach, with a simple overlay of scripting to set some broad behavioral and/or location boundaries. A distinct move in the right direction, but there are some problems with the AI implementation in both games. More sophisticated AI algorithms will require more compute power, which, if performed on the CPU, will need to be traded off against cycles available for graphics. Dual core will help, but a general-purpose DSP might help even more... they are not expensive and easily integrated onto a motherboard.
Back to the immediate subject of the Ageia PPU and physics effects:-
I am far more intrigued by Havok's exercises with Havok FX harnessing both dual-core CPU power and GPU power in the service of physics emulation. Would be great to have action games with a physics-adjustable slider so that one can trade off graphics with physics effects in a seamless manner, just as one can trade-off advanced-graphics elements in games today.... which is exactly where Havok is heading. No need to support marginal added hardware like the PhysX. Now, if the PhysX engine was an option on every high-end motherboard, for say not more than $50 extra, or as an optional motherboard plug-in at say $75, (like the 8087 of yore) and did not take up any additional precious peripheral slots, then I would rate its chances of becoming main-stream to be pretty high. Seems as if Ageia should license their hardware DESIGN as soon as possible to nVidia or ATi at (say) not more than $15 a copy and have them incorporate the design into their motherboard chip-sets.
The current Ageia card has 3 strikes against it for cost, hardware interface (PCI) and software-support reasons. The PhysX PPU certainly has NO hope at all as a peripheral device as long as it stays in PCI form; it must migrate to PCIe asap. Remember that an x1 or x4 PCIe card will happily work in a PCIe x16 slot, and there are still several million SLI and Crossfire motherboards with empty 2nd video slots. Plus, even on a dual-SLI system with dual-slot-width video cards and an audio card present, it is more likely you'll find a PCIe x1 or x4 slot vacant that does not compromise the video card ventilation than a PCI slot that is not either covered up by the dual-width video cards or completely blocking airflow to one or other of the video cards.
So if a PCIe version of the PhysX ever becomes available... you will be able to sell your PCI version... at about the price of a doorstop. Few will want the PCI version if a used PCIe version is also available.
Hard on the wallet being an early adopter at times.....
The developers did a poor job when it came to how they implemented PPU support in GRAW.
CellFactor is a MUCH better test of what the PhysX card is capable of. The physics in CellFactor are MUCH more intense. When blowing up a load of crap, my fps only drops by 2 at the most, and that is mainly because my 9800 Pro is struggling to render the actual effects of a grenade explosion.
quote: It seems most likely that the slowdown is the cost of instancing all these objects on the PhysX card and then moving them back and forth over the PCI bus and eventually to the GPU. It would certainly be interesting to see if a faster connection for the PhysX card - like PCIe X1 - could smooth things out....
You've certainly got a point there. Seeing as how a physics card is more like a co-processor than anything else, the PCI bus is probably even more of a limitation than it would be with a graphics card, where most of the textures can simply be loaded into the framebuffer beforehand.
I still believe that the best option is to piggyback PPUs onto graphics cards. Not only would this allow them to share the MUCH higher bandwidth of a PCIe x16 slot, but it would also mean nearly instant communication between the physics chip and the GPU. The two chips could share the same framebuffer (RAM), as well as a cooling solution. This would lower costs significantly and increase performance.
Combo boards, while not impossible to make, are going to be much more complex. There could also be power issues, as PhysX and today's GPUs require external power. It'd be cool to see, and it might speed up adoption, but I think it's unlikely to happen given the ROI to board makers.
The framebuffer couldn't really be shared between the two parts either.
On page 2: "A graphics card, even with a 512-bit internal bus running at core speed, has less than 350 Mb/sec internal bandwidth." - er, I'm guessing that should read 350Gb/sec?
Thanks for the quick response - I've just finished the article. It's good stuff, interesting analysis, and commentary and general subtext of "nice but not essential" is extremely useful.
One random thing - is images.anandtech.com struggling, or is my browser just being a pain? I've been having trouble seeing a lot of the images in the article, needing various reloads to get them to show etc.
AnandTech images don't work properly if you disable referer logging (pretty annoying); could that be the root of your problem? (AdBlock disabling it or something)
Seems to be doing fine from our end. If you're at a company with a firewall or proxy, that could do some screwy stuff. We've also had reports from users that have firewall/browser settings configured to only show images from the source website - meaning since the images aren't from www.anandtech.com, they get blocked.
As far as I know, both the images and the content are on the same physical server, but there are two different names. I could be wrong, as I don't have anything to do with the system administration. :)
Weird, seems to be fine now I've disabled AdBlock in Firefox... that'll teach me. It's not like I block any of AnandTech's ads anyway, apart from the intellitxt stuff - that drives me NUTS.
That's why there is an issue. The game was very poorly coded. So when PhysX is enabled, it's using two physics layers: Havok and PhysX vs. one or the other.
Hence the conclusion. The added effects look nice, but performance currently suffers. Is it poor coding or poor hardware... or a combination of both? Being the first commercial game to support the AGEIA PPU, it's reasonable to assume that the programmers are in a bit of a learning stage. Hopefully, they just didn't know how to make optimal use of the hardware options. Right now, we can't say for sure, because there's no way to compare the "non-accelerated" physics with the "accelerated" physics. We've seen games that look less impressive and tax GPUs more in the past, so it may simply be a case of writing the code to make better use of the PPU. I *don't* think having CrossFire would improve performance much at present, at least in GRAW, because the minimum frame rates are clearly hitting a PPU/CPU bottleneck.
quote: We're still a little skeptical about how much the PhysX card is actually doing that couldn't be done on a CPU -- especially a dual core CPU. Hopefully this isn't the first "physics decelerator", rather like the first S3 ViRGE 3D chip was more of a step sideways for 3D than a true enhancement.
That's a good question. It's hard to imagine a PCI bus device keeping up with a dual-core processor when one can only talk over a 133MB/s bus with hundreds or even thousands of ticks' worth of latency, while the other can talk in just a few clocks over a bus running at 10+ GB per second. I imagine all those FUs spend most of their time waiting on synchronization traffic over the PCI bus.
It will be interesting to see what a PCI-E version of this card can do.
I'd be curious to hear how this performs with SLI and CrossFire setups. Also, do motherboards offer enough space for 2 GPUs, and two PCI cards (sound and physics)? It looked like the Asus board meant for the Core Duo might.
If mobo manufacturers are keeping this in mind, it may make AM2 and Conroe boards even more interesting!
I really hope the price of these things drops by A LOT, especially if games in the future might "require" a PhysX card. I have to pay $400 for a top-of-the-line 7900GTX and then another $300 for one of these??? Argh! Might as well pick up console gaming now!
Honestly, I'm hoping these cards just quietly die. I love the idea of game physics, but the pricing is murder. These things may be pocket change for some of the "enthusiasts", but as soon as you require another $300 card just to play a video game, the Joe Sixpacks are going to drop PC gaming like it's hot. Do you have any idea how much bitching you hear from most of them when they're told that their $100 video card isn't fast enough to play a $50 game? Or worse yet, that their "new" PC doesn't even have a graphics card, and they have to throw more money into it just to be eligible for gaming? The multiplayer servers could be awfully empty when a required physics card alone costs more than a new PS3. :\
That's not how it works. New types of hardware are initially luxury items, both in the sense that they are affordable only by a few and in that they are way overpriced. When the rich adopt these things, the middle class ends up wanting them, and manufacturers find ways to bring the prices down by scaling down the hardware or using technological improvements. So in other words, pipe down, let those who can afford them buy them, and in a few years we may see $50-75 versions for ordinary gamers.
I wish I could find the article now, but a while back there was an interview with Ageia where it was said that prices would span a similar range to video cards. So yes, there will probably be low-end, midrange, and high-end cards.
What I'm concerned about is the people who are already fighting a budget to game. These are the high-school kids with little to no income, the 40-year-old with two kids and a mortgage, and the casual gamer who's probably just as interested in a $170 PS2. What happens when they have to buy not only an entry-level $150 video card, but also a $150 physics card? I can only imagine that if gaming were currently limited to only those people with a $300 budget for a 7900GT or X1800 XL, we'd see PC gaming become a very elite selection for "enthusiasts" only.
Hopefully we can get some snazzy physics without increasing the cost of admission so much, either by taking advantage of the dual-core CPUs that are even now worming their way into mainstream PCs, or some sort of new video card technology.
Game developers have to eat, too. They won't produce games requiring extravagant hardware. Your fear is irrational. When you go into a doctor's office to get a shot, do you insist that the needle be sterilized right in front of your eyes before it comes anywhere near your skin? No. The doctor wants to eat, so he's not going to blow his education and license by reusing needles...
Well, obviously it won't be a problem if it's not required. If it becomes an unnecessary enhancement card, like an X-Fi, then all is well. All I've been saying is that if it DOES become a required card, there is the possibility of monetary problems for the bread-and-butter casual gamers who fill the servers.
I for one will not be an early adopter of one of these. First-generation hardware is always cool to look at, but it's almost always something you do not want to own. DX9 video cards are a great example: the ATi 9700 PRO was a great card if you played DX8 games, but by the time software rolled around to take advantage of DX9 hardware, the 9700 PRO just was not truly cut out to handle it. The 9700 PRO lacked a ton of features that second-generation DX9 cards had. My point is you should wait for the revision/version 2.0 of this card and you won't regret it. By then programs should be on store shelves to take full advantage of PhysX hardware.
I think it handled the first-gen DX9 games relatively well. Far Cry was a great example, as it played quite well on my 9700 Pro (which lasted me till Sept. '05, when I upgraded to a refurbished X800 Pro). It was also able to run most games at max details (at crappy framerates, but it was able to do it!). I think the 9700 Pro offered a lot for its time and was able to play the 1st-gen DX9 games well enough.
Sure, the 9x00 series could handle DX9, just not at maxed-out settings. I played Far Cry on a 9800 XT, and it ran smoothly at medium-high settings. But the PhysX card is just plain disappointing, since it causes such a performance hit in GRAW, even at CPU-limited resolutions. Either the developers did not code the physics properly, or the PhysX card is not all that it's hyped up to be. We'll need more games using the PPU to know for sure.
I bought a 9700 Pro; I saw it as so far ahead of the Ti4600, with 4x the performance when AA was applied. It was the first card to play games with AA and AF, even if they weren't DirectX 9 games. BUT this Ageia thing seems a little pointless to me. I'd actually rather have 2 ATI or 2 nVidia cards; at least that gives you an option of less physics or a better graphics experience, and it comes in handy for those 98% of games that aren't Ageia-compatible yet.
Clauzii - Thursday, May 11, 2006 - link
ASUS PhysX P1:
Processor Type: AGEIA PhysX
Bus Technology: 32-bit PCI 3.0 Interface
Memory Interface: 128-bit GDDR3 memory architecture
Memory Capacity: 128 MByte
Memory Bandwidth: 12 GBytes/sec.
Effective Memory Data Rate: 733 MHz
Peak Instruction Bandwidth: 20 Billion Instructions/sec
Sphere-Sphere collision/sec: 530 Million max
Convex-Convex(Complex) collisions/sec.: 533,000 max
jkay6969 - Monday, May 8, 2006 - link
I have read all your posts with great interest, I feel that some very good points are being made, so here's my 2 cents worth ;-)I believe the 'IDEA' of having a dedicated PPU in your increasingly expensive monster rig is highly appealing, even intoxicating and I believe this 'IDEA' coupled with some clever marketing will ensure a good number of highly overpriced, or at least expensive, sales of this mystical technology in it's current (ineficient) form.
For some, the fact that it's expensive and also holds such high promises will ensure it's place as a 'Must have' component for the legions of early adopters. The brilliant idea of launching them through Alienware, Falcon Northwest and the top of the line Dell XPS600 systems was a stroke of marketing genius as this adds to the allure of owning one when they finally launch to the retail market...If it's good enough for a system most of us can never afford but covet none the less it's damn well good enough for my 'monster RIG'. This arrangement will allow the almost guaranteed sales of the first wave of cards on the market. I have noticed that some UK online retailers have already started taking pre-launch orders for the £218 OEM 128MB version I just have to woner how many of these pre-orders have actually been sold?
The concept of a dedicated PPU is quite simply phenominal, We spend plenty of money upgrading our GPU's, CPU's and quite recently Creative have brought us the first true APU (X-Fi series) that it makes sense for there to be a dedicated PPU and berhaps even an AiPU to follow.
The question is, will these products actually benefit us to the value of their cost?
I would say that a GPU, or in fact up to 4 GPU's running over PCIe x32 (2xPCIe x16 channels) become increasingly less value for money the more GPU's added to the equation. i.e. a 7900GTX 512MB at £440 is great bang for the buck compared to Quad SLI 7900GTX 512MB at over £1000. The framerates in the Quad machine are not 4x the single GPU. Perhaps this is where GPU's could trully be considered worthy of nVidia or ATI's Physics SLI load balancing concept. SLI GPU's are not working flat out 100% of the time...Due to the extremely high bandwidth of Dual PCIe x16 ports there should be a reasonable amount of bandwidth to spare on Physics calculations, perhaps more if Dual PCIe x32 (or even quad x16) Motherboards inevitably turn up. I am not saying that GPU's are more efficient than a DEDICATED and designed for PPU, just that if ATI and nVidia decided the market showed enough potential, they could simply 'design in' or add PPU functionality to their GPU cores or GFX cards. This would allow them to tap into the extra bandwidth PCIe x16 affords.
The Ageis PhysX PPU in it's current form runs over the PCI bus, a comparitively Narrow bandwicth bus, and MUST communicate with the GPU in order for it to render the extra particles and objects in any scene. This in my mind would create a Bottleneck as it would only be able to communicate at the bandwidth and speed afforded by the Narrow bandwidth and slower PCI bus. The slowest path governs the speed of even the fastest...This would mean that adding a dedicated PPU, even a very fast and efficient one, would be severely limited by the bus it was running over. This phenomenon is displayed in all the real world benchmarks I have seen of the Ageis PhysX PPU to date, The framerates actually DROP when the PPU is enabled.
To counter this, I believe, Ageis through ASUS, BFG and any other manufacturing partner they sign up with will have to release products designed for the PCIe bus. I believe this is what Ageis knows as the early manufacturing samples were able to be installed in the PCI bus as well as the PCIe bus (although not at the same time ;-) ). I believe the PCI bus was chosen for launch due to the very high installed user base of PCI motherboards, every standard PC I know of that would want a PPU in their system. I belive this is a mistake, as the users most likely to purchase this part in the 'Premium price' period would likely have PCIe in their system, or at least would be willing to shell out an extra £50-£140 for the privelage. Although I could be completely wrong in this as it may allow for some 'Double Selling' as when they release the new and improved PCIe version, the early adopters will be forced to buy into it again at a premium price.
This leads me neatly onto the price. I understand that Ageis, quite rightly, are handing out the PhysX SDK freely, this is to allow maximum compatibilty and support in the shortest period of time. This does however mean that the end user, who purchases the card in the beginning will have to pay the full price for the card...£218 for the 128MB OEM version. As time goes by and more units are sold, the installed userbase of the PPU will grow and the balance will shift, Ageis will be able to start charging the developers to use their 'must have' Hardware Physics support in their games/software and this will subsidise the cost of the card to the end user, therefore making them even more affordable to the masses and therefore making it a much more 'Must Have' for the developers. This will take several generations of the PPU before we feel the full impact of this I believe.
If ATI and nVidia are smart, they can capitalise on their large installed userbase and properly market the idea of hardware physics 'for free' with their SLI physics. They may be able to throw a spanner in the works for Ageia while it attempts to gain market share. This may benefit the consumer, although it may also knock Ageia out of the running, depending on how effective ATI and nVidia's driver-based solution first appears. It could also prompt a swift buyout by either ATI or nVidia, like nVidia did with 3DFX.
Using the CPU for physics, even on a multicore CPU, is in my opinion not the way forward. The CPU is not designed for physics calculations, and from what I hear it is not (comparatively) very efficient at performing them. A dedicated solution will always be better in the long run. It would free up the CPU to run the OS, handle AI calculations, antivirus, firewall and background applications, and generally keep the entire system secure and stable. Multicore will be a blessing for PCs and consoles, but not for such a specific and difficult (for a CPU) task.
"Deep breath" ;-)
So there you have it, my thoughts on the PPU situation as it stands now and into the future. Right now I will not be buying into the dream, but simply keeping the dream alive by closely watching how it develops until what I believe is the 'Right Time' comes. £218 for an unproven, generally unsupported, and possibly seriously flawed incarnation of the PPU dream is not, in my opinion, the Right Time. Yet ;-)
JKay6969
DeathBooger - Monday, May 8, 2006 - link
The CellFactor demo is available on Ageia's website now. If you try to play it without a PhysX card you get an error message. However, the demo includes a game editor. If you open the editor, you can open up the demo level and play the game without the PhysX card. You can't play with bots in the editor, nor can you see cloth or fluid dynamics, but everything else is present; it's like playing the game normally otherwise. I was able to play the game inside the editor with no performance problems. I have a dual core AMD with a single X1900XT and 2GB of RAM. CPU usage does go up to 80% when blowing things up and whatnot, but it's a smooth experience with no noticeable slowdown. Graphics-wise, everything is present inside the editor, including dynamic lighting and normal mapping.

If Ageia wanted to show people how well the PPU works and how much it is needed in a game like CellFactor, they should have allowed you to play the game normally without a PhysX card. Since they didn't do that, it makes me think it's not actually needed.
AnnihilatorX - Sunday, May 7, 2006 - link
People don't want another separate card, mainly because of the slot problem. But if the card uses PCIe x1, which I bet most people with a new motherboard have nothing plugged into, that doesn't make it unfavourable. And as mentioned, the heat and noise of the card are low.
hellcats - Saturday, May 6, 2006 - link
If Ageia really want to spur additional support for their PPU card then they should develop a lower-level API than Novodex, something similar to OpenGL or DirectX for graphics cards. This would encourage other middleware developers to support the PPU for the "heavy lifting". Then Havok and other physics middleware developers would be working with Ageia and not against them. The current situation is as if nVidia were to provide a proprietary game engine as the only way to access the power of the GPU; only the game companies that agreed to support that engine would get graphics acceleration.
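To make the 'lower-level than Novodex' idea a bit more concrete, here is a rough sketch of the shape such an API could take -- primitive bodies, impulses and simulation steps, analogous to how OpenGL exposes vertices rather than a whole engine. Every name here (phxCreateBody, phxStep, etc.) is invented purely for illustration and is not part of any real AGEIA interface; the trivial software implementation exists only so the example runs.

// Hypothetical low-level PPU API sketch -- all names are invented for
// illustration, not AGEIA's actual interface. The point is the shape of the
// API: primitives a middleware vendor could build its own engine on top of.

#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct RigidBody {           // minimal state the "driver" would track per body
    float mass;
    Vec3  position;
    Vec3  velocity;
};

static std::vector<RigidBody> g_bodies;         // stand-in for PPU-side memory

int phxCreateBody(float mass, Vec3 pos) {       // hypothetical call
    g_bodies.push_back({mass, pos, {0, 0, 0}});
    return static_cast<int>(g_bodies.size()) - 1;   // handle = index
}

void phxApplyImpulse(int body, Vec3 impulse) {  // hypothetical call
    RigidBody& b = g_bodies[body];
    b.velocity.x += impulse.x / b.mass;
    b.velocity.y += impulse.y / b.mass;
    b.velocity.z += impulse.z / b.mass;
}

void phxStep(float dt) {                        // hypothetical call
    for (RigidBody& b : g_bodies) {             // integrate positions
        b.position.x += b.velocity.x * dt;
        b.position.y += b.velocity.y * dt;
        b.position.z += b.velocity.z * dt;
    }
}

int main() {
    int crate = phxCreateBody(10.0f, {0, 0, 0});
    phxApplyImpulse(crate, {50, 0, 0});         // shove it along +x
    for (int frame = 0; frame < 3; ++frame) {
        phxStep(1.0f / 60.0f);
        std::printf("frame %d: x = %.3f\n", frame, g_bodies[crate].position.x);
    }
}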
Walter Williams - Sunday, May 7, 2006 - link
Microsoft is currently working on a physics API that will be like their DirectX, but for physics.
DerekWilson - Sunday, May 7, 2006 - link
additionally, AGEIA is working with Havok to try to get them to include support for the PhysX hardware in their product. when we spoke with AGEIA about it, we learned that it's more important to them to get software support than to create an SDK business. they want PhysX acceleration in Havok very much, but Havok is doing pretty well on its own at the moment and is being a little more cautious about getting involved.
as is generally the case, small companies don't like to have their success tied up in the success of other small companies. Havok needs to decide if the ROI is worth it for them at this point, while there wouldn't really be a downside for AGEIA to let them include support.
Celestion - Saturday, May 6, 2006 - link
If the 360 doesn't have the PhysX chip in it, what can the PhysX SDK be used for?
JarredWalton - Saturday, May 6, 2006 - link
A software SDK (Software Development Kit) is a set of libraries that people can use. So rather than write their own physics routines, developers can use PhysX, just like they can use Havok. The way we understand it, if done properly the PhysX libraries can run on either the CPU or the PPU, though the CPU should be slower. Right now, we don't have any 100% identical comparison other than AGEIA's test app, which doesn't really appear complex enough to be truly indicative of maximum performance potential.
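As a rough illustration of the 'runs on either the CPU or the PPU' point, the sketch below shows the kind of dispatch a game could do at startup: ask for hardware acceleration and silently fall back to a CPU solver if no card is found. The class and function names are hypothetical -- this is not the real NovodeX/PhysX API, just the general idea of a common interface over two backends.

// Minimal fallback sketch -- hypothetical names, not AGEIA's actual SDK.

#include <cstdio>
#include <memory>

class PhysicsSolver {                       // common interface for both paths
public:
    virtual ~PhysicsSolver() = default;
    virtual void step(float dt) = 0;
    virtual const char* name() const = 0;
};

class PpuSolver : public PhysicsSolver {    // would drive the add-in card
public:
    void step(float) override { /* submit work over the PCI bus to the PPU */ }
    const char* name() const override { return "hardware (PPU)"; }
};

class CpuSolver : public PhysicsSolver {    // plain software fallback
public:
    void step(float) override { /* run the same maths on the host CPU */ }
    const char* name() const override { return "software (CPU)"; }
};

bool ppuPresent() { return false; }         // stub: pretend no card is installed

std::unique_ptr<PhysicsSolver> createSolver() {
    if (ppuPresent())
        return std::make_unique<PpuSolver>();
    return std::make_unique<CpuSolver>();   // identical API, slower backend
}

int main() {
    auto solver = createSolver();
    std::printf("physics running on: %s\n", solver->name());
    solver->step(1.0f / 60.0f);
}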
Celestion - Sunday, May 7, 2006 - link
Thanks
Ickus - Saturday, May 6, 2006 - link
Hmmm - seems like the modern equivalent of the old-school maths co-processors. Yes, these are a good idea, but correct me if I'm wrong: isn't that $250 (AUS) CPU I forked out for supposedly quite good at doing these sorts of calculations, what with its massive FPU capabilities and all? I KNOW that current CPUs have more than enough ability to perform the calculations for the physics engines used in today's video games. I can see why companies are interested in pushing physics add-on cards though...
"Are your games running crap due to inefficient programming and resource hungry operating systems? Then buy a physics processing unit add-in card! Guaranteed to give you an unimpressive performance benefit for about 2 months!" If these PPU's are to become mainstream and we take another backwards step in programming, please oh please let it be NVidia who takes the reigns... They've done more for the multimedia industry in the last 7 years than any other company...
DerekWilson - Saturday, May 6, 2006 - link
CPUs are quite well suited to handling physics calculations for a single object, or even a handful of objects ... physics (especially game physics) calculations are quite simple. when you have a few hundred thousand objects all bumping into each other every scene, there is no way on earth a current CPU will be able to keep up with PhysX. There are just too many data dependencies and too much of a bandwidth and parallel processing advantage on AGEIA's side.
As for where we will end up if another add-in card business takes off ... well that's a little too much to speculate on for the moment :-)
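For a feel of why object count matters so much, the toy program below runs the most naive possible broad phase, testing every pair of objects; the work grows with the square of the object count, so doubling the objects roughly quadruples the pair tests. Real engines prune pairs spatially, but densely interacting scenes still scale far worse than linearly, which is where wide parallel hardware earns its keep. The counts and object sizes are arbitrary, purely for illustration.

// Naive O(n^2) pair testing -- illustration only, not any engine's real code.

#include <cstdio>
#include <cstdlib>
#include <vector>

struct Sphere { float x, y, z, r; };

static bool overlaps(const Sphere& a, const Sphere& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float rr = a.r + b.r;
    return dx * dx + dy * dy + dz * dz < rr * rr;   // squared-distance test
}

int main() {
    const int counts[] = {1000, 2000, 4000};
    for (int n : counts) {
        std::vector<Sphere> objs(n);
        for (Sphere& s : objs)                      // scatter objects randomly
            s = { float(std::rand() % 100), float(std::rand() % 100),
                  float(std::rand() % 100), 0.5f };

        long pairTests = 0, contacts = 0;
        for (int i = 0; i < n; ++i)                 // every pair gets tested
            for (int j = i + 1; j < n; ++j) {
                ++pairTests;
                if (overlaps(objs[i], objs[j])) ++contacts;
            }
        std::printf("%5d objects -> %10ld pair tests, %ld contacts\n",
                    n, pairTests, contacts);
    }
}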
thestain - Saturday, May 6, 2006 - link
Just my opinion, but this product is too slow. Maybe there need to be minimum ratios to CPU and GPU speeds that Ageia and others can use to make sure they hit the upper performance market.
Software looks OK; looks like I might be going with software PhysX if available, along with software RAID, even though I would prefer to go with the hardware... if the bridged PCI bus didn't screw up my sound card with noise and wasn't so slow... maybe... but my thinking is this product needs to be faster and wider... PCIe x4 or something like it, like I read in earlier articles it was supposed to be.
PCI... forget it... 773 MHz... forget it... for me... 1.2 GHz and PCIe x4 would have this product rocking.
any way to short this company?
They really screwed the pooch on speed for the upper end... they should rename their product a graphics decelerator for faster CPUs, and a poor man's accelerator... but what person who owns a $50 CPU and a $50 video card will be willing to spend the $200 or more Ageia wants for this?
Great idea, but like the blockheads who gave us RAID hardware... not fast enough.
DerekWilson - Saturday, May 6, 2006 - link
afaik, raid hardware becomes useful for configurations with heavy parity work like raid 5. with raid 0 and raid 1 (or 10 or 0+1) there is no parity processing overhead. in the days of slow CPUs, that overhead could suck the life out of a system with raid 5. today it's not as big an impact in most situations (especially at the consumer level). raid hardware was quite a good thing, but only in situations where it is necessary.
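For anyone wondering where that RAID 5 overhead actually comes from, the sketch below shows the parity work a software RAID 5 layer does on every stripe write, which RAID 0/1 avoids entirely. The 3-data-disk layout and 4KB block size are assumptions chosen just for the example.

// RAID 5 parity illustration: parity is the byte-wise XOR of the data blocks,
// so every stripe write costs CPU cycles that RAID 0/1 never spends.

#include <array>
#include <cstdint>
#include <cstdio>

constexpr int kBlockSize = 4096;                 // bytes per block (assumed)
using Block = std::array<std::uint8_t, kBlockSize>;

// Losing any one disk lets the missing block be rebuilt by XORing the
// survivors with the parity block -- that is the whole point of the overhead.
Block computeParity(const Block& d0, const Block& d1, const Block& d2) {
    Block p{};
    for (int i = 0; i < kBlockSize; ++i)
        p[i] = d0[i] ^ d1[i] ^ d2[i];
    return p;
}

int main() {
    Block a{}, b{}, c{};
    a[0] = 0x12; b[0] = 0x34; c[0] = 0x56;       // toy data
    Block parity = computeParity(a, b, c);
    std::printf("parity[0] = 0x%02X\n", parity[0]);    // 0x12 ^ 0x34 ^ 0x56 = 0x70

    // Rebuild block b as if its disk had failed.
    std::uint8_t rebuilt = parity[0] ^ a[0] ^ c[0];
    std::printf("rebuilt b[0] = 0x%02X\n", rebuilt);   // 0x34 again
}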
Cybercat - Saturday, May 6, 2006 - link
I was reading the article on FiringSquad (exact page: http://www.firingsquad.com/features/ageia_physx_re...) where Ageia responded to Havok's claims about where the credit is due, performance hits, etc. On the performance hits, they essentially responded immediately with a driver update that supposedly improves performance.
http://ageia.com/physx/drivers.html
Driver support is certainly a concern with any new hardware, but if Ageia keeps up this kind of timely response to issues and performance with frequent driver updates, in my mind they'll have taken care of one of the major factors determining their success, and gone a long way towards making their advantages outweigh their obstacles in the market.
toyota - Friday, May 5, 2006 - link
I don't get it. The Ghost Recon videos WITHOUT PhysX look much more natural. The videos I have seen with it look pretty stupid: everything that blows up or gets shot has the same little black pieces flying around. I have shot up quite a few things in my life and seen plenty of videos of real explosions, and that's not what it looks like.
DeathBooger - Friday, May 5, 2006 - link
The PC version of the new Ghost Recon game was supposed to be released alongside the Xbox 360 version but was delayed at the last minute for a couple of months. My guess is that the PhysX implementation was an afterthought while developing the game, and the delay came from the developers trying to figure out what to do with it.
Think *you're* part of a niche market? I gotta tell you, as a scientist, this whole topic of putting 'physics' into games makes for an intensely amusing read. Of course I understand what's meant here, but when I first look at people's text in these articles/discussions, I'm always taken aback: "Wait? We need to *add* physics to stuff? Man, THAT's why my experiments have been failing!"
Anyway...
I wonder if the types of computations employed by our controversial little PhysX accelerator could be harvested *outside* of the gaming environment. As someone who both loves to game and would love to telecommute to my lab, I'd ideally like to be able to handle both tasks using one machine (I'm talking about in-house molecular modeling, crystallographic analysis, etc.). Right now I have to rely on a more appropriate 'gaming' GPU at home, but hustle in to work to use an essentially identical computer which has been outfitted with a Quadro graphics card to do my crazy experiments. I guess I'm curious whether it's plausible to make, say, a 7900GT + PhysX perform comparable calculations to a Quadro/FireGL-style workstation graphics setup. 'Cause seriously, trying to play BF2 on your $1500 Quadro card is seriously disappointing. But then, so is trying to perform realtime molecular electron density rendering on your $320 7900GT.
SO - anybody got any ideas? Some intimate knowledge of the difference between these types of calculations? Or some intimate knowledge of where I can get free pizza and beer? Ah, Grad School.
Walter Williams - Friday, May 5, 2006 - link
It will be great for military use as well as the automobile industry.
escapedturkey - Friday, May 5, 2006 - link
Why don't developers use the second core of many dual core systems to do a lot of physics calculations? Is there a drawback to this?
DrZoidberg - Sunday, May 7, 2006 - link
Yes, dual core CPUs aren't being properly utilised by games. Only a handful of games like COD2, Quake 4 etc. have big improvements with the dual core CPU patches. I would rather game companies spend time trying to utilise dual core so the 2nd core gets to do a lot of the physics work, rather than sitting mostly idle. Plus the cost of the AGEIA card is too high. I'm already having trouble justifying buying a 7900GT or X1900 when I have a decent graphics card; I can't upgrade graphics every year. $300 for a physics card that only a handful of games support is too much. And the AT benchies don't show a lot of improvement.
Walter Williams - Friday, May 5, 2006 - link
We are starting to see multithreaded games that basically do this. However, it is very difficult and time consuming to make a game multi-threaded, hence why not many games are built this way.
Hypernova - Friday, May 5, 2006 - link
The PhysX API claims to be multi-core compatible. What will probably happen is that the API and engine will load balance the calculations between any available resources, which is either the PPU, the CPU, or better yet both.
JarredWalton - Friday, May 5, 2006 - link
Isn't it difficult to reprogram a game to make use of hardware accelerated physics as well? If the GRAW tests perform poorly because the support is "tacked on", wouldn't that suggest that doing PhysX properly is at least somewhat difficult? Given that PhysX has a software and hardware solution, I really want to be able to flip a switch and watch the performance of the same calculations running off the CPU. Also, if their PhysX API can be programmed to take advantage of multiple threads on the PhysX card (we don't have low level details, so who knows?), it ought to be able to multithread calculations on SMP systems as well. I'd like to see the CellFactor people add the option of *trying* to run everything without the PhysX card. Give us an apples-to-apples comparison. Until we can see that sort of comparison, we're basically in the dark, and things can be hidden quite easily in the dark....
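A bare-bones illustration of the SMP idea: if the objects can be split into independent groups ('islands'), each core can integrate its own group with no locking. The hard part developers complain about is exactly what this sketch skips -- proving the groups really are independent. std::thread is used only for brevity; this is not meant to depict any particular engine's threading model.

// Two islands, two worker threads, no shared data -- illustration only.

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos, vel; };

void stepIsland(std::vector<Body>& island, float dt) {
    for (Body& b : island) {
        b.vel += -9.8f * dt;          // gravity
        b.pos += b.vel * dt;          // integrate position
    }
}

int main() {
    Body startA{10.0f, 0.0f}, startB{20.0f, 0.0f};
    std::vector<Body> islandA(50000, startA);
    std::vector<Body> islandB(50000, startB);

    const float dt = 1.0f / 60.0f;
    // One worker per island; because nothing is shared, no locking is needed.
    std::thread t1(stepIsland, std::ref(islandA), dt);
    std::thread t2(stepIsland, std::ref(islandB), dt);
    t1.join();
    t2.join();

    std::printf("islandA[0].pos = %.4f, islandB[0].pos = %.4f\n",
                islandA[0].pos, islandB[0].pos);
}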
Walter Williams - Friday, May 5, 2006 - link
PPU support is simply achieved by using the Novodex physics engine or any game engine that uses the Novodex engine (e.g. Unreal Engine 3.0). The developers of GRAW decided to take a non-basic approach to adding PPU support, adding additional graphical effects for users of the PPU - this is similar to how Far Cry 64 advertises better graphics because it is 64-bit, as a marketing gimmick. GRAW seems to have issues in general and is not a very reliable test.

At QuakeCon '05, I had the opportunity to listen to the CEO of Ageia speak and then meet with Ageia representatives. They had a test system that was an Athlon64 X2, a top of the line video card, and a PPU. The demo that I was able to play looked like an early version of the Unreal Engine 3.0 (maybe Huxley?) and during the demo they could turn PPU support on and off. Every time we switched between the two, we would take notice of the CPU usage meter and of the FPS, and there was a huge difference.
It will be really interesting to see what happens when Microsoft releases their physics API (think DirectX, but for physics) - this should make everyone's lives better.
johnsonx - Friday, May 5, 2006 - link
Having downloaded and viewed the videos, my reaction is "so what?". I guess the PhysX sequence has a bit more crap flying around, but it's also quite a lot slower (slower, probably, than just letting the CPU process the extra crap). It seems obvious that this game doesn't make proper use of the PhysX card, as I can't otherwise imagine that Ageia would have wasted so much time and money on it.
DerekWilson - Friday, May 5, 2006 - link
We'd really love to test that, but it is quite impossible to verify right now.
Walter Williams - Friday, May 5, 2006 - link
Have you seen the CellFactor videos yet?
Spoonbender - Friday, May 5, 2006 - link
Well, looking at the videos, I know what I prefer. The framerate hit with PhysX is definitely too noticeable. I'm curious to see how this turns out in other games, and with driver revisions and newer versions of the hardware (and probably PCIe would be a good idea as well).

In any case, I read somewhere they weren't expecting these cards to evolve as fast as GPUs; rather, they'd have a life cycle about the same as a soundcard. That seemed a bit encouraging to me. Having to fork out $300+ for yet another card every year or two didn't sound too attractive. But if I just have to buy it once, I guess it might catch on.
tk109 - Friday, May 5, 2006 - link
With quad cores around the corner this really isn't looking too promising.
Walter Williams - Friday, May 5, 2006 - link
Too bad not even quad cores will be able to outperform the PPU when it comes to physics calculations. You all need to wait for another game that uses the PPU to be reviewed before jumping to any conclusions.
The developers of GRAW did a very poor job compared to the developers of CellFactor. This will come to light soon.
saratoga - Friday, May 5, 2006 - link
Haha.
DerekWilson - Friday, May 5, 2006 - link
just because something is true about the hardware doesn't mean it will ever come to fruition in the software. it isn't jumping to a conclusion to say that the PPU is *capable* of outperforming a quad core CPU when it comes to physics calculations -- given the architecture, that is a fact, not an opinion. had the first quote said something about games that use physics performing better on one rather than the other, that would have been jumping to conclusions.
the key here is the developers and how the problem of video game physics maps to hardware that is good at doing physics calculations. there are a lot of factors.
saratoga - Saturday, May 6, 2006 - link
It's clearly an opinion. For it to be a fact, it would have to be verifiable. However, no one has made a quad core x86 processor, and no game engine has been written to use one.
The poster simply stated his opinion and then blasted other people for having their own opinions, all without realizing how stupid it sounded, which is why it was such a funny post.
Walter Williams - Saturday, May 6, 2006 - link
I did not blast anybody... It is a simple fact that a dedicated processor for X will always outperform a general purpose processor doing X, from a hardware perspective.
Whether or not the software yields the same results is another question. Assuming that the PCI bus is not holding back performance of the PPU, it is incredibly unlikely that quad core CPUs will be able to outperform the PPU.
saratoga - Saturday, May 6, 2006 - link
Clearly false. General purpose processors sometimes beat specialized units. It depends on the resources available to each device and the specifics of the problem. Specialization is a trade-off. If your calculation has some very specific and predictable quality, you might design a custom processor that exploits some property of your problem effectively enough to overcome the billions Intel and AMD poured into developing a general purpose core. But you may also end up with an expensive processor that's left behind by off-the-shelf components :)
Furthermore, the statement is hopelessly general. What if X is running Linux? Or any other application that x86 CPUs are already specialized for? Can you really conceive of a more specialized processor for that task that didn't resemble a general purpose CPU? Doubtful.
You're backpedaling. You said:
"Too bad not even quadcores will be able to outperfrom the PPU when it comes to physics calculations."
Now you're saying they might be able to do it. So much for jumping to conclusions?
JarredWalton - Friday, May 5, 2006 - link
People keep mentioning CellFactor. Well and good that it uses more physics calculations as well as the PhysX card. Unfortunately, right now it requires the PhysX card and it's looking like 18 MONTHS (!) before the game ships - if it ever gets done. We might as well discuss how much better Havok FX is going to be in The Elder Scrolls V. :p

For the first generation, we're far more likely to see a lot of the "tacked on" approach as companies add rudimentary support to existing designs. We also don't have a way to even compare CellFactor with and without PhysX. Are they hiding something? I mean, 15% faster under the AGEIA test demo using a high-end CPU isn't looking like much. If they allow CellFactor to run on software (CPU) PhysX calculations, get that to support SMP systems for the calculations, and we get 2 FPS in CellFactor, that's great - it shows the PhysX card does something. If they allow all that and the dual core chips end up coming very close to the same performance, we've got a problem.
Basically, right now we're missing real world (i.e. gaming) apples-to-apples comparisons. It's like comparing X800 to 6800 cards under games that only supported SM3.0 or SM1.1 - better shaders or faster performance, but X800 could have come *much* closer with proper SM2.0 support.
NastyPope - Friday, May 5, 2006 - link
AMD & Intel could license the PhysX technology and include a dedicated PhysX (or generic multi-API) core on their processors and market them as game processors. Although some science and technology applications could make use of it as well. Being on-die would reduce latency and provide a huge amount of bandwidth between cores. Accessing system memory could slow things down but still be much faster than data transfers across a PCI bus.Woodchuck2000 - Friday, May 5, 2006 - link
The reason that framerates drop with the PhysX card installed is simply that the graphics card is given more complex effects to render.

At some point in the future, games will be coded with a physics API in mind. Interactions between the player and the game environment will be through this API, regardless of whether there is dedicated hardware available.
It's a truth universally acknowledged that graphics are better left to the graphics card - I don't hear anyone suggesting that the second core in a duallie system should perform all the graphics calculations. I think that in time, this will be true of physics too.
Once the first generation of games built from the ground up with a physics API in mind come out, this will sell like hot cakes.
Calin - Friday, May 5, 2006 - link
The reason frame rates drop is that with the physics engine, the video card has more to render - in the grenade explosion images, the "with physics" image has tens of dumpster bits flying, while in the "non physics" one there are hardly a couple. If the scenes had the same complexity, I wonder how much faster the Ageia card would be.
Magnadoodle - Friday, May 5, 2006 - link
Actually, if you look at this statement by Havok (http://www.firingsquad.com/news/newsarticle.asp?se... - already linked), they also arrive at the conclusion that it is not a GPU bottleneck.
Furthermore, the only thing the PPU seems to do in GRAW is render a couple of particles, while not improving or accelerating *at all* the processing of physics. This particle effect could have been processed very well by a GPU.
I guess Anandtech didn't notice that the physics were exactly the same, which makes the claim of 'better physics' somewhat dubious.
DerekWilson - Friday, May 5, 2006 - link
The Havok guys did miss a few things pointed out earlier in the comments. Some destructible objects do break off into real persistent objects under PhysX -- like the dumpster lid and car doors. Also, the debris in the explosions is physically simulated rather than scripted. While I certainly agree that the end effect in these cases has no impact on "goodness", it is actually doing something.

I'll certainly agree that "better physics" is kind of strange to think about. But it is really similar to how older games used to do 3D with canned animations. More realtime simulation opened up opportunities to do so many amazing things that just couldn't be done otherwise. This should extend well to physics.
Also, regardless of how (or how efficiently) the developers did it, there's no denying that the game feels better with the hardware accelerated aspects. Whether they could have done the same thing on the CPU or GPU, they didn't.
I'd still love to find a way to test the performance of this thing running the hardware physics on the CPU.
JumpyBL - Saturday, May 6, 2006 - link
I see these same effects without PhysX.
DerekWilson - Saturday, May 6, 2006 - link
When I play it without the PhysX hardware, doors just seem to pop open -- not fly off ... though I haven't exhaustively blown up every object -- there could be some cases where these types of things happen in software as well.
JumpyBL - Saturday, May 6, 2006 - link
Shoot up the tires, car doors, etc. enough and they come off. Same with the garbage can lid: throw a nade and it'll blow right off the container, all without a PPU.
Fenixgoon - Friday, May 5, 2006 - link
how is the game going to "feel better" with a PPU when it slams your framerate down from buttery smooth to choppy? sorry, i'll take the FPS over any degree of better simulated physics, ESPECIALLY on a budget PC. i mean, look at the numbers! the Opteron minimum fps at 8x6 was 46, and with the PPU hardware it dropped to 12 - nearly a 75% decrease!!
DerekWilson - Friday, May 5, 2006 - link
Note that the min framerate is much lower than the average -- with the majority of frames rolling along at average framerates, one or two frames that drop to an instantaneous 12-17 fps aren't going to make the game feel choppy. The benchmark was fairly short, so even outliers have an impact on the average -- further going to show that these minimum fps numbers are not anything to worry about. At the same time, they aren't desirable either.

Also, I would certainly not recommend this part to anyone but the hardcore enthusiast right now. People with slow graphics cards and processors would benefit much more by upgrading one or the other. In these early stages with little software support, the PPU will really only look attractive to people who already have very powerful systems and want something else to expand the capabilities.
if you watch the videos, there's no noticeable choppiness in the motion of the explosion. and I can say from gameplay experience that there's no noticeable mouse lag when things are exploding either. thus, with the added visual effects, it feels better. certainly a subjective analysis, but I hope that explains how I could get that impression.
mongo lloyd - Friday, May 5, 2006 - link
You must be joking. Watching the videos, the PhysX one is WAY choppier compared to the software one. The PhysX video even halts for a split second, in a way that's more than noticeable; it's downright terrible.

And the graphics/effect of the extra debris? Negligible. I've seen more videos from this game (for example: http://www.pcper.com/article.php?aid=245) and the extra stuff with PhysX in this game is just not impressive or a big deal, and in some cases it's actually worse (like the URINATING walls and ground when shooting them). It's not realistic, it's not fun, not particularly cool, and it's slow.
Clauzii - Friday, May 5, 2006 - link
I also think that's why we see such a massive FPS drop. We are trying to render, say, 100 times as many objects now?
DerekWilson - Friday, May 5, 2006 - link
I was hoping we made this clear in the article ... While there is certainly more for the GPU to do, under a CPU limited configuration (800x600 with lower quality settings) we see a very low minimum framerate and a much lower average when compared to software physics.
The drop in performance is much less significant when we look at a GPU limited configuration -- if all those objects were bottlenecking on the graphics card, then giving them high quality textures and rendering them at a much higher resolution would show a bigger impact on performance.
Tie that in with the fact that both the CPU and GPU limited configurations turn out the same minimum framerate and we really can conclude that the bottleneck is somewhere other than the GPU.
It becomes more difficult to determine whether the bottleneck is at the CPU (game/driver/API overhead) or the PPU (PCI bus/object generation/actual physics).
Clauzii - Monday, May 8, 2006 - link
You're right... have to wait and see what happens with drivers/PCIe then...
Hypernova - Friday, May 5, 2006 - link
They should have waited for CellFactor's release as a launch title for Ageia. First impressions are important, and the glued-on approach of GRAW is nothing but negative publicity for Ageia. Not everyone knows that PhysX was nothing more than a patched add-on to Havok in GRAW, and people will think that this is how the cards will turn out.

If you can't do it right then don't do it at all. GRAW's implementation was a complete failure, even for a first generation product.
segagenesis - Friday, May 5, 2006 - link
As much as I would want to give them the benefit of the doubt... with what you say, it's that simple really. Most gamers (at least the ones I know?) want worry-free performance, and if spending extra money on hardware results in worse performance then this product will be short lived.

I watched the non-game PhysX demos and they looked really damn cool, but they really should have worked on making a PCIe version from the start... boards with PCI slots are already becoming dated, and those that have x16 slots for graphics have at least the small x1 slot!
nullpointerus - Friday, May 5, 2006 - link
I wouldn't say it was a complete failure. IMO the benchmarks were quite disappointing, and this was compounded by the lack of effects configurability in the game, but the videos were quite compelling. If you look at the dumpster (?), you can see that not only does the lid blow off but it bends and crumples. If we see more than just canned animations in better games (CellFactor?), then this $300 should be worth its cost to high-end gamers. I'd say Ageia is off to a rough start, not an implosion.
DerekWilson - Friday, May 5, 2006 - link
There are quite a few other things PhysX does in the game as well -- though they all are kinda "tacked on" too. You noticed the lid of the dumpster, which only pops open under software. In hardware it is a separate object that can go flying around depending on the explosion. The same is true of car doors and other similar parts -- under software they'll just pop open, but with hardware they go flying if hit right.

It is also interesting to add that the explosions and such are scripted under software, but much of it becomes physically simulated under hardware. While this fact is kinda interesting, it really doesn't matter to the gamer in this title. But for other games that make more extensive use of the hardware, it could be quite useful.
nullpointerus - Friday, May 5, 2006 - link
That should be very cool. I was playing F.E.A.R. recently and decided to rake my SMG over a bunch of glass panes in an "office" level. Initially, it looked and sounded good because I have the settings cranked up reasonably high, but then I noticed all the glass panes were breaking in exactly the same way. Rather disappointing...
DigitalFreak - Friday, May 5, 2006 - link
... may use the Gamebryo engine, but it uses Havok for physics.
Bull Dog - Friday, May 5, 2006 - link
You are correct and I noticed that the first time I read the article.
DerekWilson - Friday, May 5, 2006 - link
Sorry if what I said wasn't clear enough -- I'll add that Oblivion uses Havok.
peternelson - Friday, May 5, 2006 - link
In your introduction you might have mentioned there are TWO partners for bringing this technology to market: Asus and BFG Tech.
Both have shipping boards. I wonder if they perform identically or if there is some difference.
I agree that PCI could be a bottleneck, but I'm more concerned that putting a lot of traffic on the pci bus will impair my OTHER pci devices.
PCIE x1 would have been much more sensible. I hope that they won't need a respin to add pcie functionality but fear this may be the case.
I agree CellFactor looks like it makes heavier use of physics, so it may make the difference with/without the PPU more noticeable/measurable.
I also wonder how much the memory size on the PhysX board matters. Maybe a second gen board could double that to 256. I'm also interested in whether the PPU could be given some abstraction layer and programmed to do non-physics useful calculations, as is being done on graphics cards now. This might speed its adoption.
I agree with the post that in volume, this kind of chip could find its way onto 3d graphics cards for gaming. As BFG make GPU cards, they might move that direction.
iNsuRRecTiON - Saturday, May 6, 2006 - link
Hey,

the ASUS PhysX card already has 256 MB of RAM instead of 128 MB, compared to the BFG Tech card.
best regards,
iNsuRRecTiON
fishbits - Friday, May 5, 2006 - link
I want the physics card to be equivalent to a sound card in terms of standardization and how often I feel compelled to upgrade it. In other words, it would be upgraded far less often than graphics cards are. Putting the physics hardware on a graphics card means you would throw away (or sell at a loss) perfectly good physics capability just to get a faster GPU, or get a second card to go to SLI/Crossfire. This is a bad idea for all the same reasons you'd say putting sound card functionality on a graphics card is a bad idea.
Calin - Friday, May 5, 2006 - link
Yes, you could do all kinds of nice calculations on the physics board. However, moving geometry data from the video card to the physics board to be calculated and then moving it back to the video card would be shooting yourself in both feet.

I think this could run well as an accelerator for rendering images or for 3D applications... how soon until 3D Studio, Photoshop and so on take advantage?
tonjohn - Friday, May 5, 2006 - link
The pre-production cards had both PCI and PCIe support at the same time. You simply flipped the card depending on which interface you wanted to use. So I believe that the PPU offers native PCIe support and that BFG and ASUS could produce PCIe boards today if Ageia would give them permission to.
Bad idea. Putting the PPU onboard with a GPU means higher costs all around (longer PCBs, possibly more layers, more RAM). Also, the two chips will be fighting for bandwidth, which is never a good thing.
Higher costs and lower performance = a bad idea.
FYI: I have a BFG PhysX card.
saratoga - Friday, May 5, 2006 - link
Actually, putting this on the GPU core would be much cheaper. You'd save by getting rid of all the duplicated hardware: DRAMs, memory controller, power circuitry, PCI bridge, cooling, PCB, etc.

Not to mention you'd likely gain a lot of performance by having a PCIe x16 slot and an on-die link to the GPU.
Calin - Monday, May 8, 2006 - link
I wonder how much of the 2TB/s internal bandwidth will be used on the Ageia card... if enough of it, then the video card will have very little bandwidth remaining for its operations (graphic rendering). However, if the cooling really needs that heat sink/fan combo, and the card really needs that power connector, you won't be able to put one on the highest end video cards (for power and heat reasons).
kilkennycat - Friday, May 5, 2006 - link
"I have a BFG PhysX card"Use it as a door-stop ?
Pray tell me where you plug one of these if you have the following:-
Dual 7900GTX512 (or dual 1900XTX)
and
Creative X-Fi
already in your system.
Walter Williams - Friday, May 5, 2006 - link
Actually, I use it to play CellFactor. You're missing out.
SLi and CrossFire are the biggest waste of money unless you are doing intense rendering work.
I hope people with that setup enjoy their little fps improvement per dollar while I'm playing CellFactor, which requires the PPU to run.
kilkennycat - Friday, May 5, 2006 - link
CellFactor MP tech demo... CellFactor to be released in Q4 2007... maybe... Your PhysX is going to be a little old by the time the full game is released... We should be up to quad-core CPUs, with lots of cycles available for physics calculations, by that time.
I have recently been playing Oblivion a lot, like several million others. The Havok software physics are just great --- and you NEED the highest-end graphics for optimum visual experience in that game --- see the current Anandtech article. Sorry, I care little about (er) "better particle effects" or "more realistic explosions", even when I play Far Cry. In fact, from my experiences with BF2 and BF1942 I find them more than adequately immersive with their great scenery graphics and their CURRENT physics effects -- even the old and noble BF1942.
On single-player games, I would far prefer seeing additional hardware, or compute-cycles, being directed at advanced-AI than physics. What point fancy physics-effects if the AI enemy has about as much built-in intelligence as a lump of Swiss cheese? Sure does not help the game's immersive experience at all. And tightly-scripted AI just does not work in open-area scenarios (c.f: Unreal 2 and the dumb enemies easily sneaked from behind -- somebody forgot to script that eventuality amongst many others that can occur in an open play-area). The successful tightly-scripted single-play shooters like Doom3, HL2, FEAR etc all have overt or disguised "corridors". So, the developers of open-area games like Far Cry or Oblivion chose an algorithmic *intelligent-agent AI* approach, with a simple overlay of scripting to set some broad behavioral and/or location boundaries. A distinct move in the right direction but there are some problems with the AI implementation in both games. More sophisticated AI algorithms will require more compute-power, which, if performed on the CPU, will need to be traded off with cycles available for graphics. Dual-core will help, but a general-purpose DSP might help even more... they are not expensive and easily integrated into a motherboard.
Back to the immediate subject of the Ageia PPU and physics effects:-
I am far more intrigued by Havok's exercises with Havok FX harnessing both dual-core CPU power and GPU power in the service of physics emulation. Would be great to have action games with a physics-adjustable slider so that one can trade off graphics with physics effects in a seamless manner, just as one can trade-off advanced-graphics elements in games today.... which is exactly where Havok is heading. No need to support marginal added hardware like the PhysX. Now, if the PhysX engine was an option on every high-end motherboard, for say not more than $50 extra, or as an optional motherboard plug-in at say $75, (like the 8087 of yore) and did not take up any additional precious peripheral slots, then I would rate its chances of becoming main-stream to be pretty high. Seems as if Ageia should license their hardware DESIGN as soon as possible to nVidia or ATi at (say) not more than $15 a copy and have them incorporate the design into their motherboard chip-sets.
The current Ageia card has 3 strikes against it for cost, hardware interface (PCI) and software-support reasons. The PhysX PPU certainly has NO hope at all as a peripheral device as long as it stays in PCI form. It must migrate to PCIe asap. Remember that an x1 or x4 PCIe card will happily work in a PCIe x16 slot, and there are still several million SLI and Crossfire motherboards with empty 2nd video slots. Plus, even on a dual-SLI system with dual-slot-width video cards and an audio card present, it is more likely to find a PCIe x1 or x4 slot vacant that does not compromise the video-card ventilation than to find a PCI slot that is not either covered up by the dual-width video cards or completely blocking airflow to one or other of the video cards.
So if a PCIe version of the PhysX ever becomes available... you will be able to sell your PCI version... at about the price of a doorstop. Few will want the PCI version if a used PCIe version is also available.
Hard on the wallet being an early adopter at times.....
tonjohn - Friday, May 5, 2006 - link
The developers did a poor job with how they implemented PPU support in GRAW.

CellFactor is a MUCH better test of what the PhysX card is capable of. The physics in CellFactor are MUCH more intense. When blowing up a load of crap, my fps drop by 2 fps at most, and that is mainly because my 9800 Pro is struggling to render the actual effects of a grenade explosion.
DerekWilson - Friday, May 5, 2006 - link
We will be taking a look at CellFactor as soon as we can.
Egglick - Friday, May 5, 2006 - link
You've certainly got a point there. Seeing as how a physics card is more like a co-processor than anything else, the PCI bus is probably even more of a limitation than it would be with a graphics card, where most of the textures can simply be loaded into the framebuffer beforehand.
I still believe that the best option is to piggyback PPU's onto graphics cards. Not only does this allow them to share the MUCH higher bandwidth PCIe x16 slot, but it would also mean nearly instant communication between the physics chip and the GPU. The two chips could share the same framebuffer (RAM), as well as a cooling solution. This would lower costs significantly and increase performance.
DerekWilson - Friday, May 5, 2006 - link
combo boards, while not impossible to make, are going to be much more complex. There could also be power issues, as PhysX and today's GPUs require external power. It'd be cool to see, and it might speed up adoption, but I think it's unlikely to happen given the ROI to board makers.

The framebuffer couldn't really be shared between the two parts either.
Rolphus - Friday, May 5, 2006 - link
On page 2: "A graphics card, even with a 512-bit internal bus running at core speed, has less than 350 Mb/sec internal bandwidth." - er, I'm guessing that should read 350Gb/sec?JarredWalton - Friday, May 5, 2006 - link
Yes. Correcting....
Rolphus - Friday, May 5, 2006 - link
Thanks for the quick response - I've just finished the article. It's good stuff, with interesting analysis, and the commentary and general subtext of "nice but not essential" is extremely useful.

One random thing - is images.anandtech.com struggling, or is my browser just being a pain? I've been having trouble seeing a lot of the images in the article, needing various reloads to get them to show etc.
ATWindsor - Friday, May 5, 2006 - link
Anandtech images don't work properly if you disable referrer logging (pretty annoying); could that be the root of your problem? (AdBlock disabling it or something)
JarredWalton - Friday, May 5, 2006 - link
Seems to be doing fine from our end. If you're at a company with a firewall or proxy, that could do some screwy stuff. We've also had reports from users that have firewall/browser settings configured to only show images from the source website - meaning since the images aren't from www.anandtech.com, they get blocked.

As far as I know, both the images and the content are on the same physical server, but there are two different names. I could be wrong, as I don't have anything to do with the system administration. :)
Rolphus - Friday, May 5, 2006 - link
Weird, seems to be fine now I've disabled AdBlock in Firefox... that'll teach me. It's not like I block any of AnandTech's ads anyway, apart from the intellitxt stuff - that drives me NUTS.
JarredWalton - Friday, May 5, 2006 - link
Click the "About" link, then "IntelliTxt". You might be pleasantly surprised to know it can be turned off.Rolphus - Friday, May 5, 2006 - link
Jarred, THANK YOU!
*adds AnandTech to the list of non-blocked sites*
Rolphus - Friday, May 5, 2006 - link
I say "weird" because only about 1/3 of the images failed to load... if everything was missing, it would be fairly obvious what was going on ;)escapedturkey - Friday, May 5, 2006 - link
You should realize that the game you are benchmarking is using Havok physics even with PhysX:

http://www.firingsquad.com/news/newsarticle.asp?se...
That's why there is an issue. The game was very poorly coded. So when PhysX is enabled, it's using two physics layers: Havok and PhysX vs. one or the other.
JarredWalton - Friday, May 5, 2006 - link
Hence the conclusion. The added effects look nice, but performance currently suffers. Is it poor coding or poor hardware... or a combination of both? Being the first commercial game to support the AGEIA PPU, it's reasonable to assume that the programmers are in a bit of a learning stage. Hopefully, they just didn't know how to make optimal use of the hardware options. Right now, we can't say for sure, because there's no way to compare the "non-accelerated" physics with the "accelerated" physics. We've seen games that look less impressive and tax GPUs more in the past, so it may simply be a case of writing the code to make better use of the PPU. I *don't* think having CrossFire would improve performance much at present, at least in GRAW, because the minimum frame rates are clearly hitting a PPU/CPU bottleneck.
saratoga - Friday, May 5, 2006 - link
That's a good question. It's hard to imagine a PCI bus device keeping up with a dual core processor when one can only talk over a 133 MB/s bus with hundreds or even thousands of clock ticks worth of latency, while the other can talk in just a few clocks over a bus running at 10+ GB per second. I imagine all those FUs spend most of their time waiting on synchronization traffic over the PCI bus.
It will be interesting to see what a PCI-E version of this card can do.
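Some back-of-the-envelope numbers behind that concern: at roughly 133 MB/s, classic PCI gives the card only a couple of megabytes of budget per 60 fps frame, before latency or sharing the bus with anything else is even counted. The 64-byte-per-object figure below is just an assumption for illustration (position, orientation, velocity and so on), so the real ceiling would be lower still.

// Rough per-frame bandwidth budget for PCI vs. PCIe x1 -- illustration only.

#include <cstdio>

int main() {
    const double pciBandwidth   = 133e6;  // bytes/s, classic 32-bit/33MHz PCI
    const double pcieX1         = 250e6;  // bytes/s per direction, PCIe 1.x x1
    const double fps            = 60.0;
    const double bytesPerObject = 64.0;   // assumed size of one object update

    double pciPerFrame  = pciBandwidth / fps;
    double pciePerFrame = pcieX1 / fps;

    std::printf("PCI:     %.2f MB/frame -> ~%.0f object updates/frame\n",
                pciPerFrame / 1e6, pciPerFrame / bytesPerObject);
    std::printf("PCIe x1: %.2f MB/frame -> ~%.0f object updates/frame\n",
                pciePerFrame / 1e6, pciePerFrame / bytesPerObject);
}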
Orbs - Friday, May 5, 2006 - link
I'd be curious to hear how this performs with SLI and CrossFire setups. Also, do motherboards offer enough space for 2 GPUs and two PCI cards (sound and physics)? It looked like the Asus board meant for the Core Duo might.

If mobo manufacturers are keeping this in mind, it may make AM2 and Conroe boards even more interesting!
Orbs - Friday, May 5, 2006 - link
Derek, on the performance comparison page, you mention the framerate differences with an FX-60. I think you mean FX-57 here.JarredWalton - Friday, May 5, 2006 - link
FX-57. Fixed. That was my fault - been playing with FX-60 chips too much lately. ;)
evident - Friday, May 5, 2006 - link
I really hope the price of these things drops by A LOT, especially if games in the future might "require" a PhysX card. I have to pay $400 for a top of the line 7900GTX and then another $300 for one of these??? argh! might as well pick up console gaming now!
Mr Perfect - Friday, May 5, 2006 - link
Honestly, I'm hoping these cards just quietly die. I love the idea of game physics, but the pricing is murder. These things may be pocket change for some of the "enthusiasts", but as soon as you require another $300 card just to play a video game, the Joe Sixpacks are going to drop PC gaming like it's hot. Do you have any idea how much bitching you hear from most of them when they're told that their $100 video card isn't fast enough to play a $50 game? Or worse yet, that their "new" PC doesn't even have a graphics card, and they have to throw more money into it just to be eligible for gaming? The multiplayer servers could be awfully empty when a required physics card alone costs more than a new PS3. :\
bob661 - Friday, May 5, 2006 - link
Joe Sixpacks aren't gamers. They're email and Word users. People that game know what hardware is required.
nullpointerus - Friday, May 5, 2006 - link
That's not how it works. New types of hardware are initially luxury items, both in the sense that they are affordable only by a few and way overpriced. When the rich adopt these things, the middle class end up wanting them, and manufacturers find ways to bring the prices down by scaling down the hardware or using technological improvements. So in other words, pipe down, let those who can afford them buy them, and in a few years we may see $50-75 versions for ordinary gamers.
Mr Perfect - Saturday, May 6, 2006 - link
I wish I could find the article now, but a while back there was an interview with Ageia where it was said that prices would span a similar range to video cards. So yes, there will probably be low-end, midrange, and high-end cards.

What I'm concerned about is the people who are already fighting a budget to game. These are the high school kids with little to no income, the 40 year old with two kids and a mortgage, and the casual gamer who's probably just as interested in a $170 PS2. What happens when they have to buy not only an entry level $150 video card, but also a $150 physics card? I can only imagine that if gaming were currently limited to only those people with a $300 budget for a 7900GT or X1800 XL, we'd see PC gaming become a very elite selection for "enthusiasts" only.
Hopefully we can get some snazzy physics without increasing the cost of admission so much, either by taking advantage of the dual core CPUs that are even now worming their way into mainstream PCs, or some sort of new video card technology.
nullpointerus - Sunday, May 7, 2006 - link
Game developers have to eat, too. They won't produce games requiring extravagant hardware. Your fear is irrational. When you go into a doctor's office to get a shot, do you insist that the needle be sterilized right in front of your eyes before it comes anywhere near your skin? No. The doctor wants to eat, so he's not going to blow his education and license by reusing needles...
Mr Perfect - Monday, May 8, 2006 - link
Well obviously it won't be a problem if it's not required. If it becomes an unnecessary enhancement card, like an X-Fi, then all is well. All I've been saying is that if it DOES become a required card, there is the possibility of monetary problems for the bread-and-butter casual gamers who fill the servers.
Googer - Friday, May 5, 2006 - link
I for one will not be an early adopter of one of these. First generation hardware is always cool to look at, but it's almost always something you do not want to own. DX9 video cards are a great example: the ATI 9700 Pro was a great card if you played DX8 games, but by the time software rolled around to take advantage of DX9 hardware, the 9700 Pro just was not truly cut out to handle it. The 9700 Pro lacked a ton of features that second generation DX9 cards had. My point is you should wait for revision/version 2.0 of this card and you won't regret it. By then programs should be on store shelves that take full advantage of PhysX hardware.
Jedi2155 - Monday, May 8, 2006 - link
I think it handled the first gen DX9 games relatively well. Far Cry was a great example, as it played quite well on my 9700 Pro (which lasted me till Sept. '05 when I upgraded to a refurb X800 Pro). It was also able to run most games on max details (with crappy framerates, but it was able to do it!). I think the 9700 Pro offered a lot for its time and was able to play the 1st gen DX9 games well enough.
munky - Friday, May 5, 2006 - link
Sure, the 9x00 series could handle DX9, just not at maxed out settings. I played Far Cry on a 9800XT, and it ran smoothly at medium-high settings. But the PhysX card is just plain disappointing, since it causes such a performance hit in GRAW, even at CPU-limited resolutions. Either the developers did not code the physics properly, or the PhysX card is not all that it's hyped up to be. We'll need more games using the PPU to know for sure.
rqle - Friday, May 5, 2006 - link
I bought a 9700 Pro; I saw it as so far ahead of the Ti4600, with 4x the performance when AA was applied. It was the first card to play games with AA and AF, even if they weren't DirectX 9 games. BUT this Ageia thing seems a little pointless to me. I would actually rather have 2 ATI or 2 nVidia cards; at least that gives you the option of less physics or a better graphics experience, and it comes in handy for the 98% of games that are not Ageia compatible yet.
I hope this thing will be integrated into video cards, the mobo, or the CPU instead of being a separate card.