32 Comments
Axion22 - Wednesday, August 1, 2007 - link
Sorry PhysX, you're toast. Multi-core CPUs will have you beat, and by manufacturers who have much more influence in the industry. Even if it did catch on, AMD and Nvidia would just add support and bury you in that market segment.

Ageia would do better trying to get in on the console action. At least there they will have a customer base.
Zak - Wednesday, August 29, 2007 - link
Yeah, I never liked the idea from the beginning. Count me as one of those who'd rather spend an extra $200 on a faster CPU than on a dedicated physics card. What are the chances that many games will use PhysX in a meaningful way, and how long will PhysX be around? And if PhysX is able to run in software mode on one core of a multicore CPU, I'd rather go that way.

Z.
0roo0roo - Sunday, July 29, 2007 - link
the simple fact is gamers would rather buy a nicer cpu with more cores with that money. if those cores can still only deliver slightly less physics than the addon, people are willing to live with it. we aren't in a desperate rush to get physics. with the rate at which cpus keep progressing it won't matter, we'll get it regardless. so why worry. its not like sony or nintendo are quaking in their boots at the insane games that physx can create :P the compelling need is not apparent. digital worlds aren't detailed enough for it to matter, and people still don't expect things to work in games the same way as reality. physics is limited to blowing stuff up and stacking boxes.

and as i said, for online play it won't ever be set for physx cards. if it affects gameplay then you can't play together with other non-physx players, so to guarantee compatibility it will be limited to fx and it becomes nothing more than eye candy again.
Bensam123 - Saturday, July 28, 2007 - link
There are quite a few more games available that feature PhysX than just GRAW and GRAW2: http://ageia.com/physx/titles.html
Not in the list is Rise of Legends. I don't know if it's official, but when installed it installs PhysX and has quite robust physics in-game (ragdolls in an RTS, land deformation, unit movement, etc.).
What I really don't understand, and what this article didn't answer, is WHY game developers would pay for a license for an SDK when you can get a better, more user-friendly, better-supported, more robust, and, finally, free SDK. It just doesn't make sense to me.
Developers have nothing to lose from using PhysX, but have a lot to gain.
FYI for people that can't read the article: PhysX has a software mode it operates in. The software mode is natively made to run on more than one core. When it all comes down to it, even if you are an advocate for doing physics on a spare core, PhysX already does that.
commandar - Friday, July 27, 2007 - link
Wow, this article definitely isn't up to the quality level I generally expect from Anandtech. Typos everywhere and then gems like this:

"Being embarrassingly parallel in nature, physics simulations aren't just a good match for GPUs/PPUs with their sub-processors, but a logical fit for multi-core CPUs."
What you say? For one, all processors are not created equal. CPUs are awesome for general-purpose work, but a GPU will eat their lunch when it comes to vector math. GPUs are massively parallel vector processors, and physics math generally *is* vector math. While there are problems with doing physics processing, which others have already pointed out, suggesting that CPUs are better suited to the job because of parallelism is baffling.
Ryan Smith - Friday, July 27, 2007 - link
Reposted from earlier in the comments:

I think you're misinterpreting what I'm saying. GPUs are well suited to embarrassingly parallel applications; however, with the core war now on, you can put these tasks on a CPU which, while not as fast at FP as a GPU/PPU, is quickly catching up thanks to multiple CPU cores and how easy it is to put embarrassingly parallel tasks on such a CPU. GPUs are still better suited, but CPUs are becoming well enough suited that the GPU advantage is being chipped away.
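To make the multi-core argument concrete, here is a minimal sketch of how an embarrassingly parallel physics step (independent particle integration) splits across CPU threads. The data layout, step function, and chunking scheme are illustrative assumptions, not anything from the PhysX SDK:

```cpp
// Minimal sketch: splitting an embarrassingly parallel physics step
// (simple Euler integration of independent particles) across CPU cores
// with std::thread. Data layout and step function are illustrative only.
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

void integrateRange(std::vector<Particle>& p, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;       // gravity
        p[i].px += p[i].vx * dt;     // advance position
        p[i].py += p[i].vy * dt;
        p[i].pz += p[i].vz * dt;
    }
}

void integrateParallel(std::vector<Particle>& particles, float dt) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const size_t chunk = (particles.size() + cores - 1) / cores;
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const size_t begin = c * chunk;
        const size_t end = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrateRange, std::ref(particles), begin, end, dt);
    }
    for (auto& t : workers) t.join();  // chunks don't share data, so no locks needed
}
```

Each chunk touches only its own particles, which is what makes the task embarrassingly parallel; once bodies start interacting through contacts and joints, the chunks have to communicate and the scaling gets harder, which is where dedicated hardware was supposed to help.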
taterworks - Saturday, July 28, 2007 - link
It's still not parallel enough. A modern CPU has only four cores (the Cell/BE doesn't count since it's not a CPU for PCs), but effective physics processing requires much more parallelism. The Ageia card is better suited to physics processing than any CPU that we'll be able to buy in the next five years. In addition, Ageia identifies a few shortcomings of GPUs when applied to physics calculations. GPUs can't perform read-modify-write operations in their shader units -- they can only perform read operations. In addition, GPUs aren't optimized for applications where each shader unit must execute different code -- they're designed to execute the same code, but on different parts of the image. As a result, some shader units finish their calculations before other shader units and simply sit idle instead of processing the next batch of data. The problem here is that the parallelism advantages become hamstrung by inefficiency. In the end, physics computations are too substantially different from graphics computations for one optimized processing unit to be applied to a half-hearted form of the other.

What's to blame for Ageia's failure? I think there's a fundamental problem with the way gamers think about an immersive gaming experience. Gamers are too preoccupied with resolution, texture and model detail, lighting, and frame rates to notice that objects in games don't behave like real objects. The focus is on visual realism, not physical realism, but both are required for a true virtual reality experience. In addition, the PhysX hardware was too expensive from the start -- it had to be cheaper than GPUs in order for anyone to take a chance on it. A $99 PhysX card was desperately needed last year.
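To make the read-modify-write point concrete: a contact solver has to scatter accumulated impulses back into both bodies of every contact pair, which is natural on a CPU but awkward in a gather-only shader model. A simplified sketch, with the data structures invented purely for illustration:

```cpp
// Sketch of why a contact solver wants read-modify-write: each contact
// must *accumulate* an impulse into two bodies' velocities. Structures
// and math are simplified for illustration only.
#include <vector>

struct Body    { float vx, vy, vz; float invMass; };
struct Contact { int a, b; float nx, ny, nz; float impulse; };

void applyImpulses(std::vector<Body>& bodies, const std::vector<Contact>& contacts) {
    for (const Contact& c : contacts) {
        Body& A = bodies[c.a];
        Body& B = bodies[c.b];
        // Read the current velocity, modify it, write it back. Two different
        // contacts can touch the same body, so the writes must not clobber
        // each other -- exactly the scatter pattern shader units lacked.
        A.vx -= c.impulse * c.nx * A.invMass;
        A.vy -= c.impulse * c.ny * A.invMass;
        A.vz -= c.impulse * c.nz * A.invMass;
        B.vx += c.impulse * c.nx * B.invMass;
        B.vy += c.impulse * c.ny * B.invMass;
        B.vz += c.impulse * c.nz * B.invMass;
    }
}
```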
JarredWalton - Sunday, July 29, 2007 - link
Of course, all of what you're saying assumes that we actually need that much physics processing power. I remember reading about a flight simulator a while back where they described all of the complex calculations being done to make the flight model as realistic as possible. After the lengthy description of the surface dynamics calculations and whatever else was involved in making the planes behave realistically, the developer then made the comment that all of that used less than 5% of the CPU power. Most of the remaining CPU time was used for graphics.

Granted, that was Flight Unlimited and it was a while ago, but the situation is still pretty similar to what we have today. As complex as physics might be if you model it exactingly, it's really not necessary, and the graphics still demand the majority of the CPU power. AI and physics are the other things the CPU handles. People can come up with situations (i.e. Cell Factor) where hardware physics is necessary to maintain acceptable performance. The real question is whether those situations are really necessary in order to deliver a compelling game.
Right now, games continue to be predominantly single core - even the most physics oriented games (Half-Life 2?) don't use multiple cores. And physics calculations aren't really consuming a majority of even one core! Now, give physics two cores that only need to do physics (on a quad core system), and do you see any reason that any current game is going to need a PPU? Especially when the cost of the PPU card is about as much as the cost of a quad core CPU?
I don't, and I don't expect to before AGEIA is pretty much gone. Maybe Intel or AMD will buy their intellectual property and incorporate the tech into a future CPU. Short term, I just don't think they're relevant.
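A rough back-of-envelope calculation behind the "physics is cheap" argument above; every input here is an assumed, illustrative figure rather than a measurement:

```cpp
// Back-of-envelope: how much raw arithmetic does typical game physics need?
// All inputs are illustrative guesses, not measurements.
#include <cstdio>

int main() {
    const double bodies      = 1000.0;   // active rigid bodies in a scene (assumed)
    const double flopPerBody = 2000.0;   // integration + broadphase + a few contacts (assumed)
    const double stepsPerSec = 60.0;     // physics tick rate

    const double flops      = bodies * flopPerBody * stepsPerSec;  // ~1.2e8 FLOP/s
    const double coreGflops = 10.0;      // very rough single-core throughput, mid-2000s CPU

    std::printf("physics load: ~%.0f MFLOP/s (~%.1f%% of one core)\n",
                flops / 1e6, 100.0 * flops / (coreGflops * 1e9));
    return 0;
}
```

With those assumptions the load lands at a low single-digit percentage of one core, in the same ballpark as the flight-simulator anecdote; the numbers only blow up once you start simulating thousands of interacting debris pieces or fluids.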
0roo0roo - Friday, July 27, 2007 - link
wouldn't they have to create physx-only servers if the physics affect gameplay? they certainly aren't going to require physx to play online... i'm guessing it will only be limited to effects for the most part because of this.

Bladen - Sunday, July 29, 2007 - link
I'd imagine that hardware physics would not be much more taxing than non-hardware physics on servers, except for RAM.

I'd say that the server would make everyone's computer do the number crunching. The server would just say "player 1 fires a rocket from this position at this angle (and thus hits this wall like so)". Every player's individual PhysX card would do its own processing and calculate the same answer.
This is purely speculation though; I have no real knowledge of video game programming (or any kind of programming for that matter).
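What's being described is essentially deterministic lockstep: the server distributes only inputs/events, and every client re-runs the same simulation and should arrive at the same state. A minimal sketch of the idea, with the event format and step function made up for illustration; in practice it only works if the simulation is bit-for-bit deterministic across machines, which floating-point differences between CPUs, GPUs, and PPUs can break:

```cpp
// Sketch of deterministic lockstep: the server sends only the event,
// every client feeds it into an identical simulation step. Types and
// functions are illustrative, not from any real networking API.
#include <cstdint>
#include <vector>

struct Event { uint32_t tick; uint8_t playerId; float px, py, pz, dx, dy, dz; };
struct Body  { float px, py, pz, vx, vy, vz; };
struct WorldState { std::vector<Body> bodies; };

// Every client runs the exact same step; given the same prior state and
// the same events, each machine should compute the same result.
void simulateTick(WorldState& world, const std::vector<Event>& events, float dt) {
    for (const Event& e : events) {
        // e.g. spawn a rocket body at the given position/direction (simplified)
        world.bodies.push_back({e.px, e.py, e.pz, e.dx, e.dy, e.dz});
    }
    for (Body& b : world.bodies) {   // advance everything one fixed step
        b.px += b.vx * dt; b.py += b.vy * dt; b.pz += b.vz * dt;
    }
}

void clientUpdate(WorldState& world, const std::vector<Event>& fromServer, float dt) {
    // No physics results cross the wire, only the inputs; the heavy number
    // crunching happens locally on each machine (PPU or spare CPU cores).
    simulateTick(world, fromServer, dt);
}
```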
FluffyChicken - Thursday, July 26, 2007 - link
While it's not mass market like gaming, there is Microsoft Robotics Studio, which implements AGEIA PhysX hardware (& software?).

So they are trying ;-)
Microsoft Robotics Studio targets a wide audience in an attempt to accelerate robotics development and adoption. An important part of this effort is the simulation runtime. It was immediately obvious that PC and Console gaming has paved the way when it comes to affordable, widely usable, robotics simulation. Games rely on photo-realistic visualizations with advanced physics simulation running within real time constraints. This was a perfect starting point for our effort.
We designed the simulation runtime to be used in a variety of advanced scenarios with high demands for fidelity, visualization, and scaling. At the same time, a novice user with little to no coding experience can use simulation; developing interesting applications in a game-like environment. Our integration of the AGEIA PhysX Technologies enables us to leverage a very strong physics simulation product that is mature and constantly evolving towards features that will be invaluable to robotics. The rendering engine is based on Microsoft XNA Framework.
So expect there to be a large surge at Dell for the 15-year-olds to hook up the Lego.
DeathBooger - Thursday, July 26, 2007 - link
There is no need. Not to mention Epic hasn't said anything about it in over two years. If anything it would just be eye candy, since Unreal Tournament 3 relies on its online multiplayer. You can't have added interactive features that only a percentage of players will be able to utilize in a multiplayer game.

Some Unreal Engine 3 titles are replacing the built-in Ageia SDK in favor of Havok's SDK. Stranglehold and Blacksite are examples of this.
Bladen - Friday, July 27, 2007 - link
Physics cards go here >
Non-physics cards go there <
Schrag4 - Thursday, July 26, 2007 - link
My friends and I have had this 'chicken and egg' discussion on many occasions, specifically about why physics hardware is not taking off. As long as a game only uses the physics for eye candy, the feature won't affect gameplay at all and therefore will be able to be turned off by those who don't have the resources to play with it turned on (no PhysX card, no multiple cores, no SLI graphics, whatever). So who's gonna buy a 200-400 dollar card that's not needed?

In order for hardware like PhysX to take off, there MUST be a game where the physics is up front, interactive, what makes the game fun to play, and it MUST be required. Not only that, but it better be one hell of a game, one that people just can't do without. I mean, after all, since this is the 'egg' in the chicken-egg scenario, you're basically spending 400 bucks for the game that you want to play, since there are no other games that are even worth mentioning (again, if it's just eye candy, who cares).
If you don't believe me about the eye-candy comments (about how eye-candy has its place but is over-valued), then please explain to me why the Wii is outselling its direct competition? It's because the games are FUN (mostly because of the innovative interface), not because they look great (they don't). I mean, come on, who cares what a game looks like if it's tedious and frustrating, shoot, even just boring to play.
What we're longing for is a game where there are no more canned animations for everything. For instance, you don't press a fire button to swing a sword. You somehow define a sword stroke that's different every time you swing. Also, whether or not you hit your target should not be defined by your distance from your target. It should be defined by the strength of the joints that make up your character, along with the mass of the sword, along with the mass of whatever gets in the way of your swing, etc. We're actually working on such a game. It's early in the development, and we don't plan on having anything beyond what can be played at LAN parties, but it's a dream we all share and maybe, just maybe, we can eke out something interesting. FYI, we are using the PhysX SDK...
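For what it's worth, the physics-driven swing described above comes down to driving a joint with torque and letting mass and inertia decide the outcome instead of playing an animation. A toy, single-hinge version of that idea with made-up numbers (a real implementation in a PhysX-style engine would use the SDK's joints and motors rather than hand-rolled integration):

```cpp
// Toy version of a physics-driven sword swing: apply torque at a hinge and
// integrate, so the resulting motion depends on the sword's inertia rather
// than a canned animation. Numbers and model are illustrative only.
#include <cstdio>

int main() {
    const float swordMass   = 1.5f;   // kg (assumed)
    const float swordLength = 1.1f;   // m  (assumed)
    // moment of inertia of a rod swung about one end: I = m*L^2/3
    const float inertia = swordMass * swordLength * swordLength / 3.0f;

    float angle = 0.0f, angVel = 0.0f;   // radians, rad/s
    const float dt = 1.0f / 60.0f;

    for (int step = 0; step < 15; ++step) {
        const float muscleTorque = 40.0f;     // "joint strength" (assumed), N*m
        const float drag = -0.5f * angVel;    // crude damping from arm/air
        const float angAccel = (muscleTorque + drag) / inertia;

        angVel += angAccel * dt;              // integrate the swing
        angle  += angVel * dt;
    }
    std::printf("after 0.25s the blade has swept %.2f rad at %.2f rad/s\n", angle, angVel);
    return 0;
}
```

A heavier blade or a weaker joint changes the result without touching any animation data, which is the whole appeal of the approach.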
Myrandex - Friday, July 27, 2007 - link
UT3 should use PhysX for environments and not just features. Reading the article shows that PhysX can be done in software. That way, everyone can play the same game, and join the same servers, etc., but if they are running on an older system, PhysX will just eat their CPU's resources completely. If they upgrade to 64-core 256-bit CPUs, then it will run nicely, or if they pop in a little PCI card, it will run nicely.

Either way, it is definite that the game has to be revolutionary, good, and always have PhysX running for at least the environmental aspects (maybe leave it as an option for particle physics so they can get some performance back somehow for playing on their Compy 486).
AttitudeAdjuster - Thursday, July 26, 2007 - link
The issue of getting access to the results of any calculation performed on a GPU is a major one. On that subject you might be interested to look at the preprint of a scientific paper regarding using multiple GPUs to perform real physical (not game-related physics) calculations using the nVidia CUDA SDK. The preprint is by Schive et al. (astro-ph/0707.2991, http://arxiv.org/abs/0707.2991), at the arXiv.org physics preprint server.

Warder45 - Thursday, July 26, 2007 - link
I wonder what that new LucasArts game Fracture (I think) is using for the deformable terrain.

jackylman - Thursday, July 26, 2007 - link
Typo in the last paragraph of Page 3: "...if the PhysX hardware is going to take of or not..."
Sulphademus - Friday, July 27, 2007 - link
" We except Ageia will be hanging on for dear life until then."Also page 3. I except you mean expect.
Regs - Thursday, July 26, 2007 - link
I would think AMD would be pushing more physics by using a co-processor. Why doesn't Ageia team up with AMD to make one for games and sell an AMD CPU bundled with the co-processor for gamers? I think that would be a lesser risk than making a completely independent card for it.

Bladen - Thursday, July 26, 2007 - link
When I say first-order physics, I mean the most obvious type: fully destructible environments.

In UT3, you could have a fully destructible environment as an on/off option without making the game unbalanced in single player. The game is mindless killing; who cares if you blow a hole through a wall to kill your enemy?
I guess you could have fully destructible environments processed via hardware and software, but I'd assume that the software performance hit would be huge, maybe only playable on a quad core.
Bladen - Thursday, July 26, 2007 - link
Whether or not the game has fully destructible environments, I don't know.

Verdant - Thursday, July 26, 2007 - link
dedicated hardware for this is pointless. with GPU speeds and the number of cores on a CPU die both increasing, I see no point in focusing on another pipe.

Plus the article has a ton of typographical errors :(.
bigpow - Wednesday, July 25, 2007 - link
Maybe it's just me, but I hate multiple-page reviews.

Shark Tek - Wednesday, July 25, 2007 - link
Just click on the "Print this article" link and you will have the whole article in one page.

Visual - Thursday, July 26, 2007 - link
indeed that's what i do. almost.

i hate that printarticle shows up in a popup, and can't be opened in a tab easily with a middle-click either... same as the comments page btw. really hate it.
so i manually change the url - category/showdoc -> printarticle, it all stays in the same tab and is great. i'm planning on writing a ".user.js" (for opera/greasemonkey/trixie) for fixing the links some time
Egglick - Wednesday, July 25, 2007 - link
Other than what was mentioned in the article, I think another big problem is that the PCI bus doesn't have enough bandwidth (bi-directional or otherwise) for a card doing heavy real-time processing. For whatever reason, manufacturers still seem apprehensive about using PCIe x1, so it will be rough for standalone cards to perform at any decent level.

I've always felt the best application for physics processors would be to piggyback them on high-end videocards with lots of RAM. Not only would this solve the PCI bandwidth problem, but the physics processor would be able to share the GPU's fast memory, which is probably what constitutes the majority of the cost for standalone physics cards.
This setup would benefit both NVidia/ATI and Ageia. On one hand, Ageia gets massive market penetration by their chips being sold with the latest videocards, while NVidia/ATI get to tout having a huge new feature. They could also use their heavy influence to get game developers to start using the Ageia chip.
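Some rough numbers behind the bandwidth concern raised above; the object count and per-object payload are assumptions for illustration only:

```cpp
// Back-of-envelope: per-frame traffic for streaming rigid-body results over
// the bus vs. what plain PCI can move. Object count and payload size are
// assumptions for illustration.
#include <cstdio>

int main() {
    const double objects      = 20000.0;  // actively simulated objects (assumed)
    const double bytesPerObj  = 64.0;     // position + orientation + velocities (assumed)
    const double framesPerSec = 60.0;

    const double trafficMBs = objects * bytesPerObj * framesPerSec / 1e6;  // results only, one way
    const double pciMBs     = 133.0;      // shared 32-bit/33MHz PCI, theoretical peak
    const double pcieX1MBs  = 250.0;      // PCIe x1, per direction, theoretical peak

    std::printf("state readback: ~%.0f MB/s (PCI peak %.0f MB/s shared, PCIe x1 %.0f MB/s)\n",
                trafficMBs, pciMBs, pcieX1MBs);
    return 0;
}
```

And that is the readback direction alone; scene updates and commands still have to cross the same shared bus the other way, alongside every other PCI device.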
cfineman - Wednesday, July 25, 2007 - link
I thought one of the advantages of DX10 was that it would allow one to partition off some of the GPU subprocessors for physics work.

I was *very* surprised that the author implied that the GPUs were not well suited to embarrassingly parallel applications.... um.... what's more embarrassingly parallel than rendering?
Ryan Smith - Wednesday, July 25, 2007 - link
I think you're misinterpreting what I'm saying. GPUs are well suited to embarrassingly parallel applications; however, with the core war now on, you can put these tasks on a CPU which, while not as fast at FP as a GPU/PPU, is quickly catching up thanks to multiple CPU cores and how easy it is to put embarrassingly parallel tasks on such a CPU. GPUs are still better suited, but CPUs are becoming well enough suited that the GPU advantage is being chipped away.

As for DX10, there's nothing specifically in it for physics. Using SM4.0/geometry shaders you can do some second-order work, but first-order work looks like it will need to be done with CUDA/CTM, which isn't a part of DX10. You may also be thinking of the long-rumored DirectPhysics API, which is just that: a rumor.
yyrkoon - Thursday, July 26, 2007 - link
Actually, because of the limited bandwidth capabilities of any GPU interface, the CPU is far better suited. Sure, a 16x PCIe interface is limited to a huge 40Gbit/s of bandwidth (asynchronous), and as I said, this may *seem* huge, but I personally know many game devs who have maxed this limit easily when experimenting with game technologies. When, and if, the PCIe bus expands to 32x, and *if* graphics OEMs / motherboard OEMs implement it, then we'll see something that resembles current CPU-to-memory bandwidth capabilities (10GB/s). By then, however, who is to say how much bandwidth the CPU-to-memory link will be capable of. Granted, having said all that, this is why *we* load compressed textures into video memory and do the math on the GPU . . .

Anyhow, the whole time reading this article, I could not help but think that with current CPUs being at 4 cores, and Amdahl's law, the two *other* cores could be used for this purpose, and it makes total sense. I think it would behoove Ageia and Havok both to forget about physics hardware and start working on a licensable software solution.
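Since Amdahl's law gets invoked above, here is the worked number for the spare-cores argument; the fraction of frame time spent on parallelizable physics is an assumed figure:

```cpp
// Amdahl's law applied to the "use the spare cores for physics" argument.
// speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction
// of the frame and n is the number of cores thrown at it.
#include <cstdio>

int main() {
    const double p = 0.30;   // assume 30% of frame time is parallelizable physics work
    for (int n = 1; n <= 8; n *= 2) {
        const double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%d core(s): %.2fx overall frame speedup\n", n, speedup);
    }
    return 0;
}
```

If physics is only a modest slice of the frame, a couple of spare cores absorb it easily; if it ever became the dominant slice, the serial remainder would cap how far cores alone could take you.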
Sunrise089 - Wednesday, July 25, 2007 - link
Plus, since most titles are GPU-limited (and with more cores and very overclockable Intel chips, they will only become more so), it might be better to send the physics stuff to the idle CPU cores rather than the saturated GPU, regardless of what offers ideal performance.

KeithP - Wednesday, July 25, 2007 - link
We need a standard API so that a variety of solutions would be possible. All a manufacturer would then need to do is write drivers to interface with their hardware.

-KeithP
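A sketch of what such a hardware-agnostic layer might look like: one thin interface that games code against, with each vendor shipping its own backend, loosely analogous to how Direct3D sits above GPU drivers. This is entirely hypothetical, not an existing API:

```cpp
// Hypothetical sketch of a vendor-neutral physics API: games program against
// the interface; Ageia, a GPU vendor, or a pure-CPU fallback each ship their
// own implementation behind it. Not an existing API.
#include <memory>

struct Vec3 { float x, y, z; };

class PhysicsDevice {
public:
    virtual ~PhysicsDevice() = default;
    virtual int  createRigidBody(float mass, const Vec3& position) = 0;
    virtual void applyForce(int body, const Vec3& force) = 0;
    virtual void simulate(float dt) = 0;              // runs on PPU, GPU, or CPU cores
    virtual Vec3 getPosition(int body) const = 0;
};

// Each vendor's driver would expose a factory; the game just asks for "a device".
std::unique_ptr<PhysicsDevice> createBestAvailableDevice();   // hypothetical entry point

void gameFrame(PhysicsDevice& physics, float dt) {
    physics.applyForce(/*body*/ 0, Vec3{0.0f, -9.81f, 0.0f});
    physics.simulate(dt);   // same game code regardless of what hardware is underneath
}
```

The game code never knows or cares whether the step ran on an Ageia card, a GPU, or two spare CPU cores; that decision moves into the driver, which is the point of a standard API.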