No more mysteries: Apple's G5 versus x86, Mac OS X versus Linux
by Johan De Gelas on June 3, 2005 7:48 AM EST - Posted in Mac
Micro CPU benchmarks: isolating the FPU
"But you can't compare an Intel PC with an Apple. The software might not be optimised the right way." Indeed, it is clear that Final Cut Pro, which is owned by Apple, or Adobe Premiere, which is far better optimised for the Intel PC, are not good choices for comparing the G5 with the x86 world. So, before we start with application benchmarks, we ran a few micro benchmarks, compiled on all platforms with the same gcc 3.3.3 compiler.
The first one is Flops. Written by Al Aburto, it is a very floating-point intensive benchmark. Analysis of its instruction mix shows that it contains:
- 70% floating point instructions;
- only 4% branches; and
- 34% memory instructions.
Al Aburto, about Flops:
" Flops.c is a 'C' program which attempts to estimate your systems floating-point 'MFLOPS' rating for the FADD, FSUB, FMUL, and FDIV operations based on specific 'instruction mixes' (see table below). The program provides an estimate of PEAK MFLOPS performance by making maximal use of register variables with minimal interaction with main memory. The execution loops are all small so that they will fit in any cache."Flops shows the maximum double precision power that the core has, by making sure that the program fits in the L1-cache. Flops consists of 8 tests, and each test has a different, but well known instruction mix. The most frequently used instructions are FADD (addition), FSUB (subtraction) and FMUL (multiplication). We used gcc -O2 flops.c -o flops to compile flops on each platform.
MODULE | FADD | FSUB | FMUL | FDIV | PowerMac G5 2.5 GHz | PowerMac G5 2.7 GHz | Xeon Irwindale 3.6 GHz | Xeon Irwindale 3.6 GHz w/o SSE-2 | Xeon Gallatin 3.06 GHz | Opteron 250 2.4 GHz |
1 | 50% | 0% | 43% | 7% | 1026 | 1104 | 677 | 1103 | 1033 | 1404 |
2 | 43% | 29% | 14% | 14% | 618 | 665 | 328 | 528 | 442 | 843 |
3 | 35% | 12% | 53% | 0% | 2677 | 2890 | 532 | 1088 | 802 | 1955 |
4 | 47% | 0% | 53% | 0% | 486 | 522 | 557 | 777 | 988 | 1856 |
5 | 45% | 0% | 52% | 3% | 628 | 675 | 470 | 913 | 995 | 1831 |
6 | 45% | 0% | 55% | 0% | 851 | 915 | 552 | 904 | 1030 | 1922 |
7 | 25% | 25% | 25% | 25% | 264 | 284 | 358 | 315 | 289 | 562 |
8 | 43% | 0% | 57% | 0% | 860 | 925 | 1031 | 910 | 1062 | 1989 |
Average: | | | | | 926 | 998 | 563 | 817 | 830 | 1545 |
The results are quite interesting. First of all, the gcc compiler isn't very good at vectorising; by vectorising, we mean generating SIMD (SSE, Altivec) code. From the numbers, it seems that gcc was only able to use Altivec in one test, the third one. In that test, the G5 shows clear superiority over the Opteron and especially the Xeons.
The really funny thing is that the new Xeon Irwindale performed better when we disabled SSE-2 support and used the "-mfpmath=387" option. It seems that gcc makes a real mess of things when it tries to optimise for the SSE-2 instructions. One can, of course, use the Intel compiler, which produces code that is up to twice as fast, but the use of Intel's own compiler isn't widespread in the real world.
Also interesting is that the 3.06 GHz Xeon performs better than the 3.6 GHz Xeon Irwindale. Since Flops runs completely out of the L1 cache, the high 4-cycle latency of Irwindale's L1 cache hurts performance badly, while the Gallatin Xeon, whose core is similar to Northwood, benefits from its very fast 2-cycle L1 latency.
The conclusion is that the Opteron has, by far, the best FPU, especially when more complex instructions such as FDIV (division) are used. When the code comes close to the ideal 50% FADD/FSUB and 50% FMUL mix and is optimised for Altivec, the G5 can flex its muscles. Its normal FPU is rather mediocre, though.
Micro CPU benchmarks: isolating the Branch Predictor
To test branch prediction, we used the benchmark "Queens". Queens is a very well-known problem in which you have to place n chess queens on an n x n board such that no queen can attack another. The benchmark performs an exhaustive search for such placements, which results in very branch-intensive code; a small sketch of this kind of search follows the list below. Queens has about:
- 23% branches
- 45% memory instructions
- No FP operations
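As a rough illustration of why this workload hammers the branch predictor, here is a minimal backtracking N-Queens counter, assuming nothing beyond standard C. It is not the benchmark used above, just a sketch of the same technique: integer-only code whose inner loops are dominated by data-dependent, hard-to-predict branches.

```c
/* queens_sketch.c - a small exhaustive backtracking solver for the N-Queens
 * problem, illustrating the branch-heavy, FP-free code this benchmark runs.
 */
#include <stdio.h>
#include <stdlib.h>

static long solutions = 0;

/* Returns 1 if a queen on (row, col) conflicts with none of the queens
 * already placed on rows 0..row-1; cols[r] is the column used on row r. */
static int safe(const int *cols, int row, int col)
{
    int r;
    for (r = 0; r < row; r++) {
        /* Same column or same diagonal: a data-dependent branch per queen. */
        if (cols[r] == col || abs(cols[r] - col) == row - r)
            return 0;
    }
    return 1;
}

static void place(int *cols, int row, int n)
{
    int col;
    if (row == n) {          /* all n queens placed without conflicts */
        solutions++;
        return;
    }
    for (col = 0; col < n; col++) {
        if (safe(cols, row, col)) {
            cols[row] = col;
            place(cols, row + 1, n);   /* recurse into the next row */
        }
    }
}

int main(int argc, char **argv)
{
    int cols[24];
    int n = (argc > 1) ? atoi(argv[1]) : 12;   /* board size, default 12x12 */

    if (n < 1 || n > 24) {
        fprintf(stderr, "usage: %s [n between 1 and 24]\n", argv[0]);
        return 1;
    }
    place(cols, 0, n);
    printf("%d-queens: %ld solutions\n", n, solutions);
    return 0;
}
```

Compiled with the same gcc -O2 flags, a search like this spends most of its time in the safe() loop, where the outcome of every comparison depends on the data; that is exactly the situation in which a deep pipeline pays its full misprediction penalty.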
 | RUN TIME (sec) |
Powermac G5 2.5 GHz | 134.110 |
Xeon Irwindale 3.6 GHz | 125.285 |
Opteron 250 2.4 GHz | 103.159 |
At 2.7 GHz, the G5 was just as fast as the Xeon. It is pretty clear that, despite the enormous 31-stage pipeline, the excellent branch predictor of the Xeon is capable of keeping the damage to a minimum. The Opteron's branch predictor seems to be on the same level as the G5's: the G5's branch misprediction penalty is about 30% higher, and the Opteron finishes about 30% faster.
The G5 as a workstation processor
It is well known that the G5 is a decent workstation CPU. The G5 is probably the fastest CPU when it comes to Adobe After Effects and Final Cut Pro, as this kind of software was written with the PowerMac in mind. Unfortunately, we didn't have access to that kind of software.

First, we test with Povray, which is not optimised for any architecture and is single-threaded.
Povray | Seconds |
Dual Opteron 250 (2.4 GHz) | 804 |
Dual Xeon DP 3.6 GHz | 1169 |
Dual G5 2.5 GHz PowerMac | 1125 |
Dual G5 2.7 GHz PowerMac | 1049 |
Povray runs mostly out of the L2 and L1 caches and mirrors almost perfectly what we witnessed in our Flops benchmark. As long as there are few or no Altivec or SSE-2 optimisations present, the Opteron is by far the fastest CPU. The G5's FPU is still quite a bit better than that of the Xeon.
The next two tests are the only 32-bit ones, run under Windows XP on the x86 machines.
 | Lightwave 8.0 Raytrace | Lightwave 8.0 Tracer Radiosity |
Dual Opteron 250 (2.4 GHz) | 47 | 204 |
Dual Xeon DP 3.6 GHz | 47.3 | 180 |
Dual G5 2.5 GHz PowerMac | 46.5 | 254 |
The G5 is capable of competing in only one of the two tests. Lightwave's rendering engine has been meticulously optimised for SSE-2, and the "Netburst" architecture prevails here. We have no idea how much attention the software engineers gave to Altivec, but it doesn't seem to be much. This might, of course, be a result of Apple's small market share.
Cinema 4D Cinebench |
Dual Opteron 250 (2.4 GHz) | 630 |
Dual Xeon DP 3.6 GHz | 682 |
Dual G5 2.5 GHz PowerMac | 638 |
Dual G5 2.7 GHz PowerMac | 682 |
Maxon has invested time and effort in getting the Cinema 4D engine to run well on the G5, and it shows: the G5 competes with the best x86 CPUs.
116 Comments
Rosyna - Friday, June 3, 2005
Actually, for better or worse, the GCC Apple includes is being used for most Mac OS X software. OS X itself was compiled with it.

elvisizer - Friday, June 3, 2005
rosyna's right. i'm just not sure if there IS any way to do the kind of comparison you seem to've been shooting for (pure competition between the chips with as little else affecting the outcome as possible). you could use the 'special' compilers on each platform, but those aren't used for compiling most of the binaries you buy at compusa.
elvisizer - Friday, June 3, 2005
why didn't you run some tests with YD linux on the g5?!?!?!?!?!?!? you could've answered the questions you posed yourself!!!!! argh.
and you definitely should've included after effects. "we don't have access to that software" what the heck is THAT about?? you can get your hands on a dual 3.6 xeon machine, a dual 2.5 g5, and a dual 2.7 g5, and you can't buy a freaking piece of adobe software at retail?!?!?!?!?!
some seriously weird decisions being made here.
other than that, the article was ok. re-confirmed suspicions i've had for a while about OS X server handling large numbers of threads. My OS X servers ALWAYS tank hard with lots of open sessions, so i keep them around only for emergencies. They are so very easy to admin, tho, they're still attractive to me for small workgroup sizes. like last month, I had to support 8 people working on a daily magazine being published at e3. literally inside the convention center. os x server was perfect in that situation.
Rosyna - Friday, June 3, 2005
There appears to be either a typo or a horrible flaw in the test. It says you used GCC 3.3.3, but OS X comes with gcc version 3.3 20030304 (Apple Computer, Inc. build 1809). If you did use GCC 3.3.3, then you were giving the PPC a severe disadvantage, as the stock GCC has almost no optimizations for PPC while it has many for x86.
Eug - Friday, June 3, 2005
"But do you really think that Oracle would migrate to this if it wasn't on a par?"[Eug dons computer geek wannabe hat]
There are lots of reasons to migrate, and I'm sure absolute performance isn't always the primary concern. We won't know the real performance until we actually see tests on Oracle/Sybase.
My uneducated guess is that they won't be anywhere near as bad as the artificial server benches might suggest, but OTOH, I could easily see Linux on G5 significantly besting OS X on G5 for this type of stuff.
i.e. The most interesting test I'd like to see is Oracle on the G5, with both OS X and Linux, compared to Xeon and Opteron with Linux.
And yeah, it would be interesting to see what gcc 4 brings to the table, since 3.3 provides no autovectorization at all. It would also be interesting to see how xlc/xlf does, although that doesn't provide autovectorization either. Where are the autovectorizing IBM compilers that were supposed to come out???
melgross - Friday, June 3, 2005
As none of us has actual experience with this, none of us can say yes or no. But do you really think that Oracle would migrate to this if it wasn't on a par? After all, Ellison isn't on Apple's board anymore, so there's nothing to prove there.
I also remember that going back to Apple's G4 XServes, their performance was better than the x86 crowd, and the Sun servers as well. Those tests were on several sites. Been a while though.
JohanAnandtech - Friday, June 3, 2005
querymc: Yes, you are right. The --noaltivec flag and the comment that altivec was enabled by default in the gcc 3.3.3 compiler docs made me believe there is autovectorization (or at least "scalarisation"). As I wrote in the article, we used -O2 and then tried a bucketload of other options like --fast-math, --mtune=G5 and others I don't remember anymore, but it didn't make any big difference.

querymc - Friday, June 3, 2005
The SSE support would probably also be improved by using GCC 4 with autovectorization, I should note. There's a reason it does poorly in GCC 3. :)

querymc - Friday, June 3, 2005
Johan: I didn't see this the first time through, but you need to make a slight clarification to the floating point stuff. There is no autovectorization capability in GCC 3.3. None. There is limited support for SSE, but that is not quite the same, as SSE isn't SIMD to the extent that AltiVec is. If you want to use the AltiVec unit in otherwise unaltered benchmarks, you don't have a choice other than GCC 4 (and you need to pass a special flag to turn it on).

Also, what compiler flags did you pass on each platform? For example, did you use --fast-math?
JohanAnandtech - Friday, June 3, 2005
Melgross: Apple told me that most Xserves in Europe are sold as "do it all" machines: a little web server (Apache), a Sybase database, Samba and so on. They didn't have any client with heavy traffic on the web server, so nobody complains.

Sybase/Oracle seems to have done quite a bit of work to get good performance out of Mac OS X, so it would be interesting to see how they managed to solve those problems. But I am sceptical that Oracle/Sybase runs faster on Mac OS X than on Linux.