Swiftech Apogee GTZ
Tuesday, 05 May 2009 11:50
Before I start, I’d like to thank Swiftech for sending out a sample of their Apogee GTZ for testing. They’re the first company I’ve ever approached about reviewing a product and they were wonderful. Without manufacturer supplied samples, these tests would probably not happen and hopefully more manufacturers continue to support those of us testing products for the community.
The Swiftech Apogee GTZ is Swiftech’s recent offering in the high-end waterblock market. It brings a new base design featuring 225µm (0.009″) micro structures over the center of the block as well as a direct impingement design that directs and accelerates flow over the tiny pins. The external appearance is kept simple: a black acetal top with some branding features and an interchangeable metallic mounting plate. The other big feature of the GTZ is the new mounting system–it is designed to be the easiest and most consistent mounting system Swiftech has shipped to date. It features large thumbscrews and a backplate that provide the right amount of mounting pressure with every mount. This test will focus on the performance of the block as flow changes, in comparison with other blocks.
Thermal Testing Methodology/Specification
My waterblock tests are an evolution from previous tests from Martin, skinnee, et al. I’m utilizing mostly the same measurement hardware and technique, but am also going to be varying pumping power through the loop. What will result is a flow vs. temperature curve for each block to compare against each other. It’ll give a lot of info about a block: how responsive it is to an increase or decrease in flow, how that response compares to other blocks, how much impact it has on the flowrate of a system compared to other blocks, and ultimately, how each block compares to another block overall.
A total of 20 tests were completed: 4 tests per mount (one at each flowrate setting) across 5 mounts, with everything logged.
- The processor I’m using for this test is my B3 QX6700. I’m running it at 9×400 (3600MHz) at 1.49V loaded on a Gigabyte EP45T-Extreme. It is lapped. I’m running 2GB of G.Skill DDR3 1600MHz. All heatsinks on the board are stock and there is no airflow provided anywhere over the board. The video card is a 4850 1GB with VF830 running in the top slot. The board is sitting on my desk alongside my Odin 1200W PSU and DVDRW and HDD drives.
- The watercooling loop I’m using is very untraditional, but allows me to test the way I want to test.
- It consists of an MCR320 + MCR220Res sandwich with three Sanyo Denki “San Ace” 109R1212H1011 fans and 5 (3+2) 120x120x20mm Yate Loons cored out as shrouds. The sandwich allows for high-dissipation ability in a compact setup. The ‘Res’ part of the MCR220Res is used not as a res, but as a drain port.
- For pumps, I use three MCP350s modded to MCP355s. One is attached to an XSPC Res Top and the other two are attached to the EK Dual Turbo Top–all three are in series. The supply voltage of the MCP attached to the XSPC Res Top can be modulated freely between 7.65V and 12.65V; the two MCPs on the EK Dual Turbo Top always run at 12V. I have four pump settings I run with every mount: 1) all three on at full speed, 2) XSPC Res Top only (at 12.65V), 3) XSPC Res Top only (at 10V), 4) XSPC Res Top only (at 7.65V). The ability to consistently vary flow is a huge aspect of my testing.
- I use a Koolance FM17 for my flowrate measurement. I recognize its lack of ‘professionalism’ (compared to a King Instruments flowmeter or something of that ilk) but still use it because it 1) covers the entire range I anticipate testing in (~0.2GPM up to 3GPM), 2) outputs the measured flowrate every second via its RPM wire, which is logged for the entire test and then averaged, and 3) has thus far produced extremely consistent results.
- Loop order: CPU block -> MCR220Res -> Koolance FM17 -> MCR320 -> XSPC Res Top + MCP -> EK Dual Turbo Top + 2xMCP -> CPU block. Air flow order: in -> temp probe array -> MCR320 -> San Ace H1011 -> MCR220Res -> out
- I do a 5 mount test, each with its own TIM application. It takes a ton of extra time (each block takes 5×4×120 minutes, i.e. 40 hours, of testing), but it’s totally worth it. In the words of Martin: “It’s not uncommon at all to see mounting variations as high as 2 degrees or more, so with only one mount, that error is 2 degrees. When you mount 5 times and average those results, your standard deviation is significantly lowered and the overall testing confidence improved. In addition multiple mounts serve as a means to validate data, because each test is carried out again and again, chances are if some variable is affecting results, it will show.”
- I have 10 temperature probes in use: 6 Dallas DS18B20 Digital one-wire sensors on the intake of my sandwich, 4 Intel DTS sensors in the processor.
- For temperature logging, I use OCCT v3.0.0.RC1’s internal CPU polling, which is performed every second on all four DTS sensors and automatically output to .csv files. I also use OCCT for loading the CPU. For intake air temperatures, I use Crystalfontz 633 WinTest b1.9 to log the Dallas temp probe data on my Crystalfontz 633. I also use WinTest b1.9 to log fan RPM and Koolance FM17 flowrate output. Martin et al. have been over the many advantages and qualities of the Crystalfontz + Dallas temp probe combination–it really is a wonderful setup and aids the testing process immensely.
- For processor loading, I find OCCT v3.0.0.RC1 to be extremely competent. It provides a constant 100% load (so long as WinTest b1.9’s packet debugger is fully disabled) and is extraordinarily consistent. It also lets me start both the loading and the logging in one button push, which helps; I begin logging the Crystalfontz data simultaneously. I run a 120 minute program: the first minute is idle, followed by 115 minutes of load, and then 4 minutes of idle. The first 26 minutes of load are thrown out as warmup and only the remaining 90 minutes of load are used for data compilation. During the last 4 minutes of idle, I adjust the pumps so the next 120 minute program can begin immediately.
- For TIM, I use MX-2. It’s plentiful, representative of what a lot of people use, and has no break-in period. I use the dot in the center method and validate all my mounts to be at least “good” visually upon removing the waterblock.
- Like Martin, I have found that simply using processor temperature minus ambient temperature is not adequate. So I mapped out the thermal response of my setup and found that a correction of .216C per degree Celsius was needed. That is, for every 1C below 21C ambient (my arbitrary pivot point), I need to add .216C to the delta to correct it. The opposite is true as well: for every 1C above 21C ambient, I need to subtract .216C from the delta. I then add that corrected delta to 21C and get my adjusted core temperatures for a 21C ambient. I found the .216C correction factor to be very accurate for ambients ranging between 16C and 27C (past that, I did not test). Even with all the correction performed automatically for me, I still try my hardest to maintain a 21C ambient when testing. I would expound on this further, but 1) Martin already did an excellent job and all I did was mirror his technique and testing for my own testbed, and 2) I seem to have too much data for Excel to reliably function–the majority of the time I try to work on the spreadsheet containing all this data, it crashes. Of note: my E6700 has a correction factor of .221 from my tests, so it seems the entire 65nm Core 2 family may share the same factor of ~.22.
- My graphs…they may look a little different than what you’ve seen before, but I feel they’re a great way to show all the individual data points from testing while also highlighting the averages of that data. I’ve termed them Planet/Moon graphs–each data point gets its own moon and 3 moons get averaged into a planet. From there, a line is drawn through the planets (not a trendline, just a regular line with the “smooth line” option checked). For something like flow vs. cooling, I’ve found Excel’s trendlines to be totally incompetent–this applies to HSFs too. In fact, I have yet to see a situation involving flow vs. cooling where they do work.
- While I do 5 mounts, I discard the best and worst mounts and use the data of the middle three. I still show you the data from the worst and best, but it’s not used in the ‘big’ graphs or the averages calculations. I take the middle three to hopefully get a fair representation of what to expect from the block in how it compares to other blocks.
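The ambient correction described above is simple enough to express directly. Here’s a minimal sketch in Python–the function name and example temperatures are mine for illustration, but the .216 factor and 21C pivot are the values from my testing:

```python
def correct_to_21c(core_temp_c, ambient_c, factor=0.216, pivot_c=21.0):
    """Normalize a measured core temperature to a 21C ambient.

    For every degree the ambient sits below the pivot, the raw delta
    (core minus ambient) is increased by `factor`; above the pivot it
    is decreased by the same amount. The corrected delta is then added
    back onto the 21C pivot.
    """
    delta = core_temp_c - ambient_c
    corrected_delta = delta + factor * (pivot_c - ambient_c)
    return pivot_c + corrected_delta

# Example: a 55C core reading at 19C ambient.
# delta = 36, correction = 0.216 * (21 - 19) = 0.432
# adjusted = 21 + 36.432 = 57.432C
print(correct_to_21c(55.0, 19.0))
```

At exactly 21C ambient the correction term vanishes and the reading passes through unchanged, which is why holding a 21C room makes life easiest.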
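The middle-three averaging works out to a simple trimmed mean. A quick sketch (the per-mount temperatures here are made-up numbers, not measured data):

```python
def middle_three_average(mount_temps):
    """Average the middle three of five mount results.

    Sorts the five per-mount temperatures, drops the best (lowest)
    and worst (highest), and averages the remaining three.
    """
    if len(mount_temps) != 5:
        raise ValueError("expected exactly five mounts")
    middle = sorted(mount_temps)[1:4]
    return sum(middle) / len(middle)

# Hypothetical per-mount temperatures (C) at one flow setting
print(middle_three_average([54.1, 53.8, 54.5, 53.2, 55.0]))  # ≈ 54.133
```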
Thermal Test Results
Now finally some results! First up, the big graph with all my data presented as conveniently as possible.
Specific Pumping Power
Now that we have looked at the plotted results, let’s isolate the data into groupings at a specific pumping power. This ignores flowrate and isolates the CPU temperatures at a given pumping power.
- Full Pumping Power: All three MCP355 pumps are on at full speed–more power than an RD-30 and the definition of overkill for just a CPU loop.
- Medium-high Pumping Power: Single MCP355 with Res Top at 12.65V in a simple loop is pretty representative of the amount of flow you can expect in a high-end loop.
- Medium-low Pumping Power: Single MCP355 with Res Top at 10V in a simple loop is fairly representative of the amount of flow you can expect from a multi-block loop or a simple loop with a lesser pump.
- Low Pumping Power: Single MCP355 with Res Top at 7.65V is fairly representative of the amount of flow you can expect from a loop with lots of restriction or a loop with a pump meant for silence and low flow.
Tables of all the formatted data acquired from testing, including all 5 mounts.
Another graph for your enjoyment…it’s something I’ll be using in every waterblock test and comparison: ‘Typical’ performance of a block, i.e., how you can expect it to perform compared to other blocks with little regard to the rest of the loop. The data graphed is flow vs. temp: the flowrate value is the harmonic mean (great for averaging rates) of the 4 settings I tested, and the temp value is the plain average of the 4. It’s a pretty interesting way of representing the data and really shows how, in general, a block performs in regards to both flow and temperatures. I realize the graph is a little bare right now, but it will fill out.
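Collapsing the four pump settings into that single ‘typical’ point can be sketched as follows–the flowrate and temperature numbers below are illustrative only, not measured data:

```python
def typical_point(flow_gpm, temps_c):
    """Collapse per-setting results into one 'typical' point.

    Flowrates are combined with a harmonic mean (the appropriate
    average for rates), temperatures with a plain arithmetic mean.
    """
    harmonic_flow = len(flow_gpm) / sum(1.0 / f for f in flow_gpm)
    mean_temp = sum(temps_c) / len(temps_c)
    return harmonic_flow, mean_temp

# Illustrative values for the four pump settings
flow, temp = typical_point([2.6, 1.5, 1.1, 0.7], [53.0, 53.6, 54.1, 55.0])
print(flow, temp)
```

Note that the harmonic mean is always pulled toward the slowest setting, which is exactly why it suits rate data: one low-flow run shouldn’t be washed out by the overkill three-pump run.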
In terms of performance, it easily bests the best Fuzion V2 configuration I tested previously–the red quad nozzle. This is especially true at high flowrates, where it continues to scale with flow and establishes a noticeable lead over the Fuzion V2. Comparing flowrates, the two were also much more similar than I expected–the GTZ seems to have gained a reputation as a restrictive block, but I found its flowrate (and therefore restriction) at all four pump settings to be very similar to that of the Fuzion V2 + Quad Nozzle. Overall, the performance is really solid, easily the best I’ve tested so far. It also scales well with flow, something I like seeing.
There’s also the matter of the mounting system–it’s awesome. It’s something every manufacturer should take note of and implement in their own designs. It provided extremely consistent results and, more than anything else, was ridiculously easy to use. A little dab of TIM, put the block over the socket, tighten the thumbscrews by hand, finish with a screwdriver, and you have a perfect mount–every time. A Billy Mays infomercial couldn’t even do it justice.
The price ($65 at the time of writing) is also commendable as it seems a lot of flagship blocks are getting more and more expensive these days. The kit includes everything you need for mounting (including the proper backplate, a highly-recommended accoutrement many blocks do not include) as well as both 1/2" and 3/8" barbs and matching Herbie Clips.
Overall, the Swiftech Apogee GTZ is a great block that has no weaknesses. The mounting system is exemplary, the performance is the best I’ve tested so far, its restriction is lower than I expected, it has a complete accessories kit, and the price is lower than other flagship blocks.