Core i7 CPU Block Roundup #1
Saturday, 04 July 2009 12:01
Before I start, I’d like to thank Swiftech for sending out a sample of their Apogee GTZ for testing and Koolance for sending out the new midplate and updated mounting kit for the CPU-350. They’ve both been a great help and are great companies to work with.
The Swiftech Apogee GTZ is Swiftech’s recent offering in the high-end waterblock market. It brings a new base design featuring 225µm (0.009″) micro structures over the center of the block as well as a direct impingement design that directs and accelerates flow over the tiny pins. The external appearance is kept simple with just a black acetal top with some branding features and an interchangeable metallic mounting plate. The other big feature of the GTZ is the new mounting system–it is designed to be the easiest and most consistent mounting system Swiftech has shipped to date. It features large thumbscrews and a backplate that provide the right amount of mounting pressure on every mount.
The Koolance CPU-350 is Koolance’s current flagship block. Like the Swiftech, it has a micro-pin design and an impingement structure to accelerate flow and increase turbulence. However, the similarities end there–the pin density is lower, the flow is directed at the base rather than across the base, the midplate is interchangeable, and the overall construction is vastly different. The mounting system, although comprised of high-quality, custom-made parts, is an infinite-range mounting system rather than Swiftech’s fixed mounting pressure system.
The D-Tek Fuzion V2 is a simple successor to the very successful Fuzion V1. The external appearance and overall construction of the block have been modernized a bit and the popular Pro Mount was made standard. The base and overall design (from a performance point of view) are relatively unchanged, although some minor changes were made. Overall, it was an evolutionary change to an already popular high-performance block.
This test will focus on how each block’s performance changes with flow, in comparison with the others.
Thermal Testing Methodology/Specification
My waterblock tests are an evolution from previous tests from Martin, skinnee, et al. I’m utilizing mostly the same measurement hardware and technique, but am also going to be varying pumping power through the loop. What will result is a flow vs. temperature curve for each block to compare against each other. It’ll give a lot of info about a block: how responsive it is to an increase or decrease in flow, how that response compares to other blocks, how much impact it has on the flowrate of a system compared to other blocks, and ultimately, how each block compares to another block overall.
A total of 4 tests per mount across 5 mounts were completed (20 tests per block). Each test was run at a different flowrate and everything was logged.
- The processor I’m using for this test is my C0/C1 i7 920. I’m running it at 21×196 (4116MHz) at 1.46V loaded on a Gigabyte EX58-Extreme. It is unlapped. I’m running 2GB of G.Skill DDR3 1600MHz. All heatsinks on the board are stock and I have fans blowing over the MOSFET area for added stability. The video card is a 4850 1GB with VF830 running in the top slot. The board is sitting on my desk alongside my Odin 1200W PSU and DVDRW and HDD drives.
- The watercooling loop I’m using is unconventional, but it allows me to test the way I want to test.
- It consists of an MCR320 + MCR220Res sandwich with three Sanyo Denki “San Ace” 109R1212H1011 fans and 5 (3+2) 120x120x20mm Yate Loons cored out as shrouds. The sandwich allows for high-dissipation ability in a compact setup. The ‘Res’ part of the MCR220Res is used not as a res, but as a drain port.
- For pumps, I use three MCP350s modded to MCP355s. One is attached to an XSPC Res Top and the other two are attached to the EK Dual Turbo Top–all three are in series. The supply voltage of the MCP on the XSPC Res Top can be varied freely between 7.65V and 12.65V; the two MCPs on the EK Dual Turbo Top always run at 12V. I run four pump settings with every mount: 1) all three on at full speed, 2) XSPC Res Top only (at 12.65V), 3) XSPC Res Top only (at 10V), 4) XSPC Res Top only (at 7.65V). The ability to consistently vary flow is a huge aspect of my testing.
- I use a Koolance FM17 for my flowrate measurement. I recognize its lack of ‘professionalism’ (compared to a King Instruments flowmeter or something of that ilk) but still use it because it 1) covers the entire range I anticipate testing in (~0.2GPM up to 3GPM), 2) outputs the measured flowrate every second via its RPM wire, which is logged for the entire test and then averaged, and 3) has thus far produced extremely consistent results.
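To illustrate how those per-second readings collapse into a single number, here’s a minimal Python sketch. The pulses-per-gallon constant is a hypothetical placeholder (not the FM17’s actual calibration), and the function names are my own:

```python
# Sketch: averaging a flowmeter's per-second pulse counts into one flowrate.
# PULSES_PER_GALLON is a hypothetical calibration constant, not the FM17's
# real figure -- substitute the value for your own meter.
PULSES_PER_GALLON = 509.0

def flow_gpm(pulses_per_second: float) -> float:
    """Convert a one-second pulse count into gallons per minute."""
    return pulses_per_second * 60.0 / PULSES_PER_GALLON

def average_flow(pulse_log: list[float]) -> float:
    """Average every per-second reading from the test into one flowrate."""
    return sum(flow_gpm(p) for p in pulse_log) / len(pulse_log)
```

Logging every second and averaging the whole run is what smooths out the FM17’s second-to-second jitter.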
- Loop order: CPU block -> MCR220Res -> Koolance FM17 -> MCR320 -> XSPC Res Top + MCP -> EK Dual Turbo Top + 2xMCP -> CPU block. Air flow order: in -> temp probe array -> MCR320 -> San Ace H1011 -> MCR220Res -> out
- I do a 5-mount test, each mount with its own TIM application. It takes a ton of extra time (each block takes 5x4x120min to test), but it’s totally worth it. In the words of Martin: “It’s not uncommon at all to see mounting variations as high as 2 degrees or more, so with only one mount, that error is 2 degrees. When you mount 5 times and average those results, your standard deviation is significantly lowered and the overall testing confidence improved. In addition multiple mounts serve as a means to validate data, because each test is carried out again and again, chances are if some variable is affecting results, it will show.”
- I have 10 temperature probes in use: six Dallas DS18B20 digital one-wire sensors on the intake of my sandwich and four Intel DTS sensors in the processor.
- For temperature logging, I use OCCT v3.0.0.RC1’s internal CPU polling, which is performed every second on all four DTS sensors and is automatically output to .csv files. I also use OCCT for loading the CPU. For intake air temperatures, I use Crystalfontz 633 WinTest b1.9 to log the Dallas temp probe data on my Crystalfontz 633. I also use WinTest b1.9 to log fan RPM and the Koolance FM17’s flowrate output. Martin et al. have been over the many advantages and qualities of the Crystalfontz + Dallas temp probe combination–it really is a wonderful setup and aids the testing process immensely.
- For processor loading, I find OCCT v3.0.0.RC1 to be extremely competent. It provides a constant 100% load (so long as WinTest b1.9’s packet debugger is fully disabled) and is extraordinarily consistent. It also lets me start both the loading and the logging with one button push, which helps; I immediately start logging the Crystalfontz data simultaneously. I run a 120-minute program: the first minute is idle, then 115 minutes of load, then 4 minutes of idle. The first 25 minutes of load are thrown out as warmup and only the remaining 90 minutes of load are used for data compilation. During the last 4 minutes of idle, I adjust the pumps so I can immediately begin the next 120-minute program.
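The timing windows above can be sketched in Python, assuming one logged sample per second (the helper name is mine, not anything OCCT provides):

```python
# Sketch: carving the steady-state window out of a 120-minute, 1Hz log.
# The program is 1 min idle + 115 min load + 4 min idle; the first 25 min
# of load are warmup, and only the following 90 min are compiled.
def load_window(samples: list) -> list:
    """Return the 90 minutes of steady-state load samples."""
    idle_lead = 1 * 60   # leading idle minute
    warmup = 25 * 60     # load warmup that gets thrown out
    keep = 90 * 60       # steady-state samples used for data compilation
    start = idle_lead + warmup
    return samples[start:start + keep]
```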
- For TIM, I use MX-2. It’s plentiful, representative of what a lot of people use, and has no break-in period. I use the dot in the center method and validate all my mounts to be at least “good” visually upon removing the waterblock.
- Like Martin, I have found that simply using processor temperature minus ambient temperature is not adequate for Intel’s 65nm Core 2 processors. However, I have found that ambient and core temps scale perfectly fine (1:1) on the i7.
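Given that 1:1 scaling, the metric reduces to a simple delta. A minimal sketch (the function name is mine):

```python
# Sketch: loaded core temperature minus intake air temperature, relying on
# the 1:1 ambient/core scaling observed on the i7.
def delta_t(core_logs: list[list[float]], ambient_log: list[float]) -> float:
    """Average of all DTS core readings minus average intake temperature."""
    core_avg = sum(sum(log) for log in core_logs) / sum(len(log) for log in core_logs)
    ambient_avg = sum(ambient_log) / len(ambient_log)
    return core_avg - ambient_avg
```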
- My graphs may look a little different than what you’ve seen before, but I feel they’re a great way to show all the individual data points from testing while also highlighting the averages of that data. I’ve termed them Planet/Moon graphs–each data point gets its own moon and three moons get averaged into a planet. From there, the planets get a line drawn through them (not a trendline, just a regular line with the “smooth line” option checked). For something like flow vs. cooling, I’ve found Excel’s trendlines to be totally incompetent; this applies to HSFs too. In fact, I have yet to see a flow vs. cooling situation where they do work.
- While I do 5 mounts, I discard the best and worst mounts and use the data of the middle three. I still show you the data from the worst and best, but it’s not used in the ‘big’ graphs or the average calculations. I take the middle three to hopefully get a fair representation of what to expect from the block and how it compares to other blocks.
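That middle-three rule can be sketched as follows (my naming, assuming one averaged result per mount):

```python
# Sketch: of five per-mount results, drop the best and worst mounts and
# average the middle three.
def middle_three_mean(mount_results: list[float]) -> float:
    assert len(mount_results) == 5, "expects one result per mount, five mounts"
    middle = sorted(mount_results)[1:4]  # discard best and worst
    return sum(middle) / 3.0
```

Because the extremes are dropped before averaging, one unusually good or bad TIM spread can’t skew the block’s reported result.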