Thermal Paste V1.1
Sunday, 23 August 2009 18:37
This is a really quick supplement to my Indigo Xtreme review…a little data, a few words, and we’ll call it done. Due to the brevity of this write-up, I highly recommend reading the full Indigo Xtreme review where I detail all my methods and give full impressions of the numerous other TIMs shown in these charts.
Arctic Silver Ceramique is another old-timer from Arctic Silver. It’s a white, generic-looking paste that’s not to be confused with actual generic paste (which usually ranges from bad to horrible). Ceramique promises to perform and is popular these days thanks to its non-conductive, non-capacitive properties, its very low price, and its high-quantity syringes. Due to popular demand, it was added to this supplement.
Arctic Cooling MX-3 is the successor to the extremely popular MX-2. It promises to perform better while keeping all the non-capacitive, non-curing, non-conductive properties of MX-2. It’s brand new to the scene; I was lucky enough to order some from Petra’s Tech Shop and squeeze in a few tests before transitioning my testbed and going on vacation.
Thermal Testing Methodology/Specification
My TIM tests are a derivative of my waterblock tests. I use Dallas One-Wire DS18B20 temperature probes at various points throughout my watercooling loop and at the air intake to measure temperatures, I use the same pump and block in every test, and I follow good testing practice by performing 5 mounts (when possible). Where applicable, I follow the manufacturer’s installation procedures to the letter. For my TIM tests, I plot temperature vs. time in the form of a 60-minute moving average (or shorter during the first hour of data). Despite the 1°C resolution of the Intel DTS sensors, these tests can be considered statistically very precise due to the immense amount of data acquired by polling every sensor/probe/meter every second over the course of 12 hours. A moving average is used to smooth out the noise associated with this kind of measurement while maintaining a very high precision of information. A typical TIM test, in raw .CSV output, includes roughly 6,500,000 data points per TIM. In the end, all that data can be processed down to one value: what temperature the TIM provides.
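The smoothing described above can be sketched in a few lines. This is a minimal illustration only (not the actual processing script, whose details aren’t published), assuming one temperature sample per second:

```python
from collections import deque

def moving_average(samples, window=3600):
    """Smooth 1 Hz temperature samples with a 3600-sample (60-minute)
    moving average; early values average over however many samples
    exist so far, matching the 'or less for the first hour' behavior."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# e.g. a short run with a tiny 2-sample window for illustration
smoothed = moving_average([60.0, 61.0, 62.0], window=2)
```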
I will be examining two specific components of TIM performance: how long it takes to cure (if within the 12 hour testing time) and what kind of temperatures an end-user can expect.
A single 12hr test per mount with 5 mounts was completed for each TIM. Everything was held consistent between tests and everything was logged.
- The processor I’m using for this test is my C0/C1 i7 920. I’m running it at 21×200 (4200MHz) at 1.49V loaded on a Gigabyte EX58-UD5. It is unlapped. I’m running 2GB of G.Skill DDR3 1600MHz. All heatsinks on the board are stock and I have fans blowing over the MOSFET area for added stability. The video card is a 4850 1GB with VF830 running in the top slot. The board is sitting on my desk alongside my Odin 1200W PSU and DVDRW and HDD drives.
- The watercooling loop I’m using is very untraditional, but allows me to test the way I want to test.
- It consists of two MCR320s with three pairs of Yate Loon D12SH-12 fans in push/pull on each radiator. I use a D-Tek DB-1 pump in the radiator subloop.
- For the block subloop, I use a Swiftech GTZ for its consistent mounting and a Laing D5 at setting 5. Also in the loop are three Laing DDC3.2s (turned off) as well as a Koolance FM-17 flowmeter to monitor and ensure there is no change in flowrate during a test or between tests.
- I use a shared Bitspower reservoir between the two subloops.
- I do a five-mount test, each mount with its own TIM application and a full cleaning in between. I’m fond of semi-discarding the best and worst mount data: I present it to the reader, but my final analysis and numbers are based on the median three mounts. As a reviewer, I feel it is my duty to present the reader with performance numbers that represent a product’s typical performance. Oftentimes the best and worst mounts are somewhat anomalous; by performing five mounts and focusing on the middle three (in terms of thermal performance), I feel I best represent the expected performance of a product.
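That best/worst trimming is simple to express. Here’s a hypothetical sketch (function name mine), taking one mean temperature per mount:

```python
def typical_performance(mount_temps):
    """Average the median three of five mounts, discarding the single
    best and single worst mount temperatures."""
    assert len(mount_temps) == 5
    middle_three = sorted(mount_temps)[1:-1]
    return sum(middle_three) / len(middle_three)
```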
- I have 26 temperature probes in use: 22 Dallas DS18B20 Digital one-wire sensors and 4 Intel DTS sensors in the processor.
- For temperature logging, I use OCCT v3.1.0’s internal CPU polling, which reads all four DTS sensors every second and automatically outputs to .CSV files. I also use OCCT for loading the CPU. For the air intake and various water temperatures, I use Crystalfontz 633 WinTest b1.9 to log the Dallas temp probe data on my Crystalfontz 633. I also use WinTest b1.9 to log pump RPM and the Koolance FM17 flowrate output. I have found, much to my chagrin, that programs like RealTemp, CoreTemp, Everest, etc. all have their own massive flaws in temperature logging that prevent them from being used for such a test. These flaws range from data-formatting issues, to sensor-polling issues, to random yet common stalls in the software (especially when logging).
- For processor loading, I find OCCT v3.1.0 to be extremely competent. With the Small Data Set setting, it provides a constant 100% load (so long as WinTest b1.9’s packet debugger is fully disabled) and is extraordinarily consistent. It lets me start the loading and the logging simultaneously in one button push, which helps. I immediately also start logging the Crystalfontz data via WinTest b1.9. I run a 12-hour-and-5-minute program: the first minute is idle, then 12 hours of load, then 4 minutes of idle.
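For reference, that schedule can be written out explicitly. A trivial sketch (the constants are mine) just to confirm the arithmetic:

```python
# Hypothetical encoding of the test program's phases (durations in seconds).
SCHEDULE = [
    ("idle", 60),          # first minute idle
    ("load", 12 * 3600),   # 12 hours of OCCT load
    ("idle", 4 * 60),      # 4 minutes of idle at the end
]

total_seconds = sum(duration for _, duration in SCHEDULE)
# total_seconds works out to 12 hours and 5 minutes
```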
- I have found that simply using processor temperature minus ambient temperature is not adequate for Intel’s 65nm Core 2 processors. However, I have found that ambient and core temps scale perfectly fine (1:1) on the i7.
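Because the i7’s core and ambient temperatures track 1:1, the ambient-normalized result is a simple per-second subtraction. A minimal sketch (names mine):

```python
def delta_over_ambient(core_temps, ambient_temps):
    """Subtract the matching ambient reading from each 1 Hz core
    reading. Valid for i7, where core temp scales 1:1 with ambient
    (not for 65nm Core 2, per the text above)."""
    return [core - amb for core, amb in zip(core_temps, ambient_temps)]
```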