The Grid in My Basement, part 3: That Sinking Feeling

Size matters. At least when you are building rackmount machines. Of course, had I not been suffering from sleep deprivation when I made my hardware purchasing decisions, I would have realized that you can’t put a MASSIVE heat sink into a tiny space, but such is life.

Anyway, the very spiffy Blue Orb II CPU cooler is never, ever gonna fit in the 2U case I bought. That was evident by inspection before I even unpacked the coolers. Had I done my homework on the motherboard and case dimensions I would have realized that a package with a combined fan + heatsink height of 90.3mm would never fit. Not only that, but the heatsink measures 140 x 140mm in length and width, which means it might not fit on the motherboard at all. There’s a huge row of capacitors next to the retention module base, and the DIMM sockets crowd in from the other side. This is all badness from the perspective of installing a massive heatsink.
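
Just to put numbers on it: a 2U case is only 3.5 inches (about 88.9mm) tall on the outside, so even before you subtract anything for sheet metal, standoffs, and the CPU itself, a 90.3mm cooler is a lost cause. Here’s a quick sketch of that math – the deductions below are rough guesses rather than measurements:

```python
# Back-of-the-envelope height budget for a CPU cooler in a 2U chassis.
# The deductions below are rough guesses, not measured values.

RACK_UNIT_MM = 44.45                   # 1U = 1.75 inches
CASE_HEIGHT_MM = 2 * RACK_UNIT_MM      # ~88.9 mm external height for 2U

SHEET_METAL_MM = 3                     # assumed: top and bottom panels
STANDOFFS_AND_BOARD_MM = 8             # assumed: motherboard standoffs + PCB
CPU_STACK_MM = 8                       # assumed: socket, CPU, heat spreader

available = CASE_HEIGHT_MM - SHEET_METAL_MM - STANDOFFS_AND_BOARD_MM - CPU_STACK_MM
blue_orb_ii = 90.3                     # fan + heatsink height from the spec sheet

print(f"Room above the CPU: ~{available:.1f} mm")
print(f"Blue Orb II needs:   {blue_orb_ii} mm -> fits: {blue_orb_ii <= available}")
```

Even ignoring the deductions entirely, 90.3mm does not fit inside 88.9mm. Doomed from the start.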

So with a heavy sigh I file for my first RMA from Newegg and package the Orbs for shipment back. Bummer drag, they looked so cool too. So I start looking for an appropriate K8 heatsink for my new nodes, and the fun really begins.

First, you may be wondering why I didn’t use the cooler that came with the CPU when I bought it. Well, in order to save money I bought the OEM version of everything I could. That eliminates a lot of unnecessary packaging, instruction manuals, and in some cases features – like the CPU cooler on my AMD X2s. So I need to buy a cooler on the aftermarket.

The assumption that manufacturers make is that you WILL overclock if you are buying an aftermarket cooler. The heatsinks reflect this assumption, and most are massive. Looking at heatsinks close up is sorta like looking at big scary machinery. Pipes and tubes run in all directions, and massive banks of fins jut out at weird angles and rise up toward the sky, towering over the motherboard. None of these devices are particularly well suited to the tight space of a 2U (or, heaven help you, a 1U) case.

I start shopping around for low-profile CPU coolers for 2U cases and run into several problems. First, there aren’t too many cooler vendors out there that make this stuff. Second, the ones that do aren’t terribly interested in Socket 939 applications. Third, the low-profile stuff tends to be crazy expensive – $95 for a low-profile heat sink and fan? No thanks…

So I pick up a ruler, open up the case, and start measuring. And measuring. After a good deal of plotting, I calculate that my heat sink can be no more than 70 x 70 x 65mm. And then I start shopping. And shopping.

Finally, after two evenings literally wasted googling around, I hit on a fan/heatsink combo sold by ASUS – the same manufacturer that makes the motherboard I am using. I look at the height dimension and am psyched – the combined total of fan plus heatsink is only 55mm tall! The bad news is that the heatsink runs 77 x 68 x 40mm – meaning that it’s potentially too big. I look on the ASUS website (sidebar, your honor: remind me to rant and rant later about web sites that provide everything BUT the information you need) and find nothing helpful regarding compatibility with their own motherboards.

So I reason as follows: The height dimension will fit just fine; the heatsink will probably fit an ASUS motherboard since ASUS makes both; the absence of a compatibility list means it’s compatible with all their offerings or somebody is just lazy. So I bite the bullet and order up the ASUS Crux K8 MH7S 70mm Hydraulic CPU Cooling Fan with Heatsink and hope for the best.
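
Written out as a quick sanity-check script (cooler numbers from the product listing, envelope numbers from my ruler), the gamble looks like this – note that it flags the length as 7mm over budget, which is exactly the bit I was crossing my fingers about:

```python
# Sanity check: ASUS Crux K8 MH7S dimensions vs. the envelope measured in the case.
envelope = {"length": 70, "width": 70, "height": 65}   # mm, measured with a ruler
cooler   = {"length": 77, "width": 68, "height": 55}   # mm, fan + heatsink, per the listing

for dim, limit in envelope.items():
    verdict = "OK" if cooler[dim] <= limit else f"OVER by {cooler[dim] - limit} mm"
    print(f"{dim:>6}: {cooler[dim]} mm vs. {limit} mm limit -> {verdict}")
```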

Two days later I get the parts, and a couple days after that I open up the build-in-progress machine and install the heatsink. Have I mentioned how stressful installing a heatsink can be? I mean, there you are with all this expensive hardware that looks pretty darn fragile, and you are pushing down on it with no small amount of force, trying to more-or-less permanently mate the heatsink to the CPU. Every time I do this I expect the motherboard to crack or something equally awful.

Good news! The new cooler fits perfectly. It clears the lid of the case beautifully, and the dimensions of both heatsink and fan are within the perimeter of the retention module.

The Grid in My Basement, part 2: Parts is Parts…

When I began this project the original goal was “reasonable performance for $500 per machine”. That is turning out to be a bit of a challenge, especially since I decided not to cut corners on the rackmount chassis. There is nothing like working inside a case for an hour and emerging with half a dozen cuts on your hands from rough edges to cost-justify a clean, well-made chassis. Further challenging the $500 bottom line was the desire to run either a dual-core or dual-CPU configuration.

Form Factor, Topology, etc.
Socket 939 is fading away, and my research showed that prices for 939 gear were falling right along with it. So as a money-saving technique, I decided to actively seek out Socket 939 hardware for this project. I also decided to focus on a good quality motherboard while not necessarily using a server motherboard…this may turn out to be a poor decision – we’ll see once things are up and running. After reviewing the data sheets and specs on a number of motherboards I decided to use an ATX form factor.

Performance and Cost Considerations

I want good performance without breaking the bank. While a sweet dual-CPU/dual-core system with tons of memory and a massive SCSI array would make me smile, it would put the project way beyond budget. So here are the tradeoffs I made:

  • Running a single dual-core CPU instead of dual CPUs with dual cores. This means that each box will only be 2-way instead of 4-way; then again, with 4 nodes running a clustering tool like openMosix I’ll have an 8-way machine, which is still pretty cool (see the sketch after this list).
  • Have you noticed how pricey memory is lately? I’ll start out with 1GB per node, but make sure that my motherboard can support at least 4GB for future expansion. Note to self: hoard memory later when it’s cheap and make a killing on eBay when prices go up again.
  • SAS or SCSI gives killer I/O performance, but at a price. I’ll build these machines with SATA-II devices in the 250-320GB range, perhaps spending a little more for a larger on-device cache.
  • My original plan was to build a blade server system, using DIY parts from a vendor like ATXBlade. But in analyzing the cost – $550 for blade storage unit, $325 for each blade chassis – I decided that I didn’t really need to build a dense server farm. After all, I have a full height rack and will probably not build out enough systems to exceed the capacity of the rack.
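
To make the first bullet concrete: openMosix migrates ordinary Linux processes (not threads sharing memory), so the usual trick is to launch more independent worker processes than one box has cores and let the cluster spread them across the 4 nodes. Here’s a toy sketch, with work() standing in for real number crunching:

```python
# Toy workload: launch 8 independent CPU-bound processes on one node and let
# a single-system-image layer like openMosix migrate them across the cluster.
# work() and the iteration count are placeholders, not a real job.
from multiprocessing import Process

def work(worker_id: int, iterations: int = 10_000_000) -> None:
    total = 0
    for i in range(iterations):
        total += i * i          # stand-in for actual computation
    print(f"worker {worker_id} finished ({total})")

if __name__ == "__main__":
    workers = [Process(target=work, args=(n,)) for n in range(8)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```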

Parts Manifest
After a significant amount of consideration, here is the parts manifest for each node:

Here is the parts manifest for the chassis:

The links above are all to Newegg.com product pages, because that’s where I bought everything. The CPU I spec’d above is presently not available, but this CPU has very similar specs.

Motherboard selection was driven by the following criteria:

  • Socket 939, ATX form factor
  • Able to support AMD Athlon X2
  • At least 4GB memory
  • Support for at least 4 SATA-II devices
  • On-board RAID support for RAID-0, RAID-1, RAID 0+1, and JBOD
  • On-board video support
  • Front Side Bus speed of at least 1000MHz
  • On-board gigabit network support

The up-to-speed reader will note that the motherboard I chose does not come with on-board video support. I noticed that too – AFTER I had ordered the motherboards. There is a whole story behind this that I’ll write down later. There is also a question about SATA performance – some spec sheets state the motherboard is SATA-I (1.5Gb/sec) while other spec sheets state it’s SATA-II (3.0Gb/sec). I think the board was rev’d at some point and this may have been part of the rev. At any rate, if it turns out to be SATA-I then I can still do some benchmarking and perhaps install a SATA-II card later.
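
If I do end up benchmarking, something like the crude sequential-read timer below is where I’d start. The device name is an assumption, it needs root to read the raw device, and a single spinning disk of this vintage won’t saturate even a SATA-I link, so this measures the drive more than the interface:

```python
# Crude sequential-read throughput check against a raw block device.
# /dev/sda is an assumed device name; run as root. One spinning disk won't
# saturate SATA-I (~150 MB/s) or SATA-II (~300 MB/s) link bandwidth, so this
# mostly tells you what the drive itself delivers.
import time

DEVICE = "/dev/sda"
CHUNK = 4 * 1024 * 1024          # 4 MiB per read
TOTAL = 1024 * 1024 * 1024       # stop after 1 GiB

read_bytes = 0
start = time.time()
with open(DEVICE, "rb", buffering=0) as dev:
    while read_bytes < TOTAL:
        block = dev.read(CHUNK)
        if not block:
            break
        read_bytes += len(block)
elapsed = time.time() - start

print(f"Read {read_bytes / 2**20:.0f} MiB in {elapsed:.1f} s "
      f"({read_bytes / elapsed / 1e6:.1f} MB/s)")
```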

The Grid in My Basement, part 1

The name of the game is parallelism…in short: take apart a problem, break it up into independent pieces, and run as many of those independent pieces at once on separate computers (well, at least on separate CPUs). This is nothing new…parallel computing has been around since Cro-Magnon Geek solved problems by dropping boxes of punch cards bearing almighty FORTRAN into card-reading machines and then lurking impatiently in the line printer room for pages of greenbar while multi-gazillion-dollar CPUs the size of refrigerators cogitated about his fast Fourier transforms and whatnot. What is compelling in THIS century is that you can do it cost-effectively. OK, not even cost-effectively – downright cheap.

In its cheapest form, parallel computing has become nearly free. Witness the Elastic Compute Cloud over at Amazon (buzz kill: EC2 is in beta and they aren’t accepting new users at the moment; double plus bad: the largest number of machines you can have is something like 20), where you can rent time on virtual machines for cents per hour. In more expensive forms parallel computing is, well, still pricey. If you’re a high-energy physicist or financial type with a big grid and you are running name-brand hardware, you are putting up some mighty big dollars.

So I find myself – as an employed practitioner wanting to test my employer’s new software, and as an entrepreneur designing and trying new concepts in search of the Next Big Geeky Thing – wanting access to my very own grid. Thus is born the idea of the Grid in my Basement, or as I prefer to call it: The Data Basement.

My plan is to build and deploy a basic grid capable of doing real work in a cost-effective manner. I want to accomplish this with some fairly real-world parameters, so I need real computing horsepower. So after hours and hours of combing through CPU specs, motherboard specs, CPU cooler specs (naw, I would NEVER overclock…), power supply specs and the like, I now have boxes and boxes of cool stuff en route from my good friends at Newegg (what self-respecting techno-weenie doesn’t love Newegg?). Over the next few weeks I’ll write about the Data Basement as it gets built out and evolves into something useful. With any luck I’ll also publish a tutorial with photos about rolling your own rackables.

Budgetary Issues and Physical Plant

I need to build my Data Basement on a budget. After all, I still need to pay the mortgage, feed the family, buy Hoosiers for the racecar, and pay for all the electricity my spiffy new grid will need. I randomly chose a budget in the range of $3,000 – $3,500. The grid will live in the basement, and I want to save space there, so I will use a single standard rack that I picked up a while back. The rack will sit on a wooden platform I’ll build from scrap wood (cheap protection in case of minor flooding). Since my basement has reasonably high humidity, I’ll put in a dehumidifier plumbed to a waste line to keep humidity levels under control. The grid will pull around 1.4kW (about 13 amps @ 110VAC), so there will not be a need for any special electrical work. I want UPS support on the grid eventually, but initially I’ll just use some spike filters on each AC line. I already have broadband via DSL with a wired/wireless router, but I’ll need a gigabit switch with several ports so that the grid nodes can communicate with each other at speed.
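
For what it’s worth, here is roughly where the 1.4kW / 13 amp figure comes from. The per-node draw is a worst-case guess, not a measurement:

```python
# Rough power-budget arithmetic for the Data Basement.
# WATTS_PER_NODE is an assumed worst-case draw, not a measured number.
NODES = 4
WATTS_PER_NODE = 350      # CPU, drives, fans, and power-supply losses
LINE_VOLTAGE = 110        # VAC

total_watts = NODES * WATTS_PER_NODE
amps = total_watts / LINE_VOLTAGE
print(f"Estimated load: {total_watts} W -> {amps:.1f} A @ {LINE_VOLTAGE} VAC")
# Within reach of an ordinary household circuit, hence no special wiring needed.
```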

The next posting will discuss parts selection for the individual compute nodes.