Thursday, January 08, 2026

Data Center Server computers and their disk drives

512 vs 520

Learned this one the hard way: I didn't understand some details before I spent some money. Fortunately it was only a little bit of money. One of the sellers tried to warn me, but I didn't know what he was saying, because I didn't understand the situation yet.

There are several features/details about disk drives you want to understand before you buy any, so you don't end up with units you cannot use.

I am only addressing the machines I am using, so others may be different. Those machines are:

HP DL360 Gen 6/7/8/9

HP DL380 Gen 7/8/9

All of these machines have internal RAID hardware. Sometimes that hardware is a removable daughter board, sometimes it's directly on the mobo; I have both. I have controllers with 410, 420, and 440 model/version numbers.

The RAID software interface is an old-school text interface on Gen 6 and 7, with a mousable GUI for Gen 8/9 (and presumably 10/11).

Your first criterion of concern is the physical size, either 2.5 inch or 3.5 inch, and the quantity that the machine can mount.

For the machines that will take 3.5 inch drives: these come in both 1U and 2U chassis. You will need matching caddy trays that are 3.5 inch and match the machine; again, Gen 6/7 trays are different from Gen 8/9+. Finding the right screws is important too, and the screws for a 3.5 inch drive are different from those for a 2.5 inch. Quantities that can be installed are 4 in a 1U server and 12 or 15 in a 2U; imagine having 15 28-TB drives in your machine, that's 420 terabytes. (I have seen pictures of another brand's 2U server taking 24 drives!)

For the machines that will take 2.5 inch platter or SSD: these also come in both 1U and 2U, and quantities vary, from 5 to 8 to 10 to 16 to 24. Platter drives in 2.5 inch come in three rotation speeds: 7200, 10K, and 15K RPM. The most common is the 10K. Capacity goes from 36GB to 72GB to 146, 300, 450, 500, 600, and 900GB, then 1.2TB, and I think there's a 1.8TB unit. I have a few of nearly all those sizes. For a data center you really want the units from 500GB to 1.2TB. Almost all of those are 10K, which is good.

The faster a platter spins the faster you can read data from it. 

SSDs are 5-10X faster right out of the box. You really want to use them as much as you can, especially given that in the same physical size you can get up to 8TB per unit. They are a good bit more expensive. eBay will sell you a 500GB 10K 2.5 inch platter drive for $20, or a 900GB for $25. These are pretty good deals.

All these details you can read on the device label.

The thing you CAN'T read on the label is what the bytes-per-block/sector number is. There are several possibilities here: 512, 520, and I have one drive that is 4096.

My machines will only take 512. The 520 format is not really about capacity; the extra 8 bytes per sector are used by proprietary storage arrays for their own per-sector checksum/integrity information, which is exactly why those drives are useless in an ordinary 512-byte machine. The 4096 is the newer large-sector "Advanced Format" layout; my one drive like that is 10TB.
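If you can get a drive attached to a Linux box where it shows up as a raw device (behind an HBA or a plain SAS/SATA controller, not hidden behind a Smart Array in RAID mode), you can read the sector size from the command line. A rough sketch; the device names here are only examples and will differ on your system:

    lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC           # logical and physical sector size for each disk
    cat /sys/block/sda/queue/logical_block_size    # plain number: 512 or 4096
    sg_readcap /dev/sg2                            # from the sg3_utils package; prints the logical block length

A 520-formatted drive may not even get a /dev/sdX block device, since Linux won't use that sector size, but the SCSI generic device (/dev/sgX) should still be there, and that is what the sg3_utils tools talk to.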

512 vs 520 

Before you click Buy It Now, make sure you know the answer to this, else you just bought a brick.

-------------

That said, it is my casual understanding from reading online that it is possible to convert one to the other, but the process sounds a little complicated, and it cannot be done on my machines.

What I have read of that conversion: first off, standard RAID on my machines cannot do it, and will barely even tell me about it. Install one, go to the Gen 8/9 GUI-based Smart Storage Administrator, look at the details on one of the physical drives, and the drawer window on the right will say something like "can't be used for RAID" and will maybe also say "520". So you are going to have to take this drive to another machine that has a daughter card (or PCI card) that is itself configured for HBA mode, not RAID, so that software will have more direct access to the drive; then some other software tools (ShredOS, apparently) will allow you to turn a 520 into a 512. I have never even gotten close to trying this. One of these days, maybe...
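From what I have read, the tool that actually does the conversion is sg_format, from the Linux sg3_utils package, run against the raw drive while it sits behind that HBA-mode controller. I have not tried this, so treat it as a sketch rather than a recipe; the device name is an example, the format pass destroys everything on the drive, and it can take many hours:

    sg_scan -i                               # list the SCSI generic devices and find your drive
    sg_readcap /dev/sg3                      # confirm it currently reports a 520-byte block length
    sg_format --format --size=512 /dev/sg3   # low-level reformat to 512-byte sectors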

Wednesday, January 07, 2026

How to build a small Data Center, from scratch

Data Center Article 1.

So I am now trying to build a Data Center that is functional, but not an ugly monster like those in Northern Virginia.

Step One is to reuse an existing building, so I'm not starting a "greenfield build".

There are lots of buildings around that are kinda just warehouses, or at least warehouse-like in that they are big empty boxes. Most towns of any size have buildings that are empty and are the right minimum size (which I would say is about 8000 sq ft, although 5000 would work too). Anything smaller is probably too expensive to rent.

Step Two is to make sure that building has adequate power. I'm aiming towards about 10 kW per rack, which means NOT doing big AI. If a building has 2-5 megawatts present, or available, that's adequate for a small space.
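To put rough numbers on that: 2 MW at 10 kW per rack works out to about 200 racks' worth of power, before you subtract whatever the cooling plant itself draws.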

Step Three is to worry about cooling--are you going to need anything special? For 2-3 MW, probably not. You'll want more cooling than is already present in the building unless it was a cold-storage building; I would figure on one megawatt of cooling for 3 megawatts of computing.

Step Four is deciding what software services you are supporting.

Step Five is coming up with a rack design.

Step Six is coming up with a network design.

From 2010 through 2014 I was a user of what we now call "cloud services", except that didn't exist under that name at the time. But I needed what is now "rentable CPU", and some reasonable amount of data storage. So what I am trying to build to provide for others is what would have satisfied what I needed, but couldn't get, 10-15 years ago.

I will address all of those things in this series; some are of course harder than others.

I will also have some sidebar articles about details (such as the ProxMox-on-HP-Gen7).

I am primarily using ProxMox (https://proxmox.com/en/) for Virtual Machines; I will probably also have some Windows Server 2022 Data Center for folks who need Windows.

I am using NextCloud for casual data storage available to outside users. 

For large internal data storage availability, I am still investigating tools (things like CEPH, for example). I actually started to write my own version of this a few years ago, after having thought about it for a few years around 2011-2012; I ended up not needing it on that project, but that was because I got customer permission to buy a SAN.

Data Center: How to install ProxMox 9 on older servers

Keywords: ProxMox 9, HP DL380 Gen7, installer solved

How to create a small Data Center, Article 2.

How to install ProxMox 9.1 (and likely a few slightly older but still recent versions) onto older rack-mount servers

It took me quite a while, over a year I think, to get to this answer. Not because I spent all that time looking for the answer, but because I never really found a good enough, or complete, answer.

So here's the problem: ProxMox 9 (8 as well, and apparently some late versions of 7) breaks on older hardware in a way that was not deliberately introduced by the creators (www.proxmox.com), and was not detected by them either, because they aren't test-installing on older hardware (specifically HP DL360/380 Gen 6/7).

What if you need to install on older hardware, for whatever reason? You seem to be SOL. This problem happens on HP DL360 Gen 6 and 7. The problem doesn't happen on Gen 8/9/10/11, because that's newer hardware, newer video hardware specifically, that does in fact include this video mode properly.

But you aren't SOL, although finding out how to get around the problem is non-trivial.

The problem is that while you can get to the initial splash screen and menu, you can't proceed beyond that, because the screen goes black and never recovers.

Why? The newer versions of the Linux installer attempt to set a video mode on the mobo video electronics that does not exist, so the screen goes black and stays that way. The installer doesn't detect the video mode failure, but it can't continue. This problem has nothing to do with a video driver; the hardware simply doesn't have the mode, so there isn't a solution like "well, just update your drivers" like you always have with Windows.

If you read widely enough online, you find a lot of comments to questions where people ask "why is this happening?" but not really much good in the way of answers.

So here's the solution (I did this successfully on five Gen 7 machines, and will be testing on Gen 6 shortly): 

Before you even get to the install, make sure your RAID config is what you want.

Then, when you get to that initial splash screen with the four-item list of install options: the first one is "Graphical", and on Gen 8/9/10 this works fine; the second one is "Terminal UI", and this is the one you want to use.

The first menu option, "Graphical", is highlighted, so press the down arrow to move the highlight to the second choice, "Terminal UI". Now press "e" (which in this case is short for "edit"). This is a hidden option built into GRUB. Deep in the background there is a file on your installer USB stick that holds a little "grub script" that is being executed to do this install. You can edit this script on-the-fly at this point and add a special codeword in the right place to cause the installer to not set this unsupported video mode.

When you press "e" you get an opportunity to edit the script. The line you want to edit is the kernel line, the one that starts with "linux". Move the insert cursor to the end of that line, hit space, then type "nomodeset", which is the magic word that tells the kernel to NOT set a video mode, but to just use the text interface.
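Just to give a rough idea of what you are looking at, the edited kernel line ends up looking something like this (the path and the existing options vary by ProxMox version, so this is illustrative, not a copy of the real file; the only part you are adding is the nomodeset at the end):

    linux /boot/linux26 ro quiet nomodeset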

Next press control-x to terminate the editing. That little script will get executed, "nomodeset" will get used, and you will get the "Terminal UI" version of all the otherwise graphical (i.e., needs a mouse) entry screens, wanting the same info, and off you go: give the same answers and let it run. It will auto-restart, and if you aren't paying attention to that, you may end up back at the ProxMox installer screen. If that happens, it means you didn't remove the USB stick, so do that and reboot the machine. Now you're good.

You can now integrate this new machine and its ProxMox install into your cluster.
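For what it's worth, joining the new node to an existing cluster can be done from the new node's shell with pvecm (or from the web GUI under Datacenter > Cluster > Join Cluster); the IP here is just a placeholder for one of your existing cluster nodes:

    pvecm add 192.168.1.10    # run on the NEW node, pointing at a node already in the cluster
    pvecm status              # confirm the new node shows up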

Alternate approach: rather than having to do the "edit" to add "nomodeset" every time, you can modify the original script on your USB stick. The file is "grub.cfg" and you enter nomodeset in the same place. This is on line 69 in /boot/grub/grub.cfg.
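If you want to make that edit from another Linux machine, something along these lines should do it, assuming the stick was written in a way that leaves that filesystem writable; the device and partition names here are only examples and will differ on your system:

    mount /dev/sdb2 /mnt              # the installer stick's boot partition (example name)
    nano /mnt/boot/grub/grub.cfg      # append nomodeset to the kernel line(s)
    umount /mnt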

I found this explanation on a Dell website, with no pictures, but apparently their machines have the exact same problem with some older models. I am not a Dell user, so I can't attest to the success of this on Dell hardware, but it sounds right. I've lost that URL, sorry.

You can find plenty of discussion that is just "me too" noise, or partial suggestions that aren't helpful, or non-answers like "have you tried this?", and really no answers anywhere, except for some really peculiar magic keystrokes that sound like unique-to-someone accidents. My solution above will work for all. 

So far I have used this on 5 DL380 Gen 7 machines and 1 DL360 Gen 6; my other Gen 6 seems to have some other hardware problem that is preventing me from even booting the USB stick; it might be that this server is finally dead.

Here's the Dell link where I found the right words:

Dell link for nomodeset