
Refurbishing the good ol’ development lab

January 27, 2018

Ever visited r/homelab on Reddit? Folks there build computer labs and keep them in their basements.

Well, I do have one myself.

I started building up my DevOps skills a couple of years ago, and to have a sandbox to play with I began buying networking and virtualization hardware. As the pile of gear grew, it turned out that packing it into an industrial but portable rack cabinet would keep the space cleaner and look more ‘professional’. And here it is, my dev lab aka “The Garage Cloud”.


I’ve recently moved back to my home country to start working on a bunch of new ideas, and I needed a sandbox again. This time to host a number of applications you can find in any IT company these days: GitLab, Jenkins, Redmine, Grafana and the ELK stack, to name a few. But also to host an entire virtualized environment where projects are automagically tested by continuous integration pipelines.


Why not cloud?

Money! I already had the hardware, and I needed it to run 24/7 (monitoring, logging, data access, etc.). It’s not production grade though, so even if it’s down for a couple of hours, the world isn’t going to end.

Apart from raw CPU power, my lab needs plenty of RAM to run virtual machines and lots of reasonably fast storage for testing. If I moved all of that to a public cloud, I would have had to start spending quite a lot each month on a production-grade environment I didn’t need.

Well, sooner or later I will move to the cloud, but for the time being this setup just fits my needs.



The only real issue I had was my old primary hypervisor, a prehistoric server with two AMD Opteron 6128 CPUs, an Asus KGPE-D16 motherboard and 64GB of DDR3 ECC RAM. The issue? Lack of support for the SSE 4.2 instruction set, which is required by some software I use.
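If you’re not sure whether a box supports it, a quick check on Linux (it just reads the CPU flags, nothing fancy):

grep -o -m1 sse4_2 /proc/cpuinfo || echo "no SSE 4.2 here"

On the old Opterons this prints the sad message; on any reasonably modern Xeon it prints sse4_2.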

So, a few searches on eBay and here it is: decommissioned hardware from some cloud provider, I presume. I ordered four server nodes to build a new Proxmox cluster. Here are some pictures of the build.

The hardware is the following, from the top:

  • Some old rack console unit
  • Dell 8 port KVM
  • Switch: HP ProCurve 1810G
  • Router: 1U Supermicro server: Intel Atom 330, 4GB RAM, 40GB SSD, Dual GbE NIC
  • Backup: Raspberry Pi with a 1.5TB USB HDD
  • Media: HP Microserver N54L, 8GB DDR3 ECC, 4x 2TB
  • Hypervisors: four 1U whitelabel servers: Xeon E3-1230 V2, 16GB DDR3 ECC, Dual GbE NIC
  • Storage: custom 4U server: Xeon X3430, 16GB DDR3 ECC, Intel S3420 mobo, Intel 520 240GB SSD, 8x 500GB HDD, 3x 2TB HDD, Quad GbE NIC
  • 1600VA UPS



I’ve been running a 4-node Proxmox 5.x cluster so far; it’s been stable as a rock, and the ability to manage the fleet of virtual machines via a web browser is fabulous. I’m using both KVM virtual machines and LXC containers, both running Docker.
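For the curious, you don’t even need the web UI to spin guests up; on any cluster node it boils down to something like this (the IDs, names, the local-lvm storage and the Debian template below are just placeholders, not my exact setup):

# KVM virtual machine: 4 cores, 4GB RAM, 32GB disk, bridged networking
qm create 101 --name ci-runner --cores 4 --memory 4096 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --ostype l26

# LXC container (e.g. for Docker workloads): 2 cores, 2GB RAM, 8GB rootfs
pct create 201 local:vztmpl/debian-9.0-standard_9.3-1_amd64.tar.gz \
  --hostname docker01 --cores 2 --memory 2048 \
  --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp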

The entire cluster has 32 virtual CPUs and 64GB of RAM available. Each node is connected to the switch via a bonded 2x 1GbE link in LACP (802.3ad) mode.
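On the Proxmox side the bond is plain Debian ifupdown + ifenslave; a minimal sketch of /etc/network/interfaces (interface names and addresses are made up, and the switch ports obviously need a matching LACP trunk):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    gateway 10.0.0.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0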

The servers are 1U, and 1U usually means loud as f**k. Well, these aren’t that loud, but still too loud to keep in an office room.



I’m a fan of the FreeNAS distro; it’s been serving me well for years and has out-of-the-box features that would cost you a great deal of money if bought from an enterprise vendor. I’m talking about the ZFS file system and its built-in ability to use SSDs as cache drives, fast LZ4 compression, and snapshots.
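Those features are a few shell commands away (FreeNAS normally drives all of this from its web UI, the commands below are just to show what happens under the hood; pool name taken from my setup):

zfs set compression=lz4 tank1          # fast LZ4 compression on the whole pool
zfs snapshot -r tank1@before-upgrade   # recursive snapshot, takes a second
zfs list -t snapshot                   # see what snapshots exist
zfs rollback tank1@before-upgrade      # the undo button, basically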

FreeNAS 11.0-U4 is what I’m currently running there. Details:

  • Stripe of mirrors (RAID 10 equivalent) with eight 500GB drives, used as the primary storage for virtual machines
  • RAIDZ1 (RAID5 equivalent) with three 2TB drives for ownCloud data and additional storage space
  • Intel 520 240GB SSD partitioned into L2ARC and ZIL for both pools
root@storage01:~ # zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
freenas-boot   739M  3.09G      0      0  1.25K     78
  da0p2      739M  3.09G      0      0  1.25K     78
----------  -----  -----  -----  -----  -----  -----
tank1        306G  1.51T     10  1.01K   667K  32.7M
  mirror    76.8G   387G      2    177   168K  7.21M
    ada4        -      -      1     98   117K  7.22M
    ada5        -      -      1     98   117K  7.22M
  mirror    76.6G   387G      2    176   167K  7.15M
    ada6        -      -      1     97   117K  7.15M
    ada7        -      -      1     97   117K  7.15M
  mirror    76.5G   388G      2    177   166K  7.19M
    ada8        -      -      1     98   116K  7.19M
    ada9        -      -      1     98   117K  7.19M
  mirror    76.2G   388G      2    173   166K  7.04M
    ada10       -      -      1     95   116K  7.04M
    ada11       -      -      1     95   117K  7.04M
logs            -      -      -      -      -      -
  ada0p2    28.4M  7.91G      0    325      1  4.16M
cache           -      -      -      -      -      -
  ada0p3    18.3G   110G      8     58   463K  5.68M
----------  -----  -----  -----  -----  -----  -----
tank2        808G  4.65T      3      2   350K   271K
  raidz1     808G  4.65T      3      1   350K   141K
    ada1        -      -      1      0   162K  71.1K
    ada2        -      -      1      0   162K  71.1K
    ada3        -      -      1      0   163K  71.1K
logs            -      -      -      -      -      -
  ada0p4     256K  7.94G      0      1      0   129K
cache           -      -      -      -      -      -
  ada0p5    47.6G   402M      0      1  2.70K   189K
----------  -----  -----  -----  -----  -----  -----
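For reference, a layout like that boils down to roughly the following on the CLI (a sketch only; FreeNAS created the pools and the SSD partitions for L2ARC/ZIL from the web UI, and the device names are taken from the output above):

zpool create tank1 \
  mirror ada4 ada5  mirror ada6 ada7 \
  mirror ada8 ada9  mirror ada10 ada11
zpool add tank1 log ada0p2 cache ada0p3

zpool create tank2 raidz ada1 ada2 ada3
zpool add tank2 log ada0p4 cache ada0p5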

The storage server is connected to the hypervisors via a bonded 4x 1GbE link in LACP mode. I’m using NFS to share virtual machine disks with Proxmox; performance is okay-ish and it just works.
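On the Proxmox side that’s a single entry in /etc/pve/storage.cfg (the storage name, server address and export path below are placeholders standing in for my setup):

nfs: storage01-nfs
    server 10.0.0.10
    export /mnt/tank1/proxmox
    content images,rootdir
    options vers=3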

I had bad luck with release 11.1, which has a critical memory leak bug related to ZFS, reported here: https://redmine.ixsystems.com/issues/27422. So if you’re going to use FreeNAS, either go for 11.0-U4 or get the latest 11.1-U1, which presumably has this bug fixed.



My additional investment was around €1000 for the new servers and a couple of other items. Now I have an environment running 24/7 with 32 virtual CPU cores (four Xeon servers, 8 threads each), 64GB of DDR3 RAM (extendable to 128GB) and around 5TB of hybrid storage, with 2TB on a RAID10 equivalent reaching around 4000 IOPS in random IO tests.

A nice thing about this setup is that under normal daily load it draws less than 500W of electric power, which is OK.

Correct me if I’m wrong, but renting an equivalent of that capacity in a datacenter or a cloud would cost a lot more than the electricity (roughly 0.5kW * 24h * 30 days) plus 30 euro for a 100/60 Mbit FTTH link. Sure, it isn’t production grade and has neither a redundant power supply nor a dual uplink, but that’s a tradeoff I’m fine with for the time being.
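Quick back-of-the-envelope math, assuming around 0.15 EUR per kWh (that rate is my guess, plug in your own):

$ echo "0.5 * 24 * 30" | bc      # kWh per month at ~0.5 kW average draw
360.0
$ echo "360 * 0.15 + 30" | bc    # electricity plus the 30 euro FTTH link
84.00

That’s roughly 85 euro a month all-in.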

So, not bad, IMHO.


Got questions? Drop me a message.
