One of the main bottlenecks when running high-performance virtualization systems is the hard disk. You could of course switch your entire system over to SSDs, but that is costly and you’ll end up with either a lot less storage or a massive RAID array.

Modern filesystems like ZFS have solved this problem by allowing for ‘hybrid’ setups: traditional hard disks provide the persistent storage, while SSDs sit in front of them to cache read and write queries. This way you get the best of both worlds: near-SSD performance with the storage size of a traditional drive.

We use Proxmox to power our VPS offers, which uses LVM and ext4 for its filesystem and has no SSD caching method built in. Facebook seems to have had a similar issue, so they created FlashCache: a kernel module that lets you add a block-level caching partition in front of any other partition on your system, resulting in an amazing speedup.

After having spent a night or two on getting this to work on Proxmox 2, I decided to write a small tutorial here. I’d also like to thank @toxicnaan for his l33t hax0r skillz.


Updating your system

Get your system up to date and make sure you’ve got the latest Kernel.

apt-get update
apt-get dist-upgrade
apt-get install dkms build-essential git


Kernel Headers

You will now need to install the Kernel Headers for your Kernel so that you can compile the module. Make sure you install the correct version of the headers. These need to be the same as the Kernel you’re running.

uname -a # to get your kernel version
apt-get install pve-headers-2.6.32-17-pve # to install the headers for version 2.6.32-17
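If you’d rather not type the version by hand, you can build the package name from the running kernel instead. This is a small convenience sketch: it assumes a Proxmox kernel whose header package is simply `pve-headers-` followed by the output of `uname -r` (as in the example above).

```shell
# Derive the header package name from the running kernel so the
# versions always match (assumes a pve kernel, e.g. 2.6.32-17-pve).
pkg="pve-headers-$(uname -r)"
echo "$pkg"

# then install it:
# apt-get install "$pkg"
```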


Get FlashCache

Now that we’ve got the Kernel tools, we can get FlashCache and build it.

git clone git://
cd flashcache/

make -f Makefile.dkms boot_conf
make install


Load FlashCache

Next we need to load FlashCache into our running Kernel and make sure it’s loaded upon boot.

modprobe flashcache
echo flashcache >> /etc/modules


Re-purposing the SSD drives

Now it’s time to find a new use for our SSD drives, namely as cache. You can skip this step if your server doesn’t have the SSD drives mounted as /var/lib/vz.

umount /var/lib/vz
vgremove pve
pvremove /dev/md2


Re-purposing the 2 HDD drives

Now let’s prepare the 2 HDD drives to be used as the storage for /var/lib/vz.

umount /data
pvcreate /dev/md0
vgcreate pve /dev/md0
lvcreate -l 100%VG -n storage pve
mkfs.ext4 /dev/mapper/pve-storage


Creating the FlashCache partition

Now let’s create the FlashCache partition on the SSD drives & mount it.

flashcache_create -p back pvec-storage /dev/md2 /dev/mapper/pve-storage
mount /dev/mapper/pvec-storage /var/lib/vz
echo 1 > /proc/sys/dev/flashcache/md2+pve-storage/fast_remove


Editing /etc/fstab

Next step is to edit /etc/fstab and remove the /data and /var/lib/vz mounts. If you forget to do this (as I did for quite a while), your server will struggle to boot on its own, and you’ll end up with the datacenter techs thinking you’re an idiot 🙂

vi /etc/fstab
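For reference, the entries to remove will look something like the lines below. The device names here are hypothetical examples; delete whatever your old /data and /var/lib/vz lines actually say.

```
# /etc/fstab -- delete (or comment out) the old mounts, e.g.:
#/dev/pve/data   /var/lib/vz   ext4   defaults   0   2
#/dev/md0        /data         ext4   defaults   0   2
```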


The init.d file

This next step is important. We need to add an init.d file to do some operations, like mounting the filesystem and cleaning it up. It also unmounts the drive before shutdown; if you skip this, your kernel will freeze on shutdown. Create /etc/init.d/flashcache with the contents below, and make sure you edit it according to your needs.


#!/bin/bash
### BEGIN INIT INFO
# Provides:          flashcache
# Required-Start:
# Required-Stop:     $remote_fs $network pvedaemon
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Flashcache SSD caching
# Description:       Flashcache SSD caching
### END INIT INFO

# Start or stop Flashcache

flashcache_start() {
    if df -h | grep /var/lib/vz > /dev/null; then
        echo "Flashcache already running"
    else
        flashcache_load /dev/md2
        mount /dev/mapper/pvec-storage /var/lib/vz
        #mount /dev/mapper/pve-backup /mnt/backup
        echo 1 > /proc/sys/dev/flashcache/md2+pve-storage/fast_remove
        echo "Flashcache started"
    fi
}

flashcache_stop() {
    if df -h | grep /var/lib/vz > /dev/null; then
        #umount /mnt/backup
        umount /var/lib/vz
        dmsetup remove pvec-storage
        echo "Flashcache stopped"
    else
        echo "Flashcache not running"
    fi
}

case "$1" in
    start)
        flashcache_start
        ;;
    stop)
        flashcache_stop
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac

exit 0


Enabling the init.d file

Now we need to make the file executable and make sure it’s run on boot.

chmod +x /etc/init.d/flashcache
update-rc.d flashcache defaults


Give it a spin

Right, that should do it. Reboot your machine and see if it comes back.


If all went well, your drive should be mounted with FlashCache in between.

root@vh43:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
none                   32G  256K   32G   1% /dev
/dev/md1               10G  1.3G  8.2G  14% /
tmpfs                  32G     0   32G   0% /lib/init/rw
tmpfs                  32G     0   32G   0% /dev/shm
/dev/fuse              30M   12K   30M   1% /etc/pve
/dev/mapper/pvec-storage
                      1.8T  196M  1.7T   1% /var/lib/vz

You can also see the statistics of FlashCache by running:

cat /proc/flashcache/md2+pve-storage/flashcache_stats
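The stats file is a list of key=value counters. Here is a quick sketch of turning those counters into a read hit rate; the sample line is made up for illustration, since the exact counters and values on your box will differ — on a live system you would pipe the real /proc file in instead of the `$stats` variable.

```shell
# Made-up sample in the key=value style of flashcache_stats.
stats="reads=120000 writes=34000 read_hits=96000 write_hits=20400"

# Split the pairs onto their own lines and compute the read hit rate.
echo "$stats" | tr ' ' '\n' | awk -F= '
    $1 == "reads"     { reads = $2 }
    $1 == "read_hits" { hits  = $2 }
    END { printf "read hit rate: %d%%\n", hits * 100 / reads }'
```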

That’s it! Your Proxmox system should now have its VMs on the FlashCache drive.

If you have any questions or feedback, just leave them below.

