Today we encountered a situation where a Proxmox system’s KVM virtual machine refused to delete after the storage volume that its virtual HDD resided on was lost; trying to delete the VM from the web GUI resulted in the following error:
TASK ERROR: storage ‘proxmoxHDD’ does not exists
Attempting to delete it from the command line using:
qm destroy [VM ID]
storage ‘proxmoxHDD’ does not exists
Fortunately, there’s a way around this: the VM is defined entirely by its config file on the Proxmox host. Move or erase the [VM ID].conf file and when you refresh your web GUI the VM should be gone.
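As a concrete sketch: on a default Proxmox VE install the per-VM KVM configs are [VM ID].conf files under /etc/pve/qemu-server/ (path assumed from a standard setup; the VM ID 100 below is just an example). Moving the file aside is safer than deleting it outright:

```shell
# Per-VM KVM configs live in /etc/pve/qemu-server/ on a default install.
# Move the stale config out of the way rather than deleting it, so it
# can be restored if needed; here assuming the orphaned VM has ID 100:
mv /etc/pve/qemu-server/100.conf /root/100.conf.bak
```

After this, a refresh of the web GUI should show the VM gone.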
Sometimes you may wish to add files to an existing backup; if you issue a command like:
tar -cvf /dev/st0 backupfiles
…and the tape is not already positioned at the end of the previous archive, you will overwrite any data from that position onward. Use the eom operation to move the tape to the end of the already-recorded files, like so:
mt -f /dev/[path-to-tape] eom
mt -f /dev/st0 eom
Now you can use tar to add a file to the tape without over-writing the existing data.
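Putting the two steps together (a sketch; the directory name morebackupfiles is just a placeholder). One caveat worth noting: on Linux the plain /dev/st0 device automatically rewinds when it is closed, so the non-rewinding node /dev/nst0 is what actually preserves the position set by eom between the two commands:

```shell
# Position the tape at the end of the last recorded archive,
# then append a new archive after it. /dev/nst0 is the
# non-rewinding device node; /dev/st0 would rewind on close
# and defeat the eom positioning.
mt -f /dev/nst0 eom
tar -cvf /dev/nst0 morebackupfiles
```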
If you have compressible data you may save space on your tapes by using compression; this comes at a cost of CPU cycles to do the compressing, which is often a worthwhile tradeoff for a long-term backup. Doing this is quite simple: add the -z switch to your tar command.
tar -cvzf /dev/[tape-device] [folder or files to back up]
tar -cvzf /dev/st0 /opt/movies
For some file types – e.g. movies, MP3s and compressed picture files – you probably won’t see a great deal of space saved, though if it is enough to let you use one tape instead of two, it may be worth it even so. Text and other file types may compress more readily, so you may see more of a saving; it will vary greatly depending on your dataset. Try it and see!
Sometimes you may see people using the -j switch instead; this uses the bzip2 algorithm rather than gzip (the -z switch). You will probably find that gzip is slightly better supported, while bzip2 sometimes provides slightly better compression but takes longer. If you are chasing better compression, it may be worth replacing -z with -j to see if it helps.
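To see whether bzip2 buys you anything on your own data, you can run both against the same input and compare the results (a sketch; the archive names and the choice of /var/log as sample data are just placeholders – substitute your own backup set):

```shell
DATA=/var/log                                     # any reasonably compressible directory
tar -czf gzip-test.tar.gz  "$DATA" 2>/dev/null    # gzip (-z)
tar -cjf bzip2-test.tar.bz2 "$DATA" 2>/dev/null   # bzip2 (-j)
ls -l gzip-test.tar.gz bzip2-test.tar.bz2         # compare the archive sizes
```

Whichever wins on size and time for your dataset is the one to use on the tape itself.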
A new feature with ESXi 5.1 is the ability to check SSD health from the command line. Once you have SSH’d into the ESXi box, you can check the drive health with the following command:
esxcli storage core device smart get -d [drive]
…where [drive] takes the format t10.ATA?????????. You can find the right drive name with the following:
ls -l /dev/disks/
This will return output something like the following:
Here I can use the t10.xxx names without the :1 at the end to see the two SSDs available, copying and pasting the entire name as [drive]. The command output should look like:
~ # esxcli storage core device smart get -d t10.ATA_____M42DCT064M4SSD2__________________________000000001147032121AB
Parameter                     Value  Threshold  Worst
----------------------------  -----  ---------  -----
Health Status                 OK     N/A        N/A
Media Wearout Indicator       N/A    N/A        N/A
Write Error Count             N/A    N/A        N/A
Read Error Count              100    50         100
Power-on Hours                100    1          100
Power Cycle Count             100    1          100
Reallocated Sector Count      100    10         100
Raw Read Error Rate           100    50         100
Drive Temperature             100    0          100
Driver Rated Max Temperature  N/A    N/A        N/A
Write Sectors TOT Count       100    1          100
Read Sectors TOT Count        N/A    N/A        N/A
Initial Bad Block Count       100    50         100
One figure to keep an eye on is the Reallocated Sector Count: this normalized value starts at around 100 and diminishes as the SSD replaces bad sectors with spares from its reserve pool. The above statistics are updated every 30 minutes. As a point of interest, in this case ESXi isn’t picking up the data correctly: the SSD doesn’t actually have exactly 100 power-on hours and a power cycle count of 100.
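If you just want to spot-check the important rows across every drive rather than reading the full table each time, a small loop over the whole-disk names works (a sketch; it assumes the t10. naming seen above, filters out the :1-style partition entries, and the grep patterns are illustrative):

```shell
# For each whole-disk t10. entry (skipping partition suffixes like ":1"),
# print just the health and reallocated-sector rows from the SMART output.
for d in $(ls /dev/disks/ | grep '^t10\.' | grep -v ':'); do
    echo "== $d =="
    esxcli storage core device smart get -d "$d" \
        | grep -Ei 'Health Status|Reallocated'
done
```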
Assuming it works for your SSDs, this is quite a useful tool – knowing when a drive is likely to fail can give you the opportunity for early replacement and less downtime due to unexpected failures.
Here are a few shots of one of our recent storage server builds:
Continue reading “Customer Build: Luna II”
Here is the follow-up to our initial review of the Asus Z9PE-D16 motherboard!
Continue reading “Asus Z9PE-D16 Review: Part Two”
Up for review today we have one of Asus’ dual-socket-2011 server motherboards – the Z9PE-D16. Hit the break to find out what it’s all about and why you might care about it even if you don’t need to run two CPUs…
Continue reading “Asus Z9PE-D16 review: Part One”
WD have announced a new line of drives specifically designed for 24/7 home and small business network-attached-storage use, dubbed the Red Series. Initial release information suggests lower power consumption across the board as well as improved performance and modifications to make them more suitable for constant use. The vibration compensation is quite interesting – 3D Active Balance, to use their term – and there is also mention of intelligent error recovery to prevent the drive dropping from a RAID controller.
You can read more about the drives here. Our testing unit is in-house and we should have some numbers soon!