Installing VirtualBox Guest Additions for a CentOS Guest

The steps to prepare a CentOS guest for installing the VirtualBox Guest Additions are outlined here, but since that guide references a few other prerequisite steps, here's the full list of steps I took to get the dependencies installed before installing the Guest Additions:

Install yum-priorities (http://wiki.centos.org/PackageManagement/Yum/Priorities):

yum install yum-priorities
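
Installing the plugin alone doesn't change anything; per the CentOS wiki, each repo section in the files under /etc/yum.repos.d/ also needs a priority= line (1-99, lower wins). A sketch of the idea, with the actual values being my own choice:

[base]
...
priority=1

[updates]
...
priority=1

Protecting the base repos with priority 1 is what keeps packages from third-party repos like RPMForge (added next) from replacing core packages.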

Install RPMForge (http://wiki.centos.org/AdditionalResources/Repositories/RPMForge):

wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt

rpm -i rpmforge-release-0.5.2-2.el6.rf.*.rpm
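
Before pulling anything from it, it's worth checking that the new repo actually shows up:

yum repolist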

Install dkms:

yum --enablerepo rpmforge install dkms

Install the development tools and the kernel development package:

yum groupinstall "Development Tools"
yum install kernel-devel
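
With the dependencies in place, the Guest Additions themselves can be installed. A minimal sketch, assuming the Guest Additions CD image has been inserted from the VirtualBox Devices menu (the mount point is my choice):

mkdir -p /media/cdrom
mount /dev/cdrom /media/cdrom
sh /media/cdrom/VBoxLinuxAdditions.run

A reboot afterwards picks up the newly built kernel modules.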

Fedora 18 & 19 – terrible performance on VirtualBox

I’m not sure what the deal is with Fedora 18 & 19, but the performance running under VirtualBox on Windows, regardless of how much memory you throw at the VM, really is unbearably slow. I had 18 installed for a while but hadn’t used it for several months; I just fired it up again and it’s unusable. I downloaded 19 and started the install, and it took a couple of hours. I might have something else going on on my laptop that’s slowing down the performance, but as it is, it’s unusable.

I was looking for a RHEL derivative other than Oracle Linux, since I wasn’t prepared to wait to download its massive disk images, which are several GB (really?!). I just realized, though, that CentOS is RHEL compatible. Very cool. I’ll download it and see if it’s more usable under VirtualBox.

Rebuilding a software-controlled RAID on Ubuntu

One of the RAID arrays on my server decided that one of its drives was bad and dropped it out of the array. I have two software-defined RAID 1 mirrored arrays: /dev/md0, which contains my main drives, and a smaller array, /dev/md1.

This is what mdadm was showing after one of the drives was dropped out:

kevin@linuxsvr:~$ sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat May 16 18:38:51 2009
     Raid Level : raid1
     Array Size : 1485888 (1451.31 MiB 1521.55 MB)
  Used Dev Size : 1485888 (1451.31 MiB 1521.55 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Mar 5 14:10:24 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 44b55b61:84e84f5f:5c7760e0:2ac997c6
         Events : 0.90560

    Number   Major   Minor   RaidDevice   State
       0       8      21         0        active sync   /dev/sdb5
       1       0       0         1        removed
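
For a quicker view than --detail, /proc/mdstat shows the same thing; a dropped member appears as an underscore in the [UU] status flags:

cat /proc/mdstat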

I couldn’t find any messages in syslog for what was wrong with my drive, and the SMART status for both drives was still good. I did have to power off the server without a clean shutdown in order to move it, so this was probably self-inflicted…

On one of my arrays, adding the missing drive back caused it to be added as a spare; it re-synced and then everything was back to normal. On the other, it wouldn’t add back:

kevin@linuxsvr:~$ sudo mdadm --add /dev/md1 /dev/sdc5
mdadm: /dev/sdc5 reports being an active member for /dev/md1, but a --re-add fails.
mdadm: not performing --add as that would convert /dev/sdc5 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdc5" first.

I found a few posts describing how to fail the drive, remove it, and then add it back, but this still gave the same error:

sudo mdadm --manage /dev/md1 --fail /dev/sdc5
sudo mdadm --manage /dev/md1 --remove /dev/sdc5
sudo mdadm --manage /dev/md1 --add /dev/sdc5

I don’t know exactly what the recommendation in the error message did, but using the --zero-superblock option and then adding the drive back again did the job. It re-synced successfully and everything’s back to normal.
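
For reference, the sequence that finally worked (as I understand it, --zero-superblock wipes the md metadata from the partition, so mdadm treats it as a fresh device):

sudo mdadm --zero-superblock /dev/sdc5
sudo mdadm --manage /dev/md1 --add /dev/sdc5
watch cat /proc/mdstat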

This post on StackExchange has some good info and suggestions. This one too.