Iomega ix2-200: Monitoring RAID resync status

When the Dashboard webapp is showing ‘verifying data protection configuration’ you can get a more detailed status from the box if you ssh into it and run ‘cat /proc/mdstat’ – you’ll see something like this:

Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 sda2[0] sdb2[1]
      974722192 blocks super 1.0 [2/2] [UU]
      [====>................]  resync = 21.3% (208528384/974722192) finish=340.8min speed=37457K/sec

md0 : active raid1 sda1[0] sdb1[1]
      2040128 blocks [2/2] [UU]

unused devices: <none>
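
If you want to keep an eye on progress without re-running the command by hand, a quick loop on the box works too (a minimal sketch – watch may not be included in the box’s BusyBox build, in which case the sleep loop does the same job):

# refresh the resync status every 60 seconds
watch -n 60 cat /proc/mdstat

# or, if watch isn't available:
while true; do cat /proc/mdstat; sleep 60; done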

Iomega ix2-200: High disk activity and low throughput from clients

My ix2-200 box has been doing a lot of disk thrashing for a while each time I turn it on. I thought at first that this was just some disk indexing going on, but more recently it seemed to happen every time I powered it on.

SSH’ing into the box and taking a look at some log files, I found these messages repeating every couple of minutes in /var/log/soho.log:

2012/12/31 12:48:02.764433: executord[892.40d6e2a0]: (1324) WARNING: Restarting 'mt-daapd' due to excess memory usage (198598656 used, 67108864 allowed)
2012/12/31 12:48:02.962593: executord[892.40d6e2a0]: (1245) DIAGNOSTIC: restarting process 'mt-daapd'.
2012/12/31 12:48:58.426400: executord[892.40d6e2a0]: (1528) DIAGNOSTIC: Started mt-daapd[14707]
2012/12/31 12:48:58.538179: executord[892.40d6e2a0]: (1371) DIAGNOSTIC: Signal received with no commands
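
To get a sense of how often this was happening, grepping the log for the restart warning is enough (plain grep and tail, nothing Iomega-specific):

# count the restarts, then show the most recent ones
grep -c 'excess memory usage' /var/log/soho.log
grep 'excess memory usage' /var/log/soho.log | tail -n 5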

Searching for mt-daapd I found this post that described similar behavior. I followed the steps to edit the daap.conf file (mine was in a different location: /mnt/soho_storage/media) and removed all the filetypes from the extensions setting except .mp3, .m4a and .m4p.
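
For reference, the edit boils down to the extensions setting in daap.conf. The surrounding options (and the exact key/value syntax) vary by mt-daapd version and firmware, so take this as a sketch of the one line I changed rather than a complete config:

# only index these audio extensions; everything else
# (including the big .mp4 movies) is skipped
extensions = .mp3,.m4a,.m4p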

That seems to have done it. Since the service restarted, the warning hasn’t turned up in the log file for the last hour or so that I’ve been watching. This makes sense if the indexer has issues with large files, since I primarily use this box to keep copies of our home movies, all of which are in .mp4 format, and most of the files are large, ~1 GB each.
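
If you want to double-check on your own box, leaving the log tailing in another terminal makes it obvious whether the restarts have actually stopped:

# follow the log and show only mt-daapd related lines
tail -f /var/log/soho.log | grep mt-daapd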

The good news is that it also seems to have given back some performance – file transfers are now pretty snappy, whereas before they were dragging unnecessarily slowly.