I'm experimenting with various RAID configurations on a new Adaptec 6805 (not the 'E' model). Attached are 8x Hitachi 5400 RPM 2TB SATA 6Gb/s drives, using the miniSAS-to-SATA cables provided.
I created a RAID6 array comprising four disks, two from each connector. Then I tried expanding the array to include the remaining four disks.
My problem is that the RAID reconfiguration never seems to progress. After more than 24 hours the Adaptec Storage Manager still shows it at 0% completion. I'm not actively using the logical array, so I can't imagine that should be a significant factor.
Thanks for the responses. Just to clarify - there's no load on the drives, and very little data (~5GB). After about 35 hours it finally showed about 1% - so, I gave up and deleted the array.
My main concern really is: If it takes ~35 hours to do 1%, extrapolating that out means it'd be somewhere around 4-5 months to do a rebuild if a drive failed.
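Just to show the arithmetic behind that estimate - a quick sketch using the figures from the post (35 hours for ~1% completion), not anything controller-specific:

```python
# Rough extrapolation of expansion/rebuild time from observed progress.
# Observed: ~35 hours to reach ~1% completion (from the post above).
hours_observed = 35
percent_done = 1.0

total_hours = hours_observed * (100 / percent_done)  # 3500 hours at this rate
total_days = total_hours / 24                        # ~146 days
total_months = total_days / 30.44                    # ~4.8 months (average month length)

print(f"~{total_hours:.0f} h, i.e. ~{total_days:.0f} days, i.e. ~{total_months:.1f} months")
```

Which is where the "4-5 months" figure comes from, assuming progress stays linear (it may not - some controllers report progress unevenly).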
I'll test this out by unplugging a drive and seeing how long it takes to rebuild.
The 0% bit is a bit odd; if the array were completely empty I'd expect the reconfiguration to finish quite quickly, certainly under an hour. That said, if there's data on it then this kind of thing can take days, and those disks are big and slow, which won't help either. It's also indicative of how long rebuilds can take - it's why sysadmins don't like to use RAID 5 or RAID 6 with large, slow disks: rebuilds take ages and expose you to risk while they run.
If you have no data on the array, or you can quickly move it to another drive, then consider destroying the array and rebuilding it as you need. Personally I'd have bought 3TB drives and RAID 10'd them, but that's your call.
I have an Adaptec 5405Z with 20x 2TB 7200 RPM drives on a SAS backplane. I attempted a reconfig on it to go from 8 drives to 20. We use it for security video storage. Since the box was essentially brand new, I figured why not see how long it would take with ~2TB of data on the array. After about a week, with it only at 10%, I gave up: backed it up, wiped it and started over. I tried the same test with RAID 10 and RAID 5; both seemed to crawl, but would have eventually finished. Though in truth, I'm certain my DVR software was slowing down the reconfig by constantly writing to the disk. Did you factor additional load on the drives into the rebuild?
Obviously I have a few more drives than you, but for whatever reason, the reconfig seems to drag on for quite some time. When I tried to get Adaptec to answer the question, they weren't much help as the answer seemed to be "it depends on your configuration and how much data is on the array".
Adaptec 5805 (with BBU) here, attempting to upsize an 8-disk RAID5 array. First we replaced all 2TB disks with 3TB (nearline SAS) drives; every rebuild (the array degraded every time we replaced a disk, of course) took about 3 days.
Now that all disks are replaced, we've started the upsizing from the ~13TB LUN to the new ~19TB. Well, after a week we have completed ~10% of this task.
This seems long - I have 12x 750GB on a 5805Z (BBU), RAID 6, home server. When I lost one drive, it took ~4 hours to rebuild. During that rebuild, another drive died (that's why I would NEVER use RAID 5 - disks always fail under heavy load - like a rebuild / OCE). That one also took about 4 hours.
I replaced four of the 750s with 3TB drives, and created a 2nd RAID6 array on the additional space (i.e. 4x ~2TB). Just last week I replaced another 750GB drive with a 3TB and expanded the second array (i.e. to 5x ~2TB) - it took just under a week (from 4TB->6TB).
So you are upsizing the entire array at once? Mine seemed to be going at about 2TB added per week. Yours would extrapolate to 3 * (2TB/week), or 3 weeks total - roughly 33% per week - but I don't know if the math scales like that. It would have been much faster to just define a second array in the additional space...
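For what it's worth, here's that rate-based estimate spelled out - a sketch that assumes the expansion proceeds at a fixed rate of capacity added per week, which is only a guess about how these controllers behave:

```python
# Hypothetical rate-based estimate: total time = capacity added / observed rate.
# Observed on my array: expansion progressed at roughly 2 TB of added capacity per week.
observed_rate_tb_per_week = 2.0
capacity_added_tb = 19 - 13   # the other poster's ~13TB LUN growing to ~19TB

weeks = capacity_added_tb / observed_rate_tb_per_week
percent_per_week = 100 / weeks

print(f"~{weeks:.0f} weeks total, i.e. ~{percent_per_week:.0f}% per week")
```

That gives the 3 weeks / 33% per week figure above; the reported ~10% in the first week suggests the rate doesn't transfer directly between setups.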