I never use RAID-5, so I'd never noticed this before:
-f, --force
      Insist that mdadm accept the geometry and layout specified without question. Normally mdadm will not allow creation of an array with only one device, and ...
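For illustration, the sort of thing this makes possible; a minimal sketch with made-up device names, not commands from the post:

    # Normally refused: a RAID-1 with only one member. --force builds it anyway,
    # which is handy when the mirror disk will be added later.
    mdadm --create /dev/md0 --level=1 --force --raid-devices=1 /dev/sdb1

    # A degraded RAID-5 (one member deliberately missing) is another case
    # where --force may be needed, depending on the mdadm version:
    mdadm --create /dev/md1 --level=5 --force --raid-devices=3 /dev/sdb2 /dev/sdc2 missing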
If you want to performance-test configurations, why not define, e.g., a 2GB or 10GB partition at the start of each disk, then build those into a RAID set? The rebuild will be much faster, and you'll get to do your RAID-set layout tests much more quickly. (Of course if you want to fill the RAID set with 400GB of data this won't help -- but filling a disk with 400GB of data takes Some Time (tm) too. And of course the start of the disk (the outer tracks) is faster to access than the end of the disk -- but if you care about that, try partitions defined at different points on the disk.)
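A rough sketch of that approach (device names and sizes here are only placeholders, and the mklabel step wipes the existing partition table):

    # Create a small ~10GB partition at the start of each test disk.
    parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 10GiB
    parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 10GiB
    parted -s /dev/sdd mklabel msdos mkpart primary 1MiB 10GiB

    # Build a throwaway RAID-5 from those partitions; the initial resync is
    # proportionally shorter, and you can vary --chunk etc. between runs.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 /dev/sd[bcd]1
    cat /proc/mdstat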
Incidentally, ((460GB * 1024) / 45MB/s) / 3600 = just under 3 hours. So the absolute best case for writing out an entire disk is about 3 hours. Thus 6 hours to write it all seems a bit long, but not unbelievable. As I said, there's a reason that I do my RAID sets in smaller chunks than "whole disk".
Yeah, I'd done my math wrong. My original 100 MB/s number was from a 4-stripe LV. So I probably don't get anywhere near 45 MB/s per disk; more like 25 MB/s instead.
I was basing the 45MB/s figure on the Blk_read/s and Blk_wrtn/s figures in your output (about 45,000 per second per disk). And 45MB/s is definitely the right order of magnitude for a modern disk platter, which was part of why I took that figure without much extra consideration.
However the iostat man page suggests that the blocks reported are actually sectors in Linux 2.4 kernels and later, and thus are 512 bytes each. Given that figure -- which works out to 22.5MB/s -- the same calculation as I used previously translates pretty directly into 6 hours to resync.
Although I'd be wondering why you're getting only 22.5MB/s off your disks; it seems a bit low for a modern disk that is SCSI connected or even SATA connected.

Ewen
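Spelling that calculation out (assuming the ~45,000 blocks/s iostat figure and 512-byte blocks; these are just the numbers from the comment above, not new measurements):

    # 45,000 blocks/s * 512 bytes/block = 22,500 KB/s, i.e. ~22.5 MB/s per disk
    echo "45000 * 512 / 1024" | bc

    # (460 GB * 1024 MB/GB) / 22.5 MB/s / 3600 s/h = ~5.8 hours, so ~6 hours to resync
    echo "scale=1; 460 * 1024 / 22.5 / 3600" | bc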
Don't use RAID 5. A disk will fail, and while the array rebuilds a second disk will fail too.
If you have more than 4-5 disks, use RAID6. If you have just 4 disks, stick with RAID10.
I also second the suggestion of splitting the disks up into smaller chunks. Beware though that the Linux SATA (or SCSI, I forget) layer only likes ~15-16 partitions per disk. I started making each of my RAID chunks 80-100GB. That way when disks are 1TB and up, I don't have to merge the old partitions.

- ask
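A hypothetical sketch of that chunked layout (six disks with placeholder names, ~100GB partitions), with the per-chunk arrays pooled back into one volume group via LVM:

    # One RAID-6 per "slice", using the matching partition on each disk.
    mdadm --create /dev/md10 --level=6 --raid-devices=6 /dev/sd[b-g]1
    mdadm --create /dev/md11 --level=6 --raid-devices=6 /dev/sd[b-g]2

    # Merge the slices with LVM so they present as a single pool of space.
    pvcreate /dev/md10 /dev/md11
    vgcreate data /dev/md10 /dev/md11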
6 hours starts to make sense. :)