I had a three-disk RAID 0 array and ran the following to add a fourth disk:

$ sudo mdadm --manage /dev/md127 --add /dev/xvdi

Each disk is a 1TB EC2 volume. The array took about 40 hours to reshape. About 1 hour in, reshaping stopped and the volume became inaccessible. I restarted the machine, and reshaping continued and then finished seemingly successfully, but the array level is now reported as RAID 4 and the usable capacity hasn't changed. mdadm --detail /dev/md127 now reports the following:

/dev/md127:

My aim here is to have a 4TB RAID 0 array. I don't need redundancy, since I back up by taking volume snapshots in AWS. How do I switch to RAID 0 without losing any data, taking into account the fact that the state is clean, degraded?

I know this is old, but these steps could be helpful to folks.

Initial setup:

$ sudo mdadm --create --verbose /dev/md0 --level=0 --name=DB_RAID2 --raid-devices=3 /dev/xvdh /dev/xvdi /dev/xvdj

Excerpt from the resulting array details:

Name : temp:DB_RAID2 (local to host temp)
UUID : e8780813:5adbe875:ffb0ab8a:05f1352d
Array Size : 31432704 (29.98 GiB 32.19 GB)

Add a disk to RAID-0 in one step (doesn't work):

$ sudo mdadm --grow /dev/md0 --raid-devices=4 --add /dev/xvdk

Step-1: Convert to RAID-4:

$ sudo mdadm --grow --level 4 /dev/md0
mdadm: level of /dev/md0 changed to raid4

/proc/mdstat now shows:

md0 : active raid4 xvdj xvdi xvdh
31432704 blocks super 1.2 level 4, 512k chunk, algorithm 5

Step-2: Add a disk:

$ sudo mdadm --manage /dev/md0 --add /dev/xvdk

Step-3: Convert back to RAID-0:

$ sudo mdadm --grow --level 0 --raid-devices=4 /dev/md0

Wait till it reshapes:

$ cat /proc/mdstat

Why speed up Linux software RAID rebuilding and re-syncing?

Recently, I built a small NAS server running Linux for one of my clients, with 5 x 2TB disks in a RAID 6 configuration, as an all-in-one backup server for Linux, Mac OS X, and Windows XP/Vista/7/10 client computers. When I typed cat /proc/mdstat, it reported that md0 was active and recovery was in progress. The recovery speed was around 4000K/sec and would take approximately 22 hours to complete.

A note about lazy initialization and the ext4 file system

When creating an ext4 file system, the Linux kernel uses lazy initialization. This feature allows a file system to be created faster: a process called "ext4lazyinit" runs in the background to initialize the remaining inode tables. As a result, your RAID rebuild is going to operate at minimal speed. This only applies if you have just created an ext4 filesystem. There are options to enable or disable this behaviour when running the mkfs.ext4 command:

lazy_itable_init – If enabled and the uninit_bg feature is enabled, the inode table will not be fully initialized by mke2fs. This speeds up filesystem initialization noticeably, but it requires the kernel to finish initializing the filesystem in the background when the filesystem is first mounted. If the option value is omitted, it defaults to 1 to enable lazy inode table zeroing.

lazy_journal_init – If enabled, the journal inode will not be fully zeroed out by mke2fs. This speeds up filesystem initialization noticeably, but carries some small risk if the system crashes before the journal has been overwritten entirely one time. If the option value is omitted, it defaults to 1 to enable lazy journal inode zeroing.

Tip #1: /proc/sys/dev/raid/

The /proc/sys/dev/raid/ directory holds the kernel's RAID resync speed limits (speed_limit_min and speed_limit_max). The member disks can also be tuned individually: increase the read-ahead, raise nr_requests, and disable NCQ for each disk ($i) in the array:

echo "Setting read-ahead to 16MiB for disk "$i" in RAID (probably)"
echo "Setting read_ahead_kb to 1024 for disk "$i" in RAID."
echo 1024 > /sys/block/$i/queue/read_ahead_kb
echo "Setting nr_requests to 256 for disk "$i" in RAID."
echo 256 > /sys/block/$i/queue/nr_requests
echo "Disabling NCQ for disk "$i" in RAID."
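A fleshed-out version of that per-disk loop might look like the following sketch. The device names are assumptions for illustration (substitute the actual members of your array), the script must run as root, and NCQ is disabled here by forcing queue_depth to 1, which is one common way to do it; the queue_depth file only exists for real SATA/SCSI disks, not for most virtual devices.

#!/bin/bash
# Sketch: tune each member disk of an md array (device names are assumptions).
disks="sdb sdc sdd sde"

for i in $disks; do
    echo "Setting read_ahead_kb to 1024 for disk $i in RAID."
    echo 1024 > /sys/block/$i/queue/read_ahead_kb

    echo "Setting nr_requests to 256 for disk $i in RAID."
    echo 256 > /sys/block/$i/queue/nr_requests

    echo "Disabling NCQ for disk $i in RAID."
    # Forcing the queue depth to 1 effectively disables NCQ on SATA/SCSI disks.
    echo 1 > /sys/block/$i/device/queue_depth
done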
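For the speed limits mentioned under Tip #1, speed_limit_min and speed_limit_max bound the resync/rebuild rate in KiB/s per device and can be read and raised with sysctl. The 50000 and 500000 values below are example numbers, not taken from this article:

$ sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
$ sudo sysctl -w dev.raid.speed_limit_min=50000
$ sudo sysctl -w dev.raid.speed_limit_max=500000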
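Relatedly, for the ext4 lazy-initialization note above: if you would rather pay the full initialization cost at mkfs time, so that ext4lazyinit does not run in the background and compete with a later rebuild, both options can be switched off via the -E extended-options flag. A minimal example, with /dev/md0 assumed as the target device:

$ sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0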