Ubuntu Server Setup with Dual RAID Systems
This guide walks you through installing and configuring Ubuntu Server with two RAID setups using mdadm: RAID 1 for SSDs and RAID 0 for HDDs. The option described here installs the OS on a single SSD without RAID and combines the HDDs into a RAID 0 array (striped, without redundancy).
Start the Ubuntu installation:
installimage
Please note: by default, all disks are used for software RAID. Change this to SWRAID 0 if you want to leave your other hard disk(s) untouched.
Step 1: Prepare the System
In the installimage editor, comment out the HDDs (DRIVE3-DRIVE6) as shown below, so that only the SSDs (DRIVE1 and DRIVE2) are used for the OS installation.
DRIVE1 /dev/nvme0n1
DRIVE2 /dev/nvme1n1
#Comment for HDDs
#DRIVE3 /dev/sda
#DRIVE4 /dev/sdb
#DRIVE5 /dev/sdc
#DRIVE6 /dev/sdd
# change SWRAID 1 to SWRAID 0
SWRAID 0
With SWRAID 0 selected, the installer uses only the first SSD and leaves the second one untouched.
Save the changes with F2 and close the editor with F10 (or the X in the upper-right corner).
Step 2: Post-Installation - Check Drive Configuration
Once Ubuntu is installed, verify the drive configuration with:
lsblk
The output should look similar to this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 3.2G 1 loop
sda 8:0 0 20T 0 disk
sdb 8:16 0 20T 0 disk
sdc 8:32 0 20T 0 disk
sdd 8:48 0 20T 0 disk
nvme0n1 259:0 0 953.9G 0 disk
├─nvme0n1p1 259:2 0 32G 0 part
├─nvme0n1p2 259:6 0 1G 0 part
└─nvme0n1p3 259:7 0 920.9G 0 part
nvme1n1 259:1 0 953.9G 0 disk
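Before repurposing the remaining drives, it can help to confirm that they carry no leftover partition tables or old RAID signatures. A quick, non-destructive check (device names taken from the lsblk output above):

```shell
# List size, type and any filesystem signature on the untouched drives
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/nvme1n1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Dry run: report (without erasing) any leftover signatures on the second SSD
wipefs --no-act /dev/nvme1n1
```

Drop --no-act only if you actually intend to erase the signatures it reports.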
Step 3: Configure RAID for Remaining Drives
Create RAID 0 for the HDDs (sda, sdb, sdc, and sdd) using the following commands:
mdadm --create --verbose /dev/md10 --level=0 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md10
mdadm may ask for confirmation before creating the array (answer y). mkfs.ext4 then prints "Writing superblocks and filesystem accounting information:"; just wait until it reports done.
mkdir -p /mnt/storage
mount /dev/md10 /mnt/storage
Verify the RAID status:
cat /proc/mdstat
and
df -h
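Beyond /proc/mdstat and df, mdadm itself can report the array's health; the state should read "clean" and all four member drives should be listed as active:

```shell
# Detailed array report: level, chunk size, state, and member drives
mdadm --detail /dev/md10
```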
Step 4: Persist RAID Configuration
Ensure the RAID array persists across reboots by appending its configuration to /etc/mdadm/mdadm.conf, updating the initramfs, and adding a mount entry to /etc/fstab.
mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
update-initramfs -u
UUID=$(blkid -s UUID -o value /dev/md10)
echo "UUID=$UUID /mnt/storage ext4 defaults,nofail 0 2" | tee -a /etc/fstab
Note: if you are not running as root, prefix each command with sudo; for the first and last commands, which use pipes, sudo must also go before tee (i.e., ... | sudo tee -a ...).
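Before rebooting, it is worth confirming that the new fstab entry actually resolves, so a typo does not stall the boot. A minimal check:

```shell
# Unmount the manual mount, then let fstab do the mounting
umount /mnt/storage
mount -a                 # any fstab error shows up here instead of at boot
findmnt /mnt/storage     # prints the device, filesystem and options used
```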
Step 5: Verify Setup
Reboot and ensure the RAID array is functioning as expected. Confirm that the RAID 0 array is mounted at /mnt/storage.
Optional Troubleshooting: in case of problems after reboot:
1- Check this:
cat /etc/fstab
it should look something like this:
proc /proc proc defaults 0 0
# /dev/nvme0n1p1
UUID=d496782e-88ac-478a-be5f-fa6bde9a1639 none swap sw 0 0
# /dev/nvme0n1p2
UUID=33ee252c-dd16-400a-9e53-e56946186a4a /boot ext3 defaults 0 0
# /dev/nvme0n1p3
UUID=af53c1f3-7098-442f-adb2-e014dde72f96 / ext4 defaults 0 0
UUID=df8c5776-c534-4e6a-8aa6-d9c2b922dd56 /mnt/storage ext4 defaults,nofail 0 2
Then check the mount with:
df -h | grep /mnt/storage
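2- If /mnt/storage is missing and /proc/mdstat shows no array, the array may not have been assembled at boot. A common recovery sketch:

```shell
# Reassemble the array from the superblocks on its member drives
mdadm --assemble --scan

# The array sometimes comes up under a fallback name such as /dev/md127;
# check the current name, then re-run the persist step so it sticks
cat /proc/mdstat
mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
update-initramfs -u
```

Afterwards, remove any duplicate ARRAY lines from /etc/mdadm/mdadm.conf, since tee -a appends rather than replaces.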
More info about RAID Systems

| RAID Level | Description | Pros | Cons |
| --- | --- | --- | --- |
| RAID 0 | Data is striped across multiple drives for increased speed. | High performance and full capacity utilization. | No redundancy; if one drive fails, all data is lost. |
| RAID 1 | Data is mirrored across two drives for redundancy. | Redundancy ensures data safety; easy recovery. | Storage capacity is halved; slower write speeds. |
| RAID 5 | Data and parity information are striped across multiple drives. | Redundancy with better capacity utilization. | Slower write speeds due to parity calculations; requires at least 3 drives. |
| RAID 10 | Combines RAID 0 and RAID 1 for performance and redundancy. | High performance and redundancy. | High cost; requires a minimum of 4 drives. |
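As a worked example, with the four 20 TB HDDs from this guide, the usable capacity per level works out as follows (simple shell arithmetic; RAID 1 is shown per mirrored pair):

```shell
# Usable capacity for four 20 TB drives under common RAID levels
drives=4
size=20  # TB per drive

echo "RAID 0:  $(( drives * size )) TB"            # striping uses full capacity
echo "RAID 1:  $(( size )) TB per mirrored pair"   # mirroring halves capacity
echo "RAID 5:  $(( (drives - 1) * size )) TB"      # one drive's worth goes to parity
echo "RAID 10: $(( drives * size / 2 )) TB"        # striped mirrors: half capacity
```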
By following these steps, you can configure your Ubuntu Server with the OS on an SSD and a striped RAID 0 array on the HDDs, giving you the full combined capacity and speed of the drives for bulk storage. Remember that RAID 0 offers no redundancy, so back up anything important.