First of all, I’m sorry if this isn’t the best place to post this; but I’m not aware of any more general tech support / PC building / ASUS forums here that I could post this to instead.
I’m trying to turn an old PC of mine built around an ASUS M5A78L-M PLUS/USB3 into a home server, part of which means I’d like to put the two 6TB hard drives I’ve installed in it into a RAID 1 array. The manual, which I found a copy of online, briefly mentions that the board supports RAID 0, 1, and 10 for drives in ports 1-4, then says to look for more information in the “RAID/AHCI Supplementary Guide” on the support DVD, which I don’t have and can’t find online.

In the “SATA Configuration” section of the BIOS, I set the “SATA Port1 - Port4” setting from “[IDE]” to “[RAID]”. This added a step to the boot process where something labeled “RAID Option ROM Version 3.0.1540.39” by AMD scans my drives for several seconds, shows some information about them, and prompts me to press Ctrl+F to enter a “RAID Option ROM Utility”. Doing that takes me to a screen with 4 options: “View Drive Assignments”, “Define LD”, “Delete LD”, and “Controller Configuration”.

I go to “Define LD” and it lets me select 1 of 10 slots. When I select one, it lets me choose its RAID mode from RAID 0, RAID 1, RAID 10, JBOD, or “RAID READY”; change its “Stripe Block” (64 KB or 128 KB for 0 & 10, NA for the others), its “Fast Init” (On or Off for all options), and its “Gigabyte Boundary” (On or Off for the RAIDs, NA for JBOD and “RAID READY”); and it shows its “Cache Mode” (“WriteThru” for everything except JBOD, which is NA). It then lists all my SATA hard drives and lets me set each one’s “Assignment” to N or Y. I set “RAID Mode” to RAID 1, leave Fast Init and Gigabyte Boundary as On, add both drives to the LD, and press Ctrl+Y to save. It asks me to confirm I’m okay with all the data on the drives being wiped, then gives me the option to set the LD size manually or use the biggest available size, which I do.
After a few seconds I’m taken back to the Define LD Menu, with an entry in slot #1 - “RAID Mode” is listed as RAID 1, “Total Drv” is 2, “Capacity(GB)” is 1602.95, and “Status” is Functional. No extra prompts are given and everything seems mostly fine. As far as I can tell, the array is set up.
I insert my Linux Mint installation live disc and reboot. I open GNOME Disks: it’s still showing both drives independently, with no third 6TB drive nor any ~1.6TB drive. I open GParted; it shows both drives independently too. I start the Install Linux Mint utility, and once I get to the appropriate step, it detects exactly two hard drives, both 6TB. Every other Linux installer I’ve tried (Debian, OpenMediaVault, Fedora Server) has also detected two separate drives and no RAID anything. The two HDDs are connected to SATA ports #3 and #4. I’ve never set up any sort of RAID array before, and I don’t know enough to tell what exactly I’m doing wrong here. Can anyone help me figure out what the problem is?
Your motherboard supports a specific form of software RAID that it expects Windows to respect (with a driver, iirc).
Basically that bios setting will enable it to “boot from the software raid”, then once you get to the operating system it needs to understand how to handle the software raid and make it a proper volume.
This made sense in the before-times with windows and booting from spinning disks (well, not really but still), but now you’re better off making a proper software raid yourself.
Your chipset doesn’t have a proper hardware RAID controller (actually it probably has most of one by now, just not the IOP itself, which is basically a small CPU core that figures out what to do next), so it’s not proper hardware RAID, just firmware games that let your CPU do software RAID.
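If the Option ROM has already written its array metadata to the drives, Linux can show it and erase it before you build a real software RAID. A minimal sketch, assuming the two drives show up as /dev/sdb and /dev/sdc (those names are placeholders; check yours with lsblk first, and note that wipefs -a destroys everything on the drives):

```shell
# Confirm which block devices are the two 6TB drives
lsblk -o NAME,SIZE,MODEL

# List any fake-RAID metadata the Option ROM left on the drives
# (dmraid recognizes the AMD/Promise on-disk format, among others)
sudo dmraid -r

# Erase all RAID and filesystem signatures so neither the firmware
# nor the OS keeps seeing the old array.  DESTRUCTIVE!
sudo wipefs -a /dev/sdb /dev/sdc
```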
If you’re running Linux just use lvm, it’s fast and you’ll be pretty happy with the results.
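If you go the LVM route, a raid1-type logical volume across the two drives might look like the sketch below. The device names /dev/sdb and /dev/sdc and the vg_data/lv_data names are illustrative assumptions, and these commands wipe both drives:

```shell
# Turn both whole drives into physical volumes and group them
sudo pvcreate /dev/sdb /dev/sdc
sudo vgcreate vg_data /dev/sdb /dev/sdc

# Create a mirrored (RAID 1) logical volume using all free space
sudo lvcreate --type raid1 -m 1 -l 100%FREE -n lv_data vg_data

# Format and mount it, then check the mirror's sync progress
sudo mkfs.ext4 /dev/vg_data/lv_data
sudo mount /dev/vg_data/lv_data /mnt
sudo lvs -a -o name,copy_percent vg_data
```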
Better yet use zfs, it will change your life.
The RAID on your motherboard is a mess and you should avoid it like the plague. — Wendell from Level1Techs
Creating RAID with either zfs or btrfs is much easier, and they perform better than motherboard RAID implementations. If you want a UI, you can even install TrueNAS Core as a server and manage zfs pools, share them on the network, etc.
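For the zfs route, a two-way mirror is a single command. The pool name “tank” and the /dev/sdX device names below are just examples (by-id paths are usually safer than /dev/sdX since they don’t change between boots), and creating the pool wipes both drives:

```shell
# Create a mirrored pool from the two 6TB drives (destructive!)
sudo zpool create tank mirror /dev/sdb /dev/sdc

# ZFS creates and mounts the filesystem itself, at /tank by default
zpool status tank
df -h /tank
```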
What you’re trying to use is “hardware” RAID. Using hardware RAID is generally a bad idea. If you’re using Linux, use software RAID instead.
Also consider using Btrfs, it will make having a RAID setup even easier.
Nothing wrong with hardware raid in general, but most consumer motherboards don’t have true hardware raid — they have fake raid instead, which is basic firmware-level boot-time support for software raid. I.e. the BIOS understands just enough of the raid layout to boot the system before handing it off to the OS to manage.
I would not use fake raid on a Linux system if you can avoid it, full software raid is just better than most consumer hardware fake raid support.
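The usual “full software raid” on Linux is mdadm. A sketch of a two-drive RAID 1, again assuming the drives are /dev/sdb and /dev/sdc (verify with lsblk; creating the array wipes both drives):

```shell
# Build the mirror and watch the initial sync
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
cat /proc/mdstat

# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt

# Record the array so it assembles automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```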
True hardware raid generally requires a separate expensive card that has its own controller and ram buffer.
Yeah, never hardware raid. It’s a disaster waiting to happen, even with expensive dedicated cards.
Easiest is btrfs raid1; the drives don’t even have to be the same size as in other raid systems. For example, you can combine two 2TB drives with a single 4TB drive into one 4TB raid1, and you can remove and change things as you want. ZFS has a few more features but is much more rigid, and it’s more likely to break on Linux kernel updates since it isn’t part of the kernel like btrfs is.
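A btrfs raid1 across two drives is one mkfs call; the device names here are assumptions as before, so check yours with lsblk first (this wipes both drives):

```shell
# Create the filesystem with both data and metadata mirrored
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mounting either member brings up the whole filesystem
sudo mount /dev/sdb /mnt

# Show how data and metadata are distributed across the drives
sudo btrfs filesystem usage /mnt
```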