Main Page: Difference between revisions

From Jeremy Bryan Smith


===Storage===
====RAID====
=====From Hardware to Software=====
If you've ever been forced to use hardware RAID and later need to migrate from it to software RAID, read on ...
<br>
I had 6 disks behind a Dell PERC 6/i hardware RAID controller and wanted to set up a ZFS pool. But for reasons beyond my control at the time, swapping the controller for a proper HBA was not an option. So, I did the unthinkable and configured 6 RAID-0 virtual disks on the PERC controller, one for each physical disk, and then created a zpool from the virtual disks the controller exposed to the OS. It worked well enough for my purposes for years, but now I have the ability to swap in a real HBA. The question is: do I have to copy all of the data off, swap the controller, then copy it all back? I was surprised to discover that the data on the disks, with the exception of a small section of metadata at the end of each disk, was intact just as if the disks were being used directly. In fact, the metadata is in a format Linux's mdadm understands natively! As the mdadm output below shows, it is a DDF (SNIA Common RAID Disk Data Format) container, an open industry-standard layout that both the PERC firmware and mdadm implement. So that is what those PERC controllers are doing behind the scenes. (And probably no GPL story here: since DDF is an open SNIA standard that mdadm also implements, the controller does not need to be running any Linux code to produce it.) Anyway, this may only work if you configured each disk as a single-drive RAID-0; I haven't tested other configurations yet.
<br>
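If you want to check a member disk for yourself first, a non-destructive way to see what metadata is actually sitting on it is something like this (both commands only read; /dev/sdg is the same disk examined below):
 # List any filesystem/RAID signatures libblkid can see on the raw disk
 # (with no options, wipefs only reports and erases nothing)
 wipefs /dev/sdg
 # Ask mdadm to interpret whatever RAID metadata it finds on the device (also read-only)
 mdadm --examine /dev/sdg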
This is what I saw:
<span style="font-weight:bold; user-select:none">root@ubuntu-server:~#</span> parted /dev/sdg unit b p
<span style="color:#909090">Warning: Not all of the space available to /dev/sdg appears to be used, you can fix the GPT to use all of the space (an extra 1073200 blocks) or continue with the current setting?
Fix/Ignore? Ignore                                                       
Model: ATA ST9500530NS (scsi)
Disk /dev/sdg: 500107862016B
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start      End            Size           File system  Name      Flags
 1      1048576B   9437183B       8388608B                    Reserved
 2      10485760B  18874367B      8388608B                    GRUB      bios_grub
 3      20971520B  499558366719B  499537395200B  zfs          ZFS
</span>
Looking at the MDADM RAID(s):
<span style="font-weight:bold; user-select:none">root@ubuntu-server:~#</span> grep sdg /proc/mdstat
<span style="color:#909090">md125 : active raid0 sdg[0]
md127 : inactive sdf[5](S) sdh[4](S) sde[3](S) sdd[2](S) sdg[1](S) sdc[0](S)
</span>
<span style="font-weight:bold; user-select:none">root@ubuntu-server:~#</span> mdadm --detail /dev/md125
<span style="color:#909090">/dev/md125:
         Container : /dev/md/ddf0, member 0
        Raid Level : raid0
        Array Size : 487849984 (465.25 GiB 499.56 GB)
      Raid Devices : 1
     Total Devices : 1
             State : clean
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0
        Chunk Size : 128K
Consistency Policy : none
    Container GUID : 44656C6C:20202020:10000079:10281F17:4B825EED:49CB41EA
                  (Dell     02/22/20 10:39:41)
               Seq : 00000004
     Virtual Disks : 6
    Number   Major   Minor   RaidDevice State
       0       8       96        0      active sync   /dev/sdg
</span>
<span style="font-weight:bold; user-select:none">root@ubuntu-server:~#</span> mdadm --detail /dev/md127
<span style="color:#909090">/dev/md127:
           Version : ddf
        Raid Level : container
     Total Devices : 6
   Working Devices : 6
    Container GUID : 44656C6C:20202020:10000079:10281F17:4B825EED:49CB41EA
                  (Dell     02/22/20 10:39:41)
               Seq : 00000004
     Virtual Disks : 6
     Member Arrays : /dev/md/disk1_0 /dev/md/disk5_0 /dev/md/disk3_0 /dev/md/disk4_0 /dev/md/disk2_0 /dev/md122
    Number   Major   Minor   RaidDevice
       -       8       64        -        /dev/sde
       -       8       32        -        /dev/sdc
       -       8      112        -        /dev/sdh
       -       8       80        -        /dev/sdf
       -       8       48        -        /dev/sdd
       -       8       96        -        /dev/sdg
</span>
And the RAID device:
<span style="font-weight:bold; user-select:none">root@ubuntu-server:~#</span> parted /dev/md125 unit b p
<span style="color:#909090">Model: Linux Software RAID Array (md)
Disk /dev/md125: 499558383616B
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start      End            Size           File system  Name      Flags
 1      1048576B   9437183B       8388608B                    Reserved
 2      10485760B  18874367B      8388608B                    GRUB      bios_grub
 3      20971520B  499558366719B  499537395200B  zfs          ZFS
</span>
So, the capacity of the disk itself is 500,107,862,016 bytes, while the capacity of the RAID-0 device it presents is 499,558,383,616 bytes.<br>
That leaves 500,107,862,016 - 499,558,383,616 = 549,478,400 bytes, i.e. 549,478,400 / 1024 = 536,600 KiB (roughly 524 MiB), reserved at the end of the disk for the controller's DDF/mdadm metadata.<br>
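The same overhead can be computed straight from the block devices while the md arrays are still assembled; the sizes in the comments are just the ones parted reported above:
 blockdev --getsize64 /dev/sdg      # raw disk:              500107862016 bytes
 blockdev --getsize64 /dev/md125    # exposed RAID-0 device: 499558383616 bytes
 echo $(( (500107862016 - 499558383616) / 1024 ))   # 536600 KiB of trailing metadata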
Cool! So, can we just zero out that mdadm signature and use the disks normally? Let's see...
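I have not run this end-to-end yet, so treat the following as an untested sketch rather than a verified procedure; the pool name "tank" is a placeholder, and every member of the old container needs the same treatment:
 # Have backups first: this rewrites metadata in place.
 zpool export tank                         # stop using the pool ("tank" is a placeholder)
 mdadm --stop /dev/md125                   # stop the single-disk RAID-0 array(s)...
 mdadm --stop /dev/md127                   # ...and the DDF container itself
 # Preview which signatures would be touched (--no-act), limited to the DDF metadata
 wipefs --no-act --types ddf_raid_member /dev/sdg
 # Erase only the DDF signature at the end of the disk, leaving the GPT and ZFS labels alone
 wipefs --all --types ddf_raid_member /dev/sdg
 # The backup GPT header still sits where the old virtual disk ended (hence parted's warning),
 # so move it to the real end of the disk
 sgdisk -e /dev/sdg
 # Repeat the wipefs/sgdisk steps for the remaining disks, then bring the pool back
 zpool import tank
mdadm --zero-superblock on each member might do the same job as the wipefs step, but I have not verified that it handles DDF containers, so the explicit wipefs form feels safer to me.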
====Controllers====
See Serve the Home's [https://www.servethehome.com/buyers-guides/top-hardware-components-freenas-nas-servers/top-picks-freenas-hbas/ Top Picks for FreeNAS HBAs] for a good overview.<br>

Revision as of 23:45, 10 April 2020

Welcome to the Wiki of Jeremy Bryan Smith. Here I shall disseminate knowledge that I deem important enough that I may need to recall at a later point in time and/or that I believe may be useful for others out there.
Feel free to poke around, make comments, suggestions, and ingest the bits of information I have to share.

Regards,
Jeremy Bryan Smith


Technology

Applications and Plug-ins

Here I will provide my opinions, recommendations on usage and optimal configuration, links to related third-party tools, and my own related tools.

  • Web Browsers - The browsers, related extensions / plug-ins, and usability and security tweaks that I recommend
  • vim - My text editor of choice

Devices

Software

Programming

Android

Microsoft Windows

E-mail

Storage

RAID

From Hardware to Software


Controllers

See Serve the Home's Top Picks for FreeNAS HBAs for a good overview.
Basically:

  • Stay away from hardware RAID
  • There are a few RAID cards that can be flashed to IT mode to support proper HBA (passthrough/JBOD) mode; a quick way to check what a card is currently running is sketched after this list.
  • There are many re-branded / OEM versions that should also work fine (Dell, IBM)
  • There are many being sold used
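
A read-only way to see what a given card identifies itself as, and (for LSI SAS2-era cards) which firmware it is currently running, might look like this; the flash utility's name varies by generation (e.g. sas3flash for SAS3 parts), so treat it as a sketch:

  # See how the OS identifies the controller
  lspci -nn | grep -i -e raid -e sas
  # LSI's flash utility lists attached adapters and the firmware package (IT vs IR) they run
  sas2flash -listall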

Check these out:

  • Internal
    • LSI SAS HBA 9201-8i: 8 port, SATA III, 6 Gbps, ~ $40 - $50 USD on ebay as of 2020-03-02
      Good replacement for PERC 6/i
    • IBM M1015 (LSI 9220-8i): 8 port, SATA III, 6 Gbps
    • Dell H200: 8 port, SATA III, 6 Gbps, $30 - $40 USD on ebay as of 2020-03-02
      Good replacement for PERC 6/i
  • External

Filesystems

Data Integrity

Storage Analysis

Misc

Security/Privacy

Admirable People

Life Lessons

Lists

  • Travel - Lists of things to do before travelling and things to bring along.