LSI StorCli Cheatsheet
Install Process
- Search for 'StorCli' here: http://www.avagotech.com/support/download-search
unzip 1.19.04_StorCLI.zip
cd storcli_all_os/
cd Linux
rpm -ivh ./storcli-1.19.04-1.noarch.rpm
rpm -qpl storcli-1.19.04-1.noarch.rpm
# Files are in /opt/MegaRAID/storcli/
# add this to your $PATH
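For example (untested sketch; assumes the default install location from the rpm above and a bash shell):

# make storcli64 callable without the full path
echo 'export PATH=$PATH:/opt/MegaRAID/storcli' >> ~/.bashrc
source ~/.bashrc
storcli64 show    # quick sanity check: should list the controllers the CLI can see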

Query

- Get everything
storcli64 /c0 show all
# and for a list of options: storcli64 /c0 show help
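If you want machine-readable output, newer StorCLI builds accept a trailing J on most commands for JSON (hedged: the exact JSON layout varies by CLI version, so check yours before scripting against it):

# JSON output, poked at with jq; the "Controllers"/"Response Data" keys are
# what my builds emit, yours may differ
storcli64 /c0 show all J | jq '.Controllers[0]."Response Data"'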

Reset UBad to UGood

storcli /c0 /e252 /sall set good
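You can also target a single drive rather than every slot; a hedged sketch (enclosure 252, slot 3 are just example coordinates):

# clear UBad on one specific drive; force is sometimes needed
storcli64 /c0/e252/s3 set good force
# if the drive then shows up as Foreign, inspect the foreign config with:
storcli64 /c0/fall show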

Configuring

- Example for setting up a RAID60 (on 36 drives, 18 drives per array)
[root@oswald ~]# storcli64 /c0 add vd type=raid60 drives=85:0-23,88:0-11 pdperarray=18
Controller = 0
Status = Success
Description = Add VD Succeeded
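There are more knobs at creation time if you want them; a hedged example (values illustrative, not a recommendation; see storcli64 /c0 show help for what your firmware supports):

# same RAID60 layout with an explicit strip size, WriteBack cache and read-ahead
storcli64 /c0 add vd type=raid60 drives=85:0-23,88:0-11 pdperarray=18 strip=256 wb ra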

Extend a RAID6 array (TT)

First, identify your new disks and the existing virtual disk(s) using the show all command above:
[root@storage1 ~]# /opt/MegaRAID/storcli/storcli64 /c0 show all
[SNIP]
Virtual Drives = 1

VD LIST :
=======

--------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC      Size Name
--------------------------------------------------------------
0/239 RAID6 Optl  RW     Yes     RWBD  -   ON  29.107 TB
--------------------------------------------------------------

VD=Virtual Drive| DG=Drive Group|Rec=Recovery
Cac=CacheCade|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|dflt=Default|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady
B=Blocked|Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO
sCC=Scheduled Check Consistency

Physical Drives = 8

PD LIST :
=======

----------------------------------------------------------------------------
EID:Slt DID State DG      Size Intf Med SED PI SeSz Model           Sp Type
----------------------------------------------------------------------------
252:0     9 UGood  - 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:1     8 UGood  - 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:2     2 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:3     3 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:4     4 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:5     5 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:6     0 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:7     1 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
----------------------------------------------------------------------------

EID=Enclosure Device ID|Slt=Slot No|DID=Device ID|DG=DriveGroup
DHS=Dedicated Hot Spare|UGood=Unconfigured Good|GHS=Global Hotspare
UBad=Unconfigured Bad|Sntze=Sanitize|Onln=Online|Offln=Offline|Intf=Interface
Med=Media Type|SED=Self Encryptive Drive|PI=Protection Info
SeSz=Sector Size|Sp=Spun|U=Up|D=Down|T=Transition|F=Foreign
UGUnsp=UGood Unsupported|UGShld=UGood shielded|HSPShld=Hotspare shielded
CFShld=Configured shielded|Cpybck=CopyBack|CBShld=Copyback Shielded
UBUnsp=UBad Unsupported|Rbld=Rebuild

Enclosures = 1

Enclosure LIST :
==============

------------------------------------------------------------------------
EID State Slots PD PS Fans TSs Alms SIM Port# ProdID     VendorSpecific
------------------------------------------------------------------------
252 OK        8  8  0    0   0    0   0 -     VirtualSES
------------------------------------------------------------------------

EID=Enclosure Device ID | PD=Physical drive count | PS=Power Supply count
TSs=Temperature sensor count | Alms=Alarm count | SIM=SIM Count | ProdID=Product ID
[SNIP]
In the above output the virtual disk ID is 239 and you can see it is a RAID6. There is just the one enclosure (252) with a total of 8 drives in it. The new drives are the UGood ones at the top, in slots 0 and 1. That gives us everything needed to build the following command line.
[root@storage1 ~]# /opt/MegaRAID/storcli/storcli64 /c0/v239 start migrate type=r6 option=add drives=252:0-1
CLI Version = 007.1705.0000.0000 Mar 31, 2021
Operating system = Linux 3.10.0-1127.el7.x86_64
Controller = 0
Status = Success
Description = Start VD Operation Success
You can monitor the progress with:
[root@storage1 ~]# /opt/MegaRAID/storcli/storcli64 /c0/v239 show migrate
CLI Version = 007.1705.0000.0000 Mar 31, 2021
Operating system = Linux 3.10.0-1127.el7.x86_64
Controller = 0
Status = Success
Description = None

VD Operation Status :
===================

--------------------------------------------------------------
VD  Operation Progress% Status      Estimated Time Left
--------------------------------------------------------------
239 Migrate   1         In progress 7 Days 2 Hours 35 Minutes
--------------------------------------------------------------
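A migration this size runs for days, so rather than re-typing that I'd wrap it in watch (interval is arbitrary):

# re-run the progress check every 5 minutes
watch -n 300 /opt/MegaRAID/storcli/storcli64 /c0/v239 show migrate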

Rescan disk and extend LVM
We waited a while and the migration is done, but now we need to work our way up the stack resizing everything: first the block device in Linux (assuming you can't reboot), then the LVM physical volume (PV). Updating the PV SHOULD update the volume group data automatically; after that we need to extend the LVM logical volume and finally the filesystem on top.
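As a condensed map of what follows (device and LV names are this box's, substitute your own):

# the whole ladder, bottom of the stack first
echo 1 > /sys/class/block/sdc/device/rescan      # 1. kernel re-reads the disk's new size
pvresize /dev/sdc                                # 2. grow the PV; the VG sees the free space
lvextend -l +100%Free /dev/mapper/vg0-archive    # 3. grow the LV into that free space
xfs_growfs /dev/mapper/vg0-archive               # 4. grow the XFS filesystem on top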
First up, check the job is complete:
[root@asgard ~]# ssh storage1
Last login: Tue Oct 31 16:54:37 2023 from tt-node01.localdomain
[root@storage1 ~]# /opt/MegaRAID/storcli/storcli64 /c0/v239 show migrate
CLI Version = 007.1705.0000.0000 Mar 31, 2021
Operating system = Linux 3.10.0-1127.el7.x86_64
Controller = 0
Status = Success
Description = None

VD Operation Status :
===================

------------------------------------------------------------
VD  Operation Progress% Status          Estimated Time Left
------------------------------------------------------------
239 Migrate   -         Not in progress -
------------------------------------------------------------
Then check the size:
[root@storage1 ~]# /opt/MegaRAID/storcli/storcli64 /c0/v239 show all
CLI Version = 007.1705.0000.0000 Mar 31, 2021
Operating system = Linux 3.10.0-1127.el7.x86_64
Controller = 0
Status = Success
Description = None

/c0/v239 :
========

--------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC      Size Name
--------------------------------------------------------------
0/239 RAID6 Optl  RW     Yes     RWBD  -   ON  43.661 TB
--------------------------------------------------------------

VD=Virtual Drive| DG=Drive Group|Rec=Recovery
Cac=CacheCade|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|dflt=Default|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady
B=Blocked|Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO
sCC=Scheduled Check Consistency

PDs for VD 239 :
==============

----------------------------------------------------------------------------
EID:Slt DID State DG      Size Intf Med SED PI SeSz Model           Sp Type
----------------------------------------------------------------------------
252:2     2 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:3     3 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:4     4 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:5     5 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:6     0 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:7     1 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:0     9 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
252:1     8 Onln   0 7.276 TB SAS  HDD N   N  512B HUS728T8TAL5204 U  -
----------------------------------------------------------------------------

EID=Enclosure Device ID|Slt=Slot No|DID=Device ID|DG=DriveGroup
DHS=Dedicated Hot Spare|UGood=Unconfigured Good|GHS=Global Hotspare
UBad=Unconfigured Bad|Sntze=Sanitize|Onln=Online|Offln=Offline|Intf=Interface
Med=Media Type|SED=Self Encryptive Drive|PI=Protection Info
SeSz=Sector Size|Sp=Spun|U=Up|D=Down|T=Transition|F=Foreign
UGUnsp=UGood Unsupported|UGShld=UGood shielded|HSPShld=Hotspare shielded
CFShld=Configured shielded|Cpybck=CopyBack|CBShld=Copyback Shielded
UBUnsp=UBad Unsupported|Rbld=Rebuild

VD239 Properties :
================
Strip Size = 256 KB
Number of Blocks = 93761961984
VD has Emulated PD = Yes
Span Depth = 1
Number of Drives Per Span = 8
Write Cache(initial setting) = WriteBack
Disk Cache Policy = Disk's Default
Encryption = None
Data Protection = None
Active Operations = None
Exposed to OS = Yes
OS Drive Name = /dev/sdc
Creation Date = 04-08-2022
Creation Time = 03:31:37 PM
Emulation type = default
Cachebypass size = Cachebypass-64k
Cachebypass Mode = Cachebypass Intelligent
Is LD Ready for OS Requests = Yes
SCSI NAA Id = 600304802545e3f02a7ea2d92c4cba60
Unmap Enabled = No
Great, it's now 43.661 TB. What about in the OS?
[root@storage1 ~]# lsblk /dev/sdc
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdc             8:32   0 29.1T  0 disk
└─vg0-archive 253:0    0 29.1T  0 lvm  /tt-archive
Let's fix that by rescanning just that device:
[root@storage1 ~]# echo 1 >/sys/class/block/sdc/device/rescan
[root@storage1 ~]# lsblk /dev/sdc
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdc             8:32   0 43.7T  0 disk
└─vg0-archive 253:0    0 29.1T  0 lvm  /tt-archive
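If several devices had grown at once, the same sysfs poke works in a loop (hedged; it just repeats the trick per disk):

# rescan every sd* block device rather than just sdc
for r in /sys/class/block/sd*/device/rescan; do echo 1 > "$r"; done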
OK. What about the physical volume? Spoiler: it will still report the old value, and we need to resize it with pvresize /dev/<blockdev>. Don't worry, for an extend nothing will happen if you pick the wrong device. Let's show all the output, from check through to resize:
[root@storage1 ~]# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sdc   vg0 lvm2 a--  <29.11t    0
# we don't need these next two but it's useful to refer back to later
[root@storage1 ~]# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg0   1   1   0 wz--n- <29.11t    0
[root@storage1 ~]# lvs
  LV      VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  archive vg0 -wi-ao---- <29.11t
[root@storage1 ~]# pvresize /dev/sdc
  Physical volume "/dev/sdc" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
[root@storage1 ~]# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sdc   vg0 lvm2 a--  43.66t 14.55t
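If you're nervous, LVM commands take --test for a dry run that writes nothing:

# report what pvresize would do without changing any metadata
pvresize --test /dev/sdc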
OK, now recheck vgs and lvs, then extend the logical volume (in this case lvextend -l +100%Free /dev/mapper/vg0-archive):
[root@storage1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 1 0 wz--n- 43.66t 14.55t
[root@storage1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
archive vg0 -wi-ao---- <29.11t
[root@storage1 ~]# lvextend -l +100%Free /dev/mapper/vg0-archive
Size of logical volume vg0/archive changed from <29.11 TiB (238448 extents) to 43.66 TiB (357673 extents).
Logical volume vg0/archive successfully resized.
[root@storage1 ~]# lsblk /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 43.7T 0 disk
└─vg0-archive 253:0 0 43.7T 0 lvm /tt-archive
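Aside: lvextend can grow the filesystem in the same step via -r (--resizefs, which calls fsadm and knows about XFS); we did it separately below, but this would have collapsed the last two steps into one:

# grow LV and filesystem together
lvextend -r -l +100%Free /dev/mapper/vg0-archive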
Great, all the block-level manipulation is done. What about the filesystem? Another spoiler... it's still ~30 TB, so we need to resize the fs with xfs_growfs /dev/mapper/vg0-archive as this is XFS.
[root@storage1 ~]# df -h /tt-archive
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-archive 30T 27T 2.3T 93% /tt-archive
[root@storage1 ~]# blkid /dev/mapper/vg0-archive
/dev/mapper/vg0-archive: UUID="73ce74c0-2e48-4eeb-a0ea-f566fedd6451" TYPE="xfs"
[root@storage1 ~]# xfs_growfs /dev/mapper/vg0-archive
[SNIP]
[root@storage1 ~]# df -h /tt-archive
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-archive 44T 27T 17T 62% /tt-archive
Note that on this system I ran the growfs command in screen as I was worried it might take a while. For my future self: it was instant.
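If you want a read-only sanity check after the grow, xfs_info shows the new block counts without touching anything:

# inspect the grown filesystem's geometry
xfs_info /tt-archive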