This is not a generic HOWTO, but a working example. This memo is suited to my disk layout; yours may differ.
I have three disks with three partitions on each. Here is the fdisk output:
# fdisk -l /dev/sda
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot    Start        End    Sectors   Size Id Type
/dev/sda1              63     273104     273042 133.3M fd Linux raid autodetect
/dev/sda2          274432   42217471   41943040    20G 8e Linux LVM
/dev/sda3        42217472 1953525167 1911307696 911.4G bf Solaris
The first partition of each disk forms a mirrored MD device, formatted as /boot:
# mdadm --detail /dev/md10
/dev/md10:
        Version : 0.90
  Creation Time : Tue Apr 15 14:28:27 2014
     Raid Level : raid1
     Array Size : 136448 (133.27 MiB 139.72 MB)
  Used Dev Size : 136448 (133.27 MiB 139.72 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 10
    Persistence : Superblock is persistent

    Update Time : Sun Jan 10 01:00:09 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           UUID : 8970d8ee:ddf9b701:42694042:34338434 (local to host localhost)
         Events : 0.291

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8        1        1      active sync   /dev/sda1
       2       8       17        2      active sync   /dev/sdb1
The second partition of each disk is configured as a PV in the rootvg volume group:
# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  rootvg lvm2 a--  20.00g 8.41g
  /dev/sdb2  rootvg lvm2 a--  20.00g 8.41g
  /dev/sdc2  rootvg lvm2 a--  20.00g 8.41g
The third partition of each disk forms a ZFS pool:
# zpool status z1
  pool: z1
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
  scan: scrub repaired 0 in 25h54m with 0 errors on Sat Jan  9 04:58:44 2016
config:

        NAME                                                  STATE     READ WRITE CKSUM
        z1                                                    ONLINE       0     0     0
          raidz1-0                                            ONLINE       0     0     0
            ata-Hitachi_HUA721010KLA330_GTE005PAKB1DBL-part3  ONLINE       0     0     0
            ata-Hitachi_HUA721010KLA330_GTE005PAKAUMYL-part3  ONLINE       0     0     0
            ata-Hitachi_HUA721010KLA330_GTE005PAKB1DXL-part3  ONLINE       0     0     0

errors: No known data errors
My z1 pool holds all user data; I take snapshots of it for backup purposes, etc.
All LVs in the rootvg VG are created with the raid5 type. This layout protects my system from a single disk failure.
Create a new LV for the new OS version and populate it:
# lvcreate -n fc22slash -L3g --type raid5 /dev/rootvg
# mkfs.ext3 -j -b4096 /dev/rootvg/fc22slash
# mkdir -p /mnt/fc2{1,2}
# mount /dev/rootvg/fc22slash /mnt/fc22
# mount -o bind / /mnt/fc21
# rsync -av /mnt/fc21/ /mnt/fc22/
# umount /mnt/fc21
I have /var as a separate FS, so I repeat the same steps for it:
# lvcreate -n fc22var -L2g --type raid5 /dev/rootvg
# mkfs.ext3 -j -b4096 /dev/rootvg/fc22var
# mount /dev/rootvg/fc22var /mnt/fc22/var
# mount -o bind /var /mnt/fc21
# rsync -av /mnt/fc21/ /mnt/fc22/var/
# umount /mnt/fc21
Some things can go wrong during the update, so let's take snapshots of the initial state:
# umount /mnt/fc22/var /mnt/fc22
# lvcreate -L3g -s -n fc22slash_backup /dev/rootvg/fc22slash
# lvcreate -L2g -s -n fc22var_backup /dev/rootvg/fc22var
# mount /dev/rootvg/fc22slash /mnt/fc22
# mount /dev/rootvg/fc22var /mnt/fc22/var
Mount the pseudo filesystems into the new OS tree and chroot into it:
# cd /mnt/fc22
# mount -t proc proc proc
# mount -t sysfs sysfs sys
# mount -o bind /dev dev
# mount -o bind /dev/pts dev/pts
# mount -o bind /boot boot
# chroot .
# export PS1="chroot> "
Edit /etc/fstab in the chrooted environment, replacing the FC21 volumes with the new ones:
chroot> vi /etc/fstab
chroot> df
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-fc22slash  2.9G  1.9G  948M  67% /
/dev/mapper/rootvg-fc22var    2.0G  1.1G  729M  61% /var
devtmpfs                      993M     0  993M   0% /dev
/dev/md10                     126M   47M   70M  41% /boot
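As an illustration, the edited fstab entries might look like the lines below. This is a sketch based on the device names visible in the df output above; mount options and pass numbers are assumptions, not copied from my actual fstab.

```
/dev/mapper/rootvg-fc22slash  /      ext3  defaults  1 1
/dev/mapper/rootvg-fc22var    /var   ext3  defaults  1 2
/dev/md10                     /boot  ext3  defaults  1 2
```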
Clean the yum cache:
chroot> yum clean all
Cleaning repos: fedora rpmfusion-free rpmfusion-free-updates rpmfusion-nonfree rpmfusion-nonfree-updates updates
Cleaning up everything
chroot> rm -rf /var/cache/yum
Locate the fedora-release RPM at any Fedora repository available on the Internet. Update the packages using the rpm command, copy-pasting the links:
chroot> rpm -Uhv http://mirror.isoc.org.il/pub/fedora/releases/22/Server/x86_64/os/Packages/f/fedora-release-22-1.noarch.rpm \
    http://mirror.isoc.org.il/pub/fedora/releases/22/Server/x86_64/os/Packages/f/fedora-repos-22-1.noarch.rpm
Retrieving http://mirror.isoc.org.il/pub/fedora/releases/22/Server/x86_64/os/Packages/f/fedora-release-22-1.noarch.rpm
Retrieving http://mirror.isoc.org.il/pub/fedora/releases/22/Server/x86_64/os/Packages/f/fedora-repos-22-1.noarch.rpm
warning: /var/tmp/rpm-tmp.0zLI2Q: Header V3 RSA/SHA256 Signature, key ID 8e1431d5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:fedora-repos-22-1                ################################# [ 25%]
   2:fedora-release-22-1              ################################# [ 50%]
Cleaning up / removing...
   3:fedora-repos-21-3                ################################# [ 75%]
   4:fedora-release-21-2              ################################# [100%]
chroot>
Now run yum check-update to verify that the upgrade can proceed; finally, run yum update.
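For completeness, this step inside the chroot looks like the following (output omitted; the list of updated packages will differ per system, and the update itself can take a long time):

```shell
# Preview the packages that would be pulled in for the new release.
yum check-update

# If the list looks sane (no broken dependencies), perform the upgrade.
yum update
```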
Generate a new /boot/grub2/grub.cfg:
# grub2-mkconfig > /boot/grub2/grub.cfg-fc22
Now edit the resulting file. The script usually mixes all possible kernels with all possible root locations. Remove the unreasonable entries, then back up the old grub.cfg file and overwrite it with the new one.
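The backup-and-overwrite step might look like this. The backup name grub.cfg-fc21 is just an illustrative choice, not mandated by anything:

```shell
# Keep the old FC21 config around in case the new one fails to boot,
# then promote the reviewed FC22 config generated above.
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg-fc21
cp /boot/grub2/grub.cfg-fc22 /boot/grub2/grub.cfg
```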
Leave the chrooted environment, cross your fingers, and reboot the computer.
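A sketch of the teardown, assuming the mount points from the steps above (unmount in roughly the reverse order of mounting; the exact list depends on what is still mounted on your system):

```shell
exit                        # leave the chroot, back in the FC21 system
cd /
umount /mnt/fc22/boot       # bind mounts set up before chrooting
umount /mnt/fc22/dev/pts
umount /mnt/fc22/dev
umount /mnt/fc22/sys
umount /mnt/fc22/proc
umount /mnt/fc22/var        # the new OS filesystems themselves
umount /mnt/fc22
reboot
```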
Fix all services that should run on the server. For example, I needed to reinstall the ZFS software, etc.
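As one illustration, reinstalling ZFS after the release jump might look like the sketch below. The package names are assumptions based on the zfsonlinux.org packaging for Fedora, not taken from this memo; adapt them to however ZFS was originally installed:

```shell
# The kernel module must be rebuilt against the new release's kernel.
dnf install kernel-devel
dnf reinstall zfs

# Load the rebuilt module and bring the pool back.
modprobe zfs
zpool import z1
```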
Once everything works, remove the LVM snapshots:
# lvremove /dev/rootvg/fc22slash_backup
# lvremove /dev/rootvg/fc22var_backup