DISCLAIMER: The author is not responsible for any loss of data or other problems caused by executing the commands described here.
These days the need to mirror a system disk arises less often: most servers are virtual and sit on fault-tolerant storage, and even physical servers usually ship with a RAID controller that mirrors the system disk out of the box.
But what if that is not your case and you have to mirror the system disk yourself? Let's practice on a VM first. I took a Red Hat 7.2 system installed Next->Next style onto a single disk.
First, we need to understand what we have at the starting point and what we want to see at the end. Then we can plan the migration steps.
[root@localhost ~]# df
Filesystem            1K-blocks   Used Available Use% Mounted on
/dev/mapper/rhel-root  18307072 884460  17422612   5% /
devtmpfs                 497924      0    497924   0% /dev
tmpfs                    508384      0    508384   0% /dev/shm
tmpfs                    508384   6760    501624   2% /run
tmpfs                    508384      0    508384   0% /sys/fs/cgroup
/dev/vda1                508588 109896    398692  22% /boot
tmpfs                    101680      0    101680   0% /run/user/0
It looks like the first partition of our disk is mounted as /boot and the root filesystem lives in LVM. Let's check:
[root@localhost ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: QEMU     Model: QEMU DVD-ROM     Rev: 2.5+
  Type:   CD-ROM                           ANSI  SCSI revision: 05
There are no SCSI devices other than the CD-ROM. My virtual disk is defined as VIRTIO, so it does not show up here.
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

  11        0    1048575 sr0
 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   18317312 dm-0
 253        1    2097152 dm-1
This is another good place to check what is going on: sr0 is the SCSI CD-ROM, vda is my virtual disk, vda1 and vda2 are its partitions, and dm-* are device-mapper devices whose names can be translated into a readable form.
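If you want to translate them yourself, two quick commands help (an illustrative aside, not part of the original session):

ls -l /dev/mapper   # symlinks such as rhel-root -> ../dm-0
dmsetup info -c     # device-mapper names with their major:minor numbers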
[root@localhost ~]# fdisk -l /dev/vda

Disk /dev/vda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0007d9a3

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   83  Linux
/dev/vda2         1026048    41943039    20458496   8e  Linux LVM
It seems we are in luck: there are no plain data partitions; everything except /boot is handled by LVM.
What will we implement? I'll add a second disk with the same layout, mirror the first partition with MD (mdadm), add the second partition to LVM, and then mirror all LVs with LVM. A boot record must also be written to the second disk, and the BIOS will be configured to boot from the second disk as the second choice. Peanuts.
TAKE A BACKUP !!!
All operations here are very dangerous. Even if they look safe, a small typo can turn your server into a pile of useless hardware.
DISCLAIMER: The author is not responsible for any loss of data or other problems caused by executing the commands described here.
Add a disk. In a real situation you may not know whether the disk is empty or contains data. If you have the opportunity to connect the drive somewhere else first, wipe it there. Clearing the first 512 bytes is usually enough to erase an MBR partition table. If you need to erase a GPT, then your first disk is probably large and carries a GPT too; in that case you should follow another article, not this one.
In a virtual world, a reboot is not required. Some hardware also allows hot-swapping drives. In the worst case, you must power off the system, insert the new disk and power it back on. If the new disk still has content, the system may boot from it instead of the primary disk, so try to erase its data before inserting it.
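A minimal wipe sketch, assuming the disk shows up as /dev/sdX on the other machine (triple-check the device name, this is destructive):

dd if=/dev/zero of=/dev/sdX bs=512 count=1   # zero the MBR partition table
wipefs -a /dev/sdX                           # or remove all known signatures (fs, RAID, LVM)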
[root@localhost ~]# tail /var/log/messages
Mar 12 06:25:06 localhost kernel: pci 0000:00:09.0: BAR 4: assigned [mem 0x40000000-0x407fffff 64bit pref]
Mar 12 06:25:06 localhost kernel: pci 0000:00:09.0: BAR 1: assigned [mem 0x40800000-0x40800fff]
Mar 12 06:25:06 localhost kernel: pci 0000:00:09.0: BAR 0: assigned [io 0x1000-0x103f]
Mar 12 06:25:06 localhost kernel: virtio-pci 0000:00:09.0: enabling device (0000 -> 0003)
Mar 12 06:25:06 localhost kernel: vdb: unknown partition table
We decided that the partitioning scheme suits us, so we will simply copy the layout from the old disk to the new one (assuming we added a disk of the same size for mirroring):
[root@localhost ~]# sfdisk -d /dev/vda | sfdisk /dev/vdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/vdb: 41610 cylinders, 16 heads, 63 sectors/track

sfdisk: /dev/vdb: unrecognized partition table type

Old situation:
sfdisk: No partitions found

New situation:
Units: sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/vdb1   *      2048   1026047    1024000  83  Linux
/dev/vdb2       1026048  41943039   40916992  8e  Linux LVM
/dev/vdb3             0         -          0   0  Empty
/dev/vdb4             0         -          0   0  Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: partition 2 does not start at a cylinder boundary
Warning: partition 2 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
[root@localhost ~]#
The above command is very dangerous. THIS command dumps the partition table of vda and writes it to vdb. YOUR command may differ. Check seven times before pressing Enter: overwriting a working partition table can turn your server into a brick. Did you take that backup?
Now change the partition type of vdb1 to 'Linux raid autodetect':
[root@localhost ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1   *        2048     1026047      512000   83  Linux
/dev/vdb2         1026048    41943039    20458496   8e  Linux LVM

Command (m for help): t
Partition number (1,2, default 2): 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Changing the partition type is not strictly necessary, but it helps the kernel and the tools recognize the partition as a RAID member.
Check if the kernel knows about the new partitions:
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

  11        0    1048575 sr0
 252        0   20971520 vda
 252        1     512000 vda1
 252        2   20458496 vda2
 252       16   20971520 vdb
 252       17     512000 vdb1
 252       18   20458496 vdb2
 253        0   18317312 dm-0
 253        1    2097152 dm-1
[root@localhost ~]#
If you do not see the new vdb1 and vdb2 here, run kpartx -a /dev/vdb to force the kernel to re-read the partition table of the new disk. If partitions existed on the disk earlier (in the case of a non-empty disk), you may need to run kpartx -d /dev/vdb before re-reading.
The mdadm package is not part of the minimal installation; install it if necessary.
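On RHEL 7 this is a one-liner:

yum -y install mdadm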
[root@localhost ~]# mdadm -C /dev/md0 -n2 -l1 /dev/vdb1 missing
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
The degraded mirror starts with one device missing. The warning is aimed at older boot loaders; our grub2 understands md/v1.x metadata, so answer "yes."
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vdb1[0]
      511680 blocks super 1.2 [2/1] [U_]

unused devices: <none>
The underscore in [U_] is our "missing" device. Now create the /etc/mdadm.conf config file. This is not strictly necessary, but it pins the MD device names and helps RAID maintenance. Take the "Array UUID" from mdadm --examine and write a correct config file:
[root@localhost ~]# mdadm --examine /dev/vdb1 | grep "Array UUID"
     Array UUID : 1303fbf7:ede1d764:ef401616:2b234c54
[root@localhost ~]# cat /etc/mdadm.conf
MAILADDR root
ARRAY /dev/md0 uuid=1303fbf7:ede1d764:ef401616:2b234c54
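Alternatively, mdadm can generate the ARRAY line for you; append it and review the result (a common shortcut, not part of the original session):

mdadm --detail --scan >> /etc/mdadm.conf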
Check that everything works as it should:
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
unused devices: <none>
[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 1 drive (out of 2).
[root@localhost ~]# mdadm -R /dev/md0
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vdb1[0]
      511680 blocks super 1.2 [2/1] [U_]

unused devices: <none>
Copy the /boot content to its new location:
[root@localhost ~]# mount | grep boot
/dev/vda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@localhost ~]# grep boot /etc/fstab
UUID=98355cc5-4eca-4576-9b97-153755cbd2a3 /boot  xfs  defaults  0 0
Surprise: /boot is formatted as xfs, so let's do the same:
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=256    agcount=4, agsize=31980 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=127920, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mkdir -p /mnt/d
[root@localhost ~]# mount /dev/md0 /mnt/d
[root@localhost ~]# (cd /boot && tar cf - .) | (cd /mnt/d && tar xpfB -)
[root@localhost ~]# umount /boot /mnt/d
[root@localhost ~]# mount /dev/md0 /boot
Fix /etc/fstab /boot entry to mount /dev/md0:
[root@localhost ~]# grep boot /etc/fstab
/dev/md0 /boot  xfs  defaults  0 0
#UUID=98355cc5-4eca-4576-9b97-153755cbd2a3 /boot  xfs  defaults  0 0
Note that the original /boot remains unchanged, because we unmounted it. All changes are made in the new location, so we can still boot the system the old, known-working way.
Rebuild the initrd so it includes the MD modules and configuration files. They are picked up automatically; you just need to rebuild it:
[root@localhost ~]# mkinitrd -f /boot/initramfs-$(uname -r).img $(uname -r)
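To double-check that the MD bits actually landed in the image, you can list its content (assuming dracut's lsinitrd is available, as it is on RHEL 7):

lsinitrd /boot/initramfs-$(uname -r).img | grep -i -e mdadm -e raid1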
Rebuild grub.cfg and write the boot sector:
[root@localhost ~]# grub2-mkconfig > /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
/usr/sbin/grub2-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub2-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
/usr/sbin/grub2-probe: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
Found linux image: /boot/vmlinuz-0-rescue-b5480fa09a54b645ab8ad5a0eccb24d7
Found initrd image: /boot/initramfs-0-rescue-b5480fa09a54b645ab8ad5a0eccb24d7.img
done
[root@localhost ~]# grub2-install /dev/vdb
Installing for i386-pc platform.
grub2-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
grub2-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
Installation finished. No error reported.
Time to test. Shut the server down, enter the BIOS settings, remove vda from the boot list and leave only the vdb drive. Boot the server and check the dmesg output. Make sure /dev/vda1 was not used during boot, since we will destroy it in the next step.
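A couple of illustrative checks after the reboot (not from the original session):

findmnt /boot        # should report /dev/md0, not /dev/vda1
dmesg | grep -w md0  # the array should have been assembled from vdb1 alone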
So far, so good. Let's finish mirroring /boot. First we change the partition type of /dev/vda1 to "fd":
[root@localhost ~]# fdisk /dev/vda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/vda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   83  Linux
/dev/vda2         1026048    41943039    20458496   8e  Linux LVM

Command (m for help): t
Partition number (1,2, default 2): 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Another dangerous command: the old boot partition will be destroyed for the sake of building a healthy MD device:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vdb1[0]
      511680 blocks super 1.2 [2/1] [U_]

unused devices: <none>
[root@localhost ~]# mdadm -a /dev/md0 /dev/vda1
mdadm: added /dev/vda1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vda1[2] vdb1[0]
      511680 blocks super 1.2 [2/1] [U_]
      [===========>.........]  recovery = 57.8% (295936/511680) finish=0.0min speed=49322K/sec

unused devices: <none>
Wait until the synchronization is complete, then recreate the boot sector on /dev/vda.
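To follow the resync progress, polling mdstat is enough (illustrative):

watch -n1 cat /proc/mdstat   # Ctrl-C once the array shows [UU]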
[root@localhost ~]# grub2-install /dev/vda
Installing for i386-pc platform.
grub2-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
grub2-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
Installation finished. No error reported.
Reboot the server, add the first disk to the boot sequence as the first device, check the boot procedure.
The difficult part is over. Now add the second partition to the existing VG:
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  rhel   1   2   0 wz--n- 19.51g 40.00m
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vda2  rhel lvm2 a--  19.51g 40.00m
[root@localhost ~]# pvcreate /dev/vdb2
  Physical volume "/dev/vdb2" successfully created
[root@localhost ~]# vgextend rhel /dev/vdb2
  Volume group "rhel" successfully extended
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vda2  rhel lvm2 a--  19.51g 40.00m
  /dev/vdb2  rhel lvm2 a--  19.51g 19.51g
We need free space on both PVs to store the mirror metadata and log. If you have none, try to shrink something; in my setup, shrinking the swap LV would be the simplest way: swapoff, lvreduce, re-create swap, swapon. But 40.00m of free space is quite enough for the mirror metadata.
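For reference, shrinking swap would look roughly like this (a sketch using this article's LV names; adapt the size to your system):

swapoff /dev/rhel/swap
lvreduce -f -L -256m /dev/rhel/swap   # return 256m to the VG
mkswap /dev/rhel/swap                 # re-create the swap signature on the smaller LV
swapon /dev/rhel/swap

So, let's start: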
[root@localhost ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel -wi-ao---- 17.47g
  swap rhel -wi-ao----  2.00g
[root@localhost ~]# lvconvert -b -m1 /dev/rhel/root
[root@localhost ~]# lvconvert -b -m1 /dev/rhel/swap
[root@localhost ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel rwi-aor--- 17.47g                                    2.24
  swap rhel rwi-aor---  2.00g                                    0.00
Wait until Cpy%Sync reaches 100% before any destructive tests. You can reboot the server; LVM will continue synchronizing on its own.
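Progress can be polled with lvs (illustrative):

lvs -o lv_name,copy_percent rhel   # repeat until both LVs show 100.00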
Rebuild the initrd to reflect the changes in the LVM structure:
[root@localhost ~]# mkinitrd -f /boot/initramfs-$(uname -r).img $(uname -r)
/usr/sbin/dracut: line 639: warning: setlocale: LC_MESSAGES: cannot change locale (C.utf8): No such file or directory
/usr/sbin/dracut: line 640: warning: setlocale: LC_CTYPE: cannot change locale (C.utf8): No such file or directory
NOTE: This rebuild is required; do not skip it, otherwise the server will not boot even with both disks present.
Restart the server to verify the boot procedure is working.
Now let's remove the second drive and see in /var/log/messages what happens:
Mar 12 08:48:48 localhost kernel: md: md0 still in use.
Mar 12 08:48:48 localhost kernel: md/raid1:md0: Disk failure on vdb1, disabling device.#012md/raid1:md0: Operation continuing on 1 devices.
Mar 12 08:48:57 localhost kernel: md: super_written gets error=-5, uptodate=0
Mar 12 08:48:57 localhost kernel: md/raid1:mdX: Disk failure on dm-5, disabling device.#012md/raid1:mdX: Operation continuing on 1 devices.
Mar 12 08:48:57 localhost lvm[1476]: Device #1 of raid1 array, rhel-root, has failed.
Mar 12 08:48:57 localhost lvm[1476]: WARNING: Device for PV XOFMua-NHIr-1FrR-fpeu-i47H-kQN1-KJ3UOx not found or rejected by a filter.
Mar 12 08:48:57 localhost lvm[1476]: WARNING: Device for PV XOFMua-NHIr-1FrR-fpeu-i47H-kQN1-KJ3UOx already missing, skipping.
Mar 12 08:48:57 localhost lvm[1476]: WARNING: Device for PV XOFMua-NHIr-1FrR-fpeu-i47H-kQN1-KJ3UOx not found or rejected by a filter.
Mar 12 08:48:57 localhost lvm[1476]: Use 'lvconvert --repair rhel/root' to replace failed device.
As you can see, everything is still functional. But what about booting? Reboot the server.
Amazing!
Return the second device as-is. LVM immediately detected the paired drive and resynchronized the LVs seamlessly:
[root@localhost ~]# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel rwi-aor--- 8.47g                                   100.00
  swap rhel rwi-aor--- 1.00g                                   100.00
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 vda1[2]
      511680 blocks super 1.2 [2/1] [_U]

unused devices: <none>
However, the MD (/boot) device did not pick the returned disk up automatically. Let's help it along:
[root@localhost ~]# mdadm -a /dev/md0 /dev/vdb1
mdadm: added /dev/vdb1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 vdb1[3] vda1[2]
      511680 blocks super 1.2 [2/1] [_U]
      [=====>...............]  recovery = 25.0% (128000/511680) finish=0.1min speed=32496K/sec

unused devices: <none>
Let's repeat the same test, now with the first disk. Oops. My server crashed, probably because the "equipment failures" followed each other too quickly. However, the subsequent boot was successful, and the server came back up on the second disk alone. Re-adding the disk went through the same scenario, and the mirror rebuilt fine.
In the next test, we will replace the failed disk with an empty one. The rebuild procedure is slightly different. I shut down the server and replaced one disk with a blank one.
First of all, copy the partition table to the new disk. Double-check which drive is the source and which is the destination.
[root@localhost ~]# fdisk -l /dev/vda

Disk /dev/vda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000f2f15

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     1026047      512000   fd  Linux raid autodetect
/dev/vda2         1026048    20971519     9972736   8e  Linux LVM

[root@localhost ~]# fdisk -l /dev/vdb

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
In my case vda is the source, so the command is:
[root@localhost ~]# sfdisk -d /dev/vda | sfdisk /dev/vdb
Now rebuild the MD device:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 vda1[2]
      511680 blocks super 1.2 [2/1] [_U]

unused devices: <none>
[root@localhost ~]# mdadm -a /dev/md0 /dev/vdb1
mdadm: added /dev/vdb1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 vdb1[3] vda1[2]
      511680 blocks super 1.2 [2/1] [_U]
      [====>................]  recovery = 23.7% (121792/511680) finish=0.1min speed=60896K/sec

unused devices: <none>
Wait until the synchronization is complete, then install the boot sector. Again, in my case it is vdb that needs fixing:
[root@localhost ~]# grub2-install /dev/vdb
Installing for i386-pc platform.
grub2-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
grub2-install: warning: Couldn't find physical volume `(null)'. Some modules may be missing from core image..
Installation finished. No error reported.
Now let's fix the faulty LVM PV. First, remove the missing one from the VG:
[root@localhost ~]# pvs
  WARNING: Device for PV uTPvR8-zeuP-tdJT-AW3r-DCuj-hazO-QxqYdB not found or rejected by a filter.
  PV             VG   Fmt  Attr PSize PFree
  /dev/vda2      rhel lvm2 a--  9.51g 32.00m
  unknown device rhel lvm2 a-m  9.51g 32.00m
[root@localhost ~]# vgreduce rhel --removemissing --force
  WARNING: Device for PV uTPvR8-zeuP-tdJT-AW3r-DCuj-hazO-QxqYdB not found or rejected by a filter.
  WARNING: Device for PV uTPvR8-zeuP-tdJT-AW3r-DCuj-hazO-QxqYdB not found or rejected by a filter.
  Wrote out consistent volume group rhel
[root@localhost ~]# pvcreate /dev/vdb2
  Physical volume "/dev/vdb2" successfully created
[root@localhost ~]# pvs
  PV        VG   Fmt  Attr PSize PFree
  /dev/vda2 rhel lvm2 a--  9.51g 32.00m
  /dev/vdb2      lvm2 ---  9.51g  9.51g
[root@localhost ~]# vgextend rhel /dev/vdb2
  Volume group "rhel" successfully extended
[root@localhost ~]# pvs
  PV        VG   Fmt  Attr PSize PFree
  /dev/vda2 rhel lvm2 a--  9.51g 32.00m
  /dev/vdb2 rhel lvm2 a--  9.51g  9.51g
[root@localhost ~]# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel rwi-aor-r- 8.47g                                   100.00
  swap rhel rwi-aor-r- 1.00g                                   100.00
You can see a new r appear in the attribute list, which indicates that the volume needs a refresh/repair. Having fixed the PV, let's fix the LVs. First we convert each back to a plain linear LV, and then convert it into a mirror again:
[root@localhost ~]# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel rwi-aor-r- 8.47g                                   100.00
  swap rhel rwi-aor-r- 1.00g                                   100.00
[root@localhost ~]# lvconvert -m0 /dev/rhel/swap
[root@localhost ~]# lvconvert -m0 /dev/rhel/root
[root@localhost ~]# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel -wi-ao---- 8.47g
  swap rhel -wi-ao---- 1.00g
[root@localhost ~]# lvconvert -b -m1 /dev/rhel/root
[root@localhost ~]# lvconvert -b -m1 /dev/rhel/swap
[root@localhost ~]# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel rwi-aor--- 8.47g                                   1.01
  swap rhel rwi-aor--- 1.00g                                   6.25
Wait until both LVs are 100% synced.
NOTE: This is the old-school way; the modern way is lvchange --rebuild PV or lvconvert --repair. Either single command replaces the whole block of commands above.
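For example, either of these should rebuild the degraded leg (a sketch using this article's names; consult the man pages before running):

lvconvert --repair rhel/root            # replace the failed mirror leg from free PV space
lvchange --rebuild /dev/vdb2 rhel/root  # or rebuild the leg sitting on the given PV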
If something went wrong (for example, you skipped the mkinitrd step or the boot record creation), the server will not boot. In my case I simply re-created the VM from the template, which is much faster. But if your server is valuable, you can try booting from the rescue CD. In the worst case you have that backup and can do a bare-metal restore.
After booting from the rescue CD, it will try to find your installation. In most cases it will find it and mount it under /mnt/sysimage. Then chroot into it and redo the skipped steps. This part of the article is rather theoretical, without copy-paste commands, because I did not hit this case myself.
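Still, a rough sketch of such a rescue session (untested, using the kernel version from this article; note that $(uname -r) in the rescue environment reports the rescue kernel, so spell the installed version out):

chroot /mnt/sysimage
mkinitrd -f /boot/initramfs-3.10.0-327.el7.x86_64.img 3.10.0-327.el7.x86_64
grub2-install /dev/vda   # repeat for /dev/vdb if that step was missed too
exit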