Linux LVM: Logical Volume Manager
Introduction
The traditional approach to dealing with storage is to put one or more formatted partitions (or swap space) on a single device.
These partitions are then used by the operating system for creating mount points (generally managed by the filesystem table in /etc/fstab).
The traditional partitioning approach has several disadvantages. Particularly, the size of mounted volumes is restricted by the size of the underlying device. This makes it impossible to easily scale storage capacity. Moreover, it is not trivial to create striped or RAID volumes, in case performance or redundancy are desired.
In contrast, the Logical Volume Manager (LVM) in Linux abstracts from physical devices and instead differentiates between Logical Volumes (LVs), Volume Groups (VGs), and Physical Volumes (PVs). A VG can span multiple PVs, but a single PV cannot host more than one VG. Similarly, a VG can host multiple LVs, but a single LV cannot span multiple VGs. This is illustrated by the following Figure.
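In terms of commands, this hierarchy is built bottom-up. The following is a minimal sketch, in which the spare disk /dev/sdX, the VG name my-vg, and the LV name my-lv are all hypothetical:
pvcreate /dev/sdX              # initialize a disk (or partition) as a PV
vgcreate my-vg /dev/sdX        # create a VG backed by that PV
lvcreate -n my-lv -L 5g my-vg  # carve a 5G LV out of the VG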
Especially when dealing with virtual machines, which may have been assigned modestly-sized disk space, it is not uncommon to run out of storage space at some point. Luckily, extending disks with LVM is easy!
⚠️ Careful: Do not move beyond this point without having a (functional!) backup! ⚠️
Basic usage of LVM
In what follows, we consider a virtual Debian 11 host running on a Proxmox hypervisor.
The machine was set up with guided LVM and a separate home partition on a disk of size 20G.
Running lsblk provides a clear look at the storage layout.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 488M 0 part /boot
└─sda3 8:3 0 19G 0 part
├─lvm--test--vg-root 254:0 0 3.5G 0 lvm /
├─lvm--test--vg-swap_1 254:1 0 976M 0 lvm [SWAP]
└─lvm--test--vg-home 254:2 0 5.1G 0 lvm /home
Querying PVs, VGs, and LVs
We can use pvs, vgs, and lvs to list the PVs, VGs, and LVs.
If necessary, pvdisplay, vgdisplay, and lvdisplay provide more detailed output.
pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 lvm-test-vg lvm2 a-- <19.02g 9.51g
vgs
VG #PV #LV #SN Attr VSize VFree
lvm-test-vg 1 3 0 wz--n- <19.02g 9.51g
lvs
LV VG Attr LSize
home lvm-test-vg -wi-ao---- 5.08g
root lvm-test-vg -wi-ao---- 3.47g
swap_1 lvm-test-vg -wi-ao---- 976.00m
Our VG is made up of a single PV, contains three LVs, and has zero snapshots.
As our VG currently consists of only a single PV, their sizes and free space match exactly.
Moreover, our logical volumes home, root, and swap_1 all belong to the VG lvm-test-vg.
Take note that not all of the VG's (and thus the PV's) space has currently been assigned to LVs.
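To see exactly how much unallocated space remains in the VG, the relevant columns can also be queried directly; one way to do it (a sketch, output omitted) is:
vgs -o vg_name,vg_size,vg_free lvm-test-vg  # show only the name, total size, and free size of the VG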
Creating new LVs
For now, we are only considering the simplest case where our system owns a single disk.
As long as our VG has free space, we can assign new LVs to it.
In order to create a new LV on an existing VG, we use one of the following lvcreate commands to assign the corresponding amount of volume space.
Note that the second command fails, because we cannot assign 100% of the VG's space to a new LV when the VG is already partially occupied by other LVs (see above).
lvcreate -n test -L1g lvm-test-vg # 1G to the new LV
lvcreate -n test -l100%VG lvm-test-vg # 100% of VG space to LV
lvcreate -n test -l100%FREE lvm-test-vg # 100% of free VG space to LV
Applying the first command reduces the free space listed by pvs and vgs from 9.51g to 8.51g.
We can also confirm that lvs successfully lists the newly created volume, which has not yet been mounted.
LV VG Attr LSize
home lvm-test-vg -wi-ao---- 5.08g
root lvm-test-vg -wi-ao---- 3.47g
swap_1 lvm-test-vg -wi-ao---- 976.00m
test lvm-test-vg -wi-a----- 1.00g
Formatting the newly created LV and mounting it is done as follows.
mkfs.ext4 /dev/mapper/lvm--test--vg-test
mkdir $HOME/data
mount /dev/lvm-test-vg/test $HOME/data
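To make the mount persistent across reboots, a corresponding entry can be added to /etc/fstab; the following line is only a sketch and assumes that $HOME resolves to /root:
/dev/mapper/lvm--test--vg-test  /root/data  ext4  defaults  0  2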
Let us now remove all remaining free space from the VG (and thus the PV) by first extending the home LV with 50% of the free VG space, and subsequently extending the root LV with 100% of the remaining free VG space.
lvextend /dev/lvm-test-vg/home -l +50%FREE
lvextend /dev/lvm-test-vg/root -l +100%FREE
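Note that lvextend only grows the LV itself, not the filesystem on top of it. For ext4 (the Debian default used here), the filesystem can be grown afterwards with resize2fs, or in one step by passing -r (--resizefs) to lvextend; a sketch:
lvextend -r /dev/lvm-test-vg/home -l +50%FREE  # extend the LV and resize its filesystem in one step
resize2fs /dev/lvm-test-vg/root                # or grow the filesystem after a plain lvextend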
Increasing the size of VGs
As noted in the introduction, a single VG can be made up of multiple PVs. Therefore, in order to increase the size of a given VG, we have two primary options:
- Increase the size of an existing PV.
- Add a new PV to the VG.
The hypervisor Proxmox on which our Debian machine resides also relies on LVM.
To extend a disk of a virtual machine, we run the following command on the Proxmox hypervisor, where 300 corresponds to the ID of the virtual machine and scsi0 to the device which we are resizing.
Conceptually, this is equivalent to doing a block-by-block copy of a physical disk onto a new disk that is 5G larger.
qm resize 300 scsi0 +5G
Moreover, we attach a new disk scsi1 with a size of 10G to the virtual machine.
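On the hypervisor, attaching such a disk can again be done with qm; the following sketch assumes a Proxmox storage named local-lvm, which is purely an assumption about the setup:
qm set 300 --scsi1 local-lvm:10  # allocate a new 10G volume on local-lvm and attach it as scsi1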
After rebooting our Debian machine, the output of lsblk confirms that sda gained 5G in size and that a new device sdb with a size of 10G was attached to the host.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 25G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 488M 0 part /boot
└─sda3 8:3 0 19G 0 part
├─lvm--test--vg-root 254:0 0 7.7G 0 lvm /
├─lvm--test--vg-swap_1 254:1 0 976M 0 lvm [SWAP]
├─lvm--test--vg-home 254:2 0 9.3G 0 lvm /home
└─lvm--test--vg-test 254:3 0 1G 0 lvm
sdb 8:16 0 10G 0 disk
Increasing the size of a PV
While the size of the block device sda increased as expected, pvs shows that the PV associated with the LVM still has no free space.
PV VG Fmt Attr PSize PFree
/dev/sda3 lvm-test-vg lvm2 a-- <19.02g 0
This occurs because the underlying partition /dev/sda3 has not yet been extended.
To fix this issue, we run parted.
parted /dev/sda
GNU Parted 3.4
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Warning: Not all of the space available to /dev/sda appears to be used,
you can fix the GPT to use all of the space (an extra 10485760 blocks)
or continue with the current setting?
Fix/Ignore? F
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 26.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB fat32 boot, esp
2 538MB 1050MB 512MB ext2
3 1050MB 21.5GB 20.4GB lvm
(parted) resizepart 3 100%
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 26.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB fat32 boot, esp
2 538MB 1050MB 512MB ext2
3 1050MB 26.8GB 25.8GB lvm
(parted) quit
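As an aside, the same partition resize can be performed non-interactively with growpart from the cloud-guest-utils package; we did not use it above, so the following is only a sketch:
growpart /dev/sda 3  # grow partition 3 of /dev/sda to fill the available space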
After extending the partition, we must also resize the PV by executing pvresize /dev/sda3.
Running pvs reveals that the PV now spans the entire (resized) partition, and we gained the expected 5G of new space.
PV VG Fmt Attr PSize PFree
/dev/sda3 lvm-test-vg lvm2 a-- <24.02g 5.00g
Adding another PV to an existing VG
We already confirmed that /dev/sdb corresponds to the second virtual disk that we attached to our Debian machine. Running lvmdiskscan yields the following output.
/dev/sda1 [ 512.00 MiB]
/dev/sda2 [ 488.00 MiB]
/dev/sda3 [ 24.02 GiB] LVM physical volume
/dev/sdb [ 10.00 GiB]
1 disk
2 partitions
0 LVM physical volume whole disks
1 LVM physical volume
vgs
VG #PV #LV #SN Attr VSize VFree
lvm-test-vg 1 4 0 wz--n- <24.02g 5.00g
We extend the existing VG lvm-test-vg with another PV on /dev/sdb by running vgextend lvm-test-vg /dev/sdb.
Physical volume "/dev/sdb" successfully created.
Volume group "lvm-test-vg" successfully extended
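As the output shows, vgextend created the PV on /dev/sdb on the fly. The same step can also be performed explicitly beforehand, for example:
pvcreate /dev/sdb  # initialize the whole disk as a PV before extending the VG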
Using vgs, we can confirm that the PV has been successfully attached.
VG #PV #LV #SN Attr VSize VFree
lvm-test-vg 2 4 0 wz--n- <34.02g <15.00g
To understand where the free space in the VG comes from, we run pvs.
PV VG Fmt Attr PSize PFree
/dev/sda3 lvm-test-vg lvm2 a-- <24.02g 5.00g
/dev/sdb lvm-test-vg lvm2 a-- <10.00g <10.00g
Overall, this is exactly what we expected! Our VG gained 5G from extending /dev/sda and 10G from adding a second PV /dev/sdb to the VG.
We could now further extend existing LVs or create new ones in the same VG.
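For instance, growing the root LV by another 5G would be a one-liner along the following lines (a sketch; -r assumes a filesystem such as ext4 that can be resized online):
lvextend -r -L +5G /dev/lvm-test-vg/root  # grow root by 5G and resize its filesystem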
This is what lvmdiskscan shows now.
Take note that the newly created PV occupies the whole underlying disk.
/dev/sda1 [ 512.00 MiB]
/dev/sda2 [ 488.00 MiB]
/dev/sda3 [ 24.02 GiB] LVM physical volume
/dev/sdb [ 10.00 GiB] LVM physical volume
0 disks
2 partitions
1 LVM physical volume whole disk
1 LVM physical volume
⚠️ Of course, if the actual devices underlying /dev/sda and /dev/sdb are distinct physical disks, then we have essentially introduced the same problem as with a striped volume, where our entire filesystem may fail if one of the PVs fails! ⚠️
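If redundancy across two PVs is desired instead, LVM can also create mirrored volumes; the following is only a sketch (the LV name mirrored is hypothetical, and we did not run this above):
lvcreate --type raid1 -m 1 -n mirrored -L 1g lvm-test-vg  # keep two copies of the data, one per PV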
Conclusion
Managing PVs, VGs, and LVs with the LVM follows a straightforward process. Similar considerations (and commands) apply when reducing the size of an LV, VG, or PV. In this post, we did not touch on one of the main reasons for using the LVM, apart from gaining increased abstraction: the possibility to take snapshots! 🚀