LVM - lvm
LVM is a storage management framework rather than a file system. It is used to manage physical storage devices, allowing you to create a number of logical storage volumes that use and virtualize the underlying physical storage devices.
Note that it is possible to over-commit the physical storage in the process, to allow flexibility for scenarios where not all available storage is in use at the same time.
To use LVM, make sure you have lvm2 installed on your machine.
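For example, on Debian- or Ubuntu-based systems the package can typically be installed through the package manager (the package name might differ on other distributions):

    sudo apt install lvm2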
Terminology
LVM can combine several physical storage devices into a volume group. You can then allocate logical volumes of different types from this volume group.
One supported volume type is a thin pool, which allows over-committing the resources by creating thinly provisioned volumes whose total allowed maximum size (quota) is larger than the available physical storage. Another type is a volume snapshot, which captures a specific state of a logical volume.
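To illustrate this terminology outside of LXD (LXD performs the equivalent steps itself when it creates an LVM storage pool), the following sketch uses the plain LVM tools; the device path /dev/sdb and the names vg0, tp0 and thinvol are placeholders:

    # Register the physical device with LVM
    sudo pvcreate /dev/sdb
    # Combine one or more physical devices into a volume group
    sudo vgcreate vg0 /dev/sdb
    # Allocate a thin pool inside the volume group
    sudo lvcreate --size 10G --thinpool tp0 vg0
    # Create a thinly provisioned volume whose quota can exceed the physical space
    sudo lvcreate --name thinvol --virtualsize 20G --thinpool tp0 vg0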
lvm driver in LXD
The lvm driver in LXD uses logical volumes for images, and volume snapshots for instances and snapshots.
LXD assumes that it has full control over the volume group. Therefore, you should not maintain any file system entities that are not owned by LXD in an LVM volume group, because LXD might delete them. However, if you need to reuse an existing volume group (for example, because your setup has only one volume group), you can do so by setting the lvm.vg.force_reuse configuration.
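For example, to create a pool on top of an existing, non-empty volume group (the pool and volume group names below are placeholders), something like the following can be used:

    lxc storage create my-pool lvm source=my-vg lvm.vg.force_reuse=true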
By default, LVM storage pools use an LVM thin pool and create logical volumes for all LXD storage entities (images, instances and custom volumes) in there. This behavior can be changed by setting lvm.use_thinpool to false when you create the pool. In this case, LXD uses “normal” logical volumes for all storage entities that are not snapshots. Note that this entails serious performance and space reductions for the lvm driver (close to the dir driver both in speed and storage usage). The reason for this is that most storage operations must fall back to using rsync, because logical volumes that are not thin pools do not support snapshots of snapshots. In addition, non-thin snapshots take up much more storage space than thin snapshots, because they must reserve space for their maximum size (quota) at creation time. Therefore, this option should only be chosen if the use case requires it.
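As a sketch, a pool that uses regular logical volumes instead of a thin pool could be created as follows (the pool name is a placeholder):

    lxc storage create my-pool lvm lvm.use_thinpool=false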
For environments with a high instance turnover (for example, continuous integration) you should tweak the backup retain_min and retain_days settings in /etc/lvm/lvm.conf to avoid slowdowns when interacting with LXD.
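These settings live in the backup section of /etc/lvm/lvm.conf; the values below are only an illustration and should be adapted to your environment:

    backup {
        # Keep at least this many archived metadata files
        retain_min = 10
        # Do not enforce a day-based minimum for keeping archives
        retain_days = 0
    }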
Configuration options
The following configuration options are available for storage pools that use the lvm driver and for storage volumes in these pools.
Storage pool configuration
lvm.thinpool_metadata_size
The size of the thin pool metadata volume
Key: lvm.thinpool_metadata_size
Type: string
Default: 0 (auto)
Scope: global
By default, LVM calculates an appropriate size.
lvm.thinpool_name
Thin pool where volumes are created
Key: lvm.thinpool_name
Type: string
Default: LXDThinPool
Scope: local
lvm.use_thinpool
Whether the storage pool uses a thin pool for logical volumes
Key: lvm.use_thinpool
Type: bool
Default: true
Scope: global
lvm.vg.force_reuse
Force using an existing non-empty volume group
Key: lvm.vg.force_reuse
Type: bool
Default: false
Scope: global
lvm.vg_name
Name of the volume group to create
Key: lvm.vg_name
Type: string
Default: name of the pool
Scope: local
rsync.bwlimit
Upper limit on the socket I/O for rsync
Key: rsync.bwlimit
Type: string
Default: 0 (no limit)
Scope: global
When rsync must be used to transfer storage entities, this option specifies the upper limit to be placed on the socket I/O.
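For example, to limit the bandwidth that rsync may use on an existing pool (the pool name and the limit value are placeholders for this sketch):

    lxc storage set my-pool rsync.bwlimit 100MiB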
rsync.compression
Whether to use compression while migrating storage pools
Key: rsync.compression
Type: bool
Default: true
Scope: global
size
Size of the storage pool (for loop-based pools)
Key: size
Type: string
Default: auto (20% of free disk space, >= 5 GiB and <= 30 GiB)
Scope: local
When creating loop-based pools, specify the size in bytes (suffixes are supported). You can increase the size to grow the storage pool.
The default (auto) creates a storage pool that uses 20% of the free disk space, with a minimum of 5 GiB and a maximum of 30 GiB.
source
Path to an existing block device, loop file, or LVM volume group
Key: source
Type: string
Scope: local
source.wipe
Whether to wipe the block device before creating the pool
Key: source.wipe
Type: bool
Default: false
Scope: local
Set this option to true to wipe the block device specified in source prior to creating the storage pool.
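As a sketch of how these pool-level options fit together (the pool names and the device path are placeholders), a loop-backed pool and a pool on a dedicated disk could be created as follows:

    # Loop-backed pool with an explicit size
    lxc storage create my-pool lvm size=30GiB
    # Pool on a dedicated block device, wiping it first
    lxc storage create my-disk-pool lvm source=/dev/sdb source.wipe=true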
Tip
In addition to these configurations, you can also set default values for the storage volume configurations. See Configure default values for storage volumes.
Storage volume configuration
block.filesystem
File system of the storage volume
Key: block.filesystem
Type: string
Default: same as volume.block.filesystem
Condition: block-based volume with content type filesystem
Scope: global
Valid options are: btrfs, ext4, xfs
If not set, ext4 is assumed.
block.mount_options
Mount options for block-backed file system volumes
Key: block.mount_options
Type: string
Default: same as volume.block.mount_options
Condition: block-based volume with content type filesystem
Scope: global
lvm.stripes
Number of stripes to use for new volumes (or thin pool volume)
Key: lvm.stripes
Type: string
Default: same as volume.lvm.stripes
Scope: global
lvm.stripes.size
Size of stripes to use
Key: lvm.stripes.size
Type: string
Default: same as volume.lvm.stripes.size
Scope: global
The size must be at least 4096 bytes, and a multiple of 512 bytes.
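For example, to create a custom volume striped across two physical volumes (pool and volume names are placeholders; this assumes the volume group spans at least two physical volumes):

    lxc storage volume create my-pool my-volume lvm.stripes=2 lvm.stripes.size=64KiB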
security.shared
Enable volume sharing
Key: security.shared
Type: bool
Default: same as volume.security.shared or false
Condition: virtual-machine or custom block volume
Scope: global
Enabling this option allows sharing the volume across multiple instances despite the possibility of data loss.
security.shifted
Enable ID shifting overlay
Key: security.shifted
Type: bool
Default: same as volume.security.shifted or false
Condition: custom volume
Scope: global
Enabling this option allows attaching the volume to multiple isolated instances.
security.unmapped
Disable ID mapping for the volume
Key: security.unmapped
Type: bool
Default: same as volume.security.unmapped or false
Condition: custom volume
Scope: global
size
Size/quota of the storage volume
Key: size
Type: string
Default: same as volume.size
Condition: appropriate driver
Scope: global
snapshots.expiry
When snapshots are to be deleted
Key: snapshots.expiry
Type: string
Default: same as volume.snapshots.expiry
Condition: custom volume
Scope: global
Specify an expression like 1M 2H 3d 4w 5m 6y.
snapshots.pattern
Template for the snapshot name
Key: snapshots.pattern
Type: string
Default: same as volume.snapshots.pattern or snap%d
Condition: custom volume
Scope: global
You can specify a naming template that is used for scheduled snapshots and unnamed snapshots.
The snapshots.pattern option takes a Pongo2 template string to format the snapshot name.
To add a time stamp to the snapshot name, use the Pongo2 context variable creation_date. Make sure to format the date in your template string to avoid forbidden characters in the snapshot name. For example, set snapshots.pattern to {{ creation_date|date:'2006-01-02_15-04-05' }} to name the snapshots after their time of creation, down to the precision of a second.
Another way to avoid name collisions is to use the placeholder %d in the pattern. For the first snapshot, the placeholder is replaced with 0. For subsequent snapshots, the existing snapshot names are taken into account to find the highest number at the placeholder’s position. This number is then incremented by one for the new name.
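For example, to apply the time-stamp-based pattern described above to a custom volume (pool and volume names are placeholders):

    lxc storage volume set my-pool my-volume snapshots.pattern "{{ creation_date|date:'2006-01-02_15-04-05' }}"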
snapshots.schedule
Schedule for automatic volume snapshots
Key: snapshots.schedule
Type: string
Default: same as volume.snapshots.schedule
Condition: custom volume
Scope: global
Specify either a cron expression (<minute> <hour> <dom> <month> <dow>), a comma-separated list of schedule aliases (@hourly, @daily, @midnight, @weekly, @monthly, @annually, @yearly), or leave empty to disable automatic snapshots (the default).
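For example, to snapshot a custom volume every day at midnight, either of the following forms should work (pool and volume names are placeholders):

    lxc storage volume set my-pool my-volume snapshots.schedule "@daily"
    lxc storage volume set my-pool my-volume snapshots.schedule "0 0 * * *"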
volatile.idmap.last
JSON-serialized UID/GID map that has been applied to the volume
Key: volatile.idmap.last
Type: string
Condition: filesystem
volatile.idmap.next
JSON-serialized UID/GID map that is to be applied to the volume
Key: volatile.idmap.next
Type: string
Condition: filesystem
volatile.uuid
The volume’s UUID
Key: volatile.uuid
Type: string
Default: random UUID
Scope: global
Storage bucket configuration
To enable storage buckets for local storage pool drivers and allow applications to access the buckets via the S3 protocol, you must configure the core.storage_buckets_address server setting.
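For example, to expose the S3 API on port 8555 and create a bucket on this pool (the port, pool name, and bucket name are placeholders):

    lxc config set core.storage_buckets_address :8555
    lxc storage bucket create my-pool my-bucket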
size
Size/quota of the storage bucket
Key: size
Type: string
Default: same as volume.size
Condition: appropriate driver
Scope: local