# Btrfs - `btrfs`
Btrfs is a local file system based on the copy-on-write (COW) principle: when data is modified, it is written to a new block instead of overwriting the existing data in place, which reduces the risk of data corruption. Unlike many other file systems, Btrfs is extent-based, which means that it stores data in contiguous areas of disk space called extents.
In addition to basic file system features, Btrfs offers RAID and volume management, pooling, snapshots, checksums, compression and other features.
To use Btrfs, make sure you have `btrfs-progs` installed on your machine.
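On Debian- or Ubuntu-based systems, for example, the tools can be installed and checked like this (the package name may differ on other distributions):

```shell
# Install the Btrfs user-space tools
sudo apt install btrfs-progs

# Verify that the btrfs command is available
btrfs --version
```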
## Terminology
A Btrfs file system can have subvolumes, which are named binary subtrees of the main tree of the file system with their own independent file and directory hierarchy. A Btrfs snapshot is a special type of subvolume that captures a specific state of another subvolume. Snapshots can be read-write or read-only.
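As a sketch of this terminology, subvolumes and snapshots can be managed directly with `btrfs-progs`; the mount point `/mnt/btrfs` and the subvolume names below are examples:

```shell
# Create a subvolume inside an existing Btrfs file system
sudo btrfs subvolume create /mnt/btrfs/myvol

# Create a read-only snapshot of that subvolume (-r makes it read-only)
sudo btrfs subvolume snapshot -r /mnt/btrfs/myvol /mnt/btrfs/myvol-snap

# List subvolumes; snapshots show up as subvolumes too
sudo btrfs subvolume list /mnt/btrfs
```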
## `btrfs` driver in LXD

The `btrfs` driver in LXD uses a subvolume per instance, image and snapshot. When creating a new entity (for example, launching a new instance), it creates a Btrfs snapshot.
Btrfs doesn’t natively support storing block devices. Therefore, when using Btrfs for VMs, LXD creates a big file on disk to store the VM. This approach is not very efficient and might cause issues when creating snapshots.
Btrfs can be used as a storage backend inside a container in a nested LXD environment. In this case, the parent container itself must use Btrfs. Note, however, that the nested LXD setup does not inherit the Btrfs quotas from the parent.
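A minimal sketch of such a nested setup, assuming a parent pool named `btrfs-pool` already exists and using an example image and container name:

```shell
# The parent container must be backed by Btrfs storage and allow nesting
lxc launch ubuntu:24.04 outer -s btrfs-pool -c security.nesting=true

# Inside the container, initialize the nested LXD with a Btrfs pool
lxc exec outer -- lxd init --auto --storage-backend btrfs
```

Remember that any quotas set on `btrfs-pool` in the parent do not constrain the nested LXD's pool.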
## Quotas
Btrfs supports storage quotas via qgroups. Btrfs qgroups are hierarchical, but new subvolumes will not automatically be added to the qgroups of their parent subvolumes. This means that users can trivially escape any quotas that are set. Therefore, if strict quotas are needed, you should consider using a different storage driver (for example, ZFS with `refquota` or LVM with Btrfs on top).
When using quotas, you must take into account that Btrfs extents are immutable. When blocks are written, they end up in new extents. The old extents remain until all their data is dereferenced or rewritten. This means that a quota can be reached even if the total amount of space used by the current files in the subvolume is smaller than the quota.
**Note**

This issue is seen most often when using VMs on Btrfs, due to the random I/O nature of using raw disk image files on top of a Btrfs subvolume.

Therefore, you should never use VMs with Btrfs storage pools.

If you really need to use VMs with Btrfs storage pools, set the instance root disk's `size.state` property to twice the size of the root disk's size. Setting the `btrfs.mount_options` storage pool option to `compress-force` can also help.
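For example, with a hypothetical VM `vm1` and pool `pool1`, the two mitigations above could be applied like this:

```shell
# Give the VM's state volume headroom: set size.state on the root disk device
lxc config device set vm1 root size.state=20GiB

# Force compression in the pool's mount options
lxc storage set pool1 btrfs.mount_options compress-force
```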
## Configuration options

### Storage pool configuration
* `btrfs.mount_options`: Mount options for block devices
**Type:** string | **Default:** `user_subvol_rm_allowed` | **Scope:** global
* `size`: Size of the storage pool (for loop-based pools)
**Type:** string | **Default:** auto (20% of free disk space, >= 5 GiB and <= 30 GiB) | **Scope:** local
* `source`: Path to an existing block device, loop file, or Btrfs subvolume
**Type:** string | **Scope:** local
* `source.wipe`: Whether to wipe the block device before creating the pool
**Type:** bool | **Default:** `false` | **Scope:** local
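These options are passed as key-value pairs when creating a pool; the pool names and the device path below are examples:

```shell
# Loop-based pool with an explicit size
lxc storage create pool1 btrfs size=30GiB

# Pool backed by an existing block device, wiping it first
lxc storage create pool2 btrfs source=/dev/sdX source.wipe=true

# Inspect the resulting configuration
lxc storage show pool1
```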
**Tip**

You can also set default values for the storage volume configurations.
### Storage volume configuration
* `security.shared`: Enable volume sharing
**Type:** bool | **Default:** same as `volume.security.shared` or `false`
* `security.shifted`: Enable ID shifting overlay
**Type:** bool | **Default:** same as `volume.security.shifted` or `false`
* `security.unmapped`: Disable ID mapping for the volume
**Type:** bool | **Default:** same as `volume.security.unmapped` or `false`
* `size`: Size/quota of the storage volume
**Type:** string | **Default:** same as `volume.size`
* `snapshots.expiry`: When snapshots are to be deleted
**Type:** string | **Default:** same as `volume.snapshots.expiry`
* `snapshots.pattern`: Template for the snapshot name
**Type:** string | **Default:** same as `volume.snapshots.pattern` or `snap%d`
**Example:** `{{ creation_date|date:'2006-01-02_15-04-05' }}`
* `snapshots.schedule`: Schedule for automatic volume snapshots
**Type:** string | **Default:** same as `volume.snapshots.schedule`
* `volatile.idmap.last`: JSON-serialized UID/GID map that has been applied
**Type:** string
* `volatile.idmap.next`: JSON-serialized UID/GID map that is to be applied to the volume
**Type:** string
* `volatile.uuid`: The volume’s UUID
**Type:** string | **Default:** random UUID | **Scope:** global
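As a usage sketch, the non-volatile options above can be set when creating a custom volume or afterwards; the pool and volume names are examples:

```shell
# Custom volume with a quota
lxc storage volume create pool1 vol1 size=10GiB

# Expire snapshots after a week and change the snapshot name template
lxc storage volume set pool1 vol1 snapshots.expiry 7d
lxc storage volume set pool1 vol1 snapshots.pattern backup%d
```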
### Storage bucket configuration
To enable S3 protocol access for storage buckets, configure [`core.storage_buckets_address`](../server/#server-core:core.storage_buckets_address).
* `size`: Size/quota of the storage bucket
**Type:** string | **Default:** same as `volume.size` | **Scope:** local
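A minimal sketch of enabling bucket access and creating a bucket with a quota, assuming example names and a port of your choosing:

```shell
# Expose the S3 endpoint on all addresses, port 8555
lxc config set core.storage_buckets_address :8555

# Create a bucket on the pool with a size quota
lxc storage bucket create pool1 bucket1 size=5GiB
```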