Instance options
Instance options are configuration options that are directly related to the instance.
See Configure instance options for instructions on how to set the instance options.
The key/value configuration is namespaced. The following options are available:
- Miscellaneous options
- Boot-related options
- cloud-init configuration
- Resource limits
- Migration options
- NVIDIA and CUDA configuration
- Raw instance configuration overrides
- Security policies
- Snapshot scheduling and configuration
- Volatile internal data
Note that while a type is defined for each option, all values are stored as strings and should be exported over the REST API as strings (which makes it possible to support any extra values without breaking backward compatibility).
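For example, to set one of these options on an existing instance, pass the namespaced key and its value to lxc config set (the instance name c1 is only a placeholder):
lxc config set c1 limits.memory=2GiB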
Miscellaneous options
In addition to the configuration options listed in the following sections, these instance options are supported:
agent.nic_config
Whether to use the name and MTU of the default network interfaces
- Key: agent.nic_config
- Type: bool
- Default: false
- Live update: no
- Condition: virtual machine
When set to true, the name and MTU of the default network interfaces inside the virtual machine will match those of the instance devices.
cluster.evacuate
What to do when evacuating the instance
- Key: cluster.evacuate
- Type: string
- Default: auto
- Live update: no
The cluster.evacuate option provides control over how instances are handled when a cluster member is being evacuated.
Available modes:
- auto (default): The system automatically decides the best evacuation method based on the instance’s type and configured devices:
  - If any device is not suitable for migration, the instance is not migrated (only stopped).
  - Live migration is used only for virtual machines that have the migration.stateful setting enabled and for which all devices can also be migrated.
- live-migrate: Instances are live-migrated to another node. This means the instance remains running and operational during the migration process, ensuring minimal disruption.
- migrate: In this mode, instances are migrated to another node in the cluster. The migration process is not live, meaning there is a brief downtime for the instance during the migration.
- stop: Instances are not migrated. Instead, they are stopped on the current node.
See Evacuate a cluster member for more information.
linux.kernel_modules
Kernel modules to load or allow loading
- Key: linux.kernel_modules
- Type: string
- Live update: yes
- Condition: container
Specify the kernel modules as a comma-separated list.
The modules are loaded before the instance starts, or they can be loaded by a privileged user if linux.kernel_modules.load is set to ondemand.
linux.kernel_modules.load
How to load kernel modules
- Key: linux.kernel_modules.load
- Type: string
- Default: boot
- Live update: no
- Condition: container
This option specifies how to load the kernel modules that are specified in linux.kernel_modules.
Possible values are boot (load the modules when booting the container) and ondemand (intercept the finit_modules() syscall and allow a privileged user in the container’s user namespace to load the modules).
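For example, to make two modules available to a container and have them loaded on demand instead of at boot (the module names are only illustrative):
lxc config set c1 linux.kernel_modules=ip_vs,nf_nat
lxc config set c1 linux.kernel_modules.load=ondemand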
linux.sysctl.*
Override for the corresponding sysctl setting in the container
- Key: linux.sysctl.*
- Type: string
- Live update: no
- Condition: container
ubuntu_pro.guest_attach
Whether to auto-attach Ubuntu Pro
- Key: ubuntu_pro.guest_attach
- Type: string
- Live update: no
Indicates whether the guest should auto-attach Ubuntu Pro at startup.
The allowed values are off, on, and available.
If set to off, the Ubuntu Pro client in the guest cannot obtain a guest token via devlxd.
If set to available, attachment via guest token is possible but is not performed automatically by the Ubuntu Pro client in the guest at startup.
If set to on, attachment is performed automatically by the Ubuntu Pro client in the guest at startup.
To allow guest attachment, the host must be an Ubuntu machine that is Pro attached, and guest attachment must be enabled via the Pro client.
To do this, run pro config set lxd_guest_attach=on.
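For example, assuming a Pro-attached host with guest attachment enabled, you can turn on automatic attachment for a particular instance with:
lxc config set my-vm ubuntu_pro.guest_attach=on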
user.*
Free-form user key/value storage
- Key: user.*
- Type: string
- Live update: no
User keys can be used in search.
environment.*
Environment variables for the instance
- Key: environment.*
- Type: string
- Live update: yes (exec)
You can export key/value environment variables to the instance.
These are then set for lxc exec.
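For example, to export a proxy variable and check it from an exec session (the proxy URL is only a placeholder):
lxc config set c1 environment.HTTP_PROXY=http://proxy.example.com:3128
lxc exec c1 -- env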
Boot-related options
The following instance options control the boot-related behavior of the instance:
boot.autostart
Whether to always start the instance when LXD starts
- Key: boot.autostart
- Type: bool
- Live update: no
If this option is not set, the last state of the instance is restored.
boot.autostart.delay
Delay after starting the instance
- Key: boot.autostart.delay
- Type: integer
- Default: 0
- Live update: no
The number of seconds to wait after the instance started before starting the next one.
boot.autostart.priority
What order to start the instances in
- Key: boot.autostart.priority
- Type: integer
- Default: 0
- Live update: no
The instance with the highest value is started first.
boot.debug_edk2
Enable debug version of the edk2
- Key: boot.debug_edk2
- Type: bool
When set to true, the instance uses a debug version of the edk2 firmware.
A log file can be found in $LXD_DIR/logs/<instance_name>/edk2.log.
boot.host_shutdown_timeout
How long to wait for the instance to shut down
- Key: boot.host_shutdown_timeout
- Type: integer
- Default: 30
- Live update: yes
Number of seconds to wait for the instance to shut down before it is force-stopped.
boot.stop.priority
What order to shut down the instances in
- Key: boot.stop.priority
- Type: integer
- Default: 0
- Live update: no
The instance with the highest value is shut down first.
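For example, to start a database instance first when LXD starts and delay the remaining autostarted instances by ten seconds (the names and values are only illustrative):
lxc config set db1 boot.autostart=true boot.autostart.priority=10 boot.autostart.delay=10
lxc config set web1 boot.autostart=true boot.autostart.priority=5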
cloud-init configuration
The following instance options control the cloud-init configuration of the instance:
cloud-init.network-config
Network configuration for cloud-init
- Key: cloud-init.network-config
- Type: string
- Default: DHCP on eth0
- Live update: no
- Condition: If supported by image
The content is used as seed value for cloud-init.
cloud-init.ssh-keys.KEYNAME
Additional SSH key to be injected on the instance by cloud-init
- Key: cloud-init.ssh-keys.KEYNAME
- Type: string
- Live update: no
- Condition: If supported by image
Represents an additional SSH public key to be merged into existing cloud-init seed data and injected into an instance.
Has the format {user}:{key}, where {user} is a Linux username and {key} can be either a pure SSH public key or an import ID for a key hosted elsewhere.
For example: root:gh:githubUser or myUser:ssh-keyAlg publicKeyHash.
cloud-init.user-data
User data for cloud-init
- Key: cloud-init.user-data
- Type: string
- Default: #cloud-config
- Live update: no
- Condition: If supported by image
The content is used as seed value for cloud-init.
cloud-init.vendor-data
Vendor data for cloud-init
- Key: cloud-init.vendor-data
- Type: string
- Default: #cloud-config
- Live update: no
- Condition: If supported by image
The content is used as seed value for cloud-init.
user.network-config
Legacy version of cloud-init.network-config
- Key: user.network-config
- Type: string
- Default: DHCP on eth0
- Live update: no
- Condition: If supported by image
user.user-data
Legacy version of cloud-init.user-data
- Key: user.user-data
- Type: string
- Default: #cloud-config
- Live update: no
- Condition: If supported by image
user.vendor-data
Legacy version of cloud-init.vendor-data
- Key: user.vendor-data
- Type: string
- Default: #cloud-config
- Live update: no
- Condition: If supported by image
Support for these options depends on the image that is used and is not guaranteed.
If you specify both cloud-init.user-data and cloud-init.vendor-data, the content of both options is merged.
Therefore, make sure that the cloud-init configuration you specify in those options does not contain the same keys.
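For example, a minimal cloud-config seed could be applied from a local file (the file name and its contents are only an illustration):
lxc config set c1 cloud-init.user-data="$(cat user-data.yml)"
where user-data.yml contains something like:
#cloud-config
packages:
  - curl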
Resource limits
The following instance options specify resource limits for the instance:
limits.cpu
Which CPUs to expose to the instance
- Key: limits.cpu
- Type: string
- Default: 1 (VMs)
- Live update: yes
A number or a specific range of CPUs to expose to the instance.
See CPU pinning for more information.
limits.cpu.allowance
How much of the CPU can be used
- Key: limits.cpu.allowance
- Type: string
- Default: 100%
- Live update: yes
- Condition: container
To control how much of the CPU can be used, specify either a percentage (50%) for a soft limit or a chunk of time (25ms/100ms) for a hard limit.
See Allowance and priority (container only) for more information.
limits.cpu.nodes
Which NUMA nodes to place the instance CPUs on
- Key: limits.cpu.nodes
- Type: string
- Live update: yes
A comma-separated list of NUMA node IDs or ranges to place the instance CPUs on.
See Allowance and priority (container only) for more information.
limits.cpu.pin_strategy
VM CPU auto pinning strategy
- Key: limits.cpu.pin_strategy
- Type: string
- Default: none
- Live update: no
- Condition: virtual machine
Specify the strategy for VM CPU auto pinning.
Possible values: none (disables CPU auto pinning) and auto (enables CPU auto pinning).
See CPU limits for virtual machines for more information.
limits.cpu.priority
CPU scheduling priority compared to other instances
- Key: limits.cpu.priority
- Type: integer
- Default: 10 (maximum)
- Live update: yes
- Condition: container
When overcommitting resources, specify the CPU scheduling priority compared to other instances that share the same CPUs. Specify an integer between 0 and 10.
See Allowance and priority (container only) for more information.
limits.disk.priority
Priority of the instance’s I/O requests
- Key: limits.disk.priority
- Type: integer
- Default: 5 (medium)
- Live update: yes
Controls how much priority to give to the instance’s I/O requests when under load.
Specify an integer between 0 and 10.
limits.hugepages.1GB
Limit for the number of 1 GB huge pages
- Key: limits.hugepages.1GB
- Type: string
- Live update: yes
- Condition: container
Fixed value (in bytes) to limit the number of 1 GB huge pages. Various suffixes are supported (see Units for storage and network limits).
See Huge page limits for more information.
limits.hugepages.1MB
Limit for the number of 1 MB huge pages
- Key: limits.hugepages.1MB
- Type: string
- Live update: yes
- Condition: container
Fixed value (in bytes) to limit the number of 1 MB huge pages. Various suffixes are supported (see Units for storage and network limits).
See Huge page limits for more information.
limits.hugepages.2MB
Limit for the number of 2 MB huge pages
- Key: limits.hugepages.2MB
- Type: string
- Live update: yes
- Condition: container
Fixed value (in bytes) to limit the number of 2 MB huge pages. Various suffixes are supported (see Units for storage and network limits).
See Huge page limits for more information.
limits.hugepages.64KB
Limit for the number of 64 KB huge pages
- Key: limits.hugepages.64KB
- Type: string
- Live update: yes
- Condition: container
Fixed value (in bytes) to limit the number of 64 KB huge pages. Various suffixes are supported (see Units for storage and network limits).
See Huge page limits for more information.
limits.memory
Usage limit for the host’s memory
- Key: limits.memory
- Type: string
- Default: 1GiB (VMs)
- Live update: yes
Percentage of the host’s memory or a fixed value in bytes. Various suffixes are supported.
See Units for storage and network limits for details.
limits.memory.enforce
Whether the memory limit is hard or soft
- Key: limits.memory.enforce
- Type: string
- Default: hard
- Live update: yes
- Condition: container
If the instance’s memory limit is hard, the instance cannot exceed its limit.
If it is soft, the instance can exceed its memory limit when extra host memory is available.
limits.memory.hugepages
Whether to back the instance using huge pages
- Key: limits.memory.hugepages
- Type: bool
- Default: false
- Live update: no
- Condition: virtual machine
If this option is set to false, regular system memory is used.
limits.memory.swap
Whether to encourage/discourage swapping less used pages for this instance
- Key: limits.memory.swap
- Type: bool
- Default: true
- Live update: yes
- Condition: container
limits.memory.swap.priority
Prevents the instance from being swapped to disk
- Key: limits.memory.swap.priority
- Type: integer
- Default: 10 (maximum)
- Live update: yes
- Condition: container
Specify an integer between 0 and 10. The higher the value, the less likely the instance is to be swapped to disk.
limits.processes
Maximum number of processes that can run in the instance
- Key: limits.processes
- Type: integer
- Default: empty
- Live update: yes
- Condition: container
If left empty, no limit is set.
limits.kernel.*
Kernel resources per instance
- Key: limits.kernel.*
- Type: string
- Live update: no
- Condition: container
You can set kernel limits on an instance, for example, to limit the number of open files. See Kernel resource limits for more information.
CPU limits
You have different options to limit CPU usage:
- Set limits.cpu to restrict which CPUs the instance can see and use. See CPU pinning for how to set this option.
- Set limits.cpu.allowance to restrict the load an instance can put on the available CPUs. This option is available only for containers. See Allowance and priority (container only) for how to set this option.
- Set limits.cpu.pin_strategy to specify the strategy for virtual-machine CPU auto pinning. This option is available only for virtual machines. See CPU limits for virtual machines for how to set this option.
It is possible to set both options at the same time to restrict both which CPUs are visible to the instance and the allowed usage of those CPUs.
However, if you use limits.cpu.allowance with a time limit, you should avoid using limits.cpu in addition, because that puts a lot of constraints on the scheduler and might lead to less efficient allocations.
The CPU limits are implemented through a mix of the cpuset and cpu cgroup controllers.
CPU pinning
limits.cpu results in CPU pinning through the cpuset controller.
You can specify either which CPUs or how many CPUs are visible and available to the instance:
- To specify which CPUs to use, set limits.cpu to either a set of CPUs (for example, 1,2,3) or a CPU range (for example, 0-3). To pin to a single CPU, use the range syntax (for example, 1-1) to differentiate it from a number of CPUs.
- If you specify a number of CPUs (for example, 4), LXD will do dynamic load-balancing of all instances that aren’t pinned to specific CPUs, trying to spread the load on the machine. Instances are re-balanced every time an instance starts or stops, as well as whenever a CPU is added to the system.
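Both forms are set through the same key; for example, the first command below pins the instance to four specific CPUs, while the second asks for four load-balanced CPUs:
lxc config set c1 limits.cpu=0-3
lxc config set c1 limits.cpu=4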
CPU limits for virtual machines
Note:
LXD supports live-updating the limits.cpu option.
However, for virtual machines, this only means that the respective CPUs are hotplugged.
Depending on the guest operating system, you might need to either restart the instance or complete some manual actions to bring the new CPUs online.
LXD virtual machines default to having just one vCPU allocated, which shows up as matching the host CPU vendor and type, but has a single core and no threads.
When limits.cpu is set to a single integer, LXD allocates multiple vCPUs and exposes them to the guest as full cores.
Unless limits.cpu.pin_strategy is set to auto, those vCPUs are not pinned to specific cores on the host.
The number of vCPUs can be updated while the VM is running.
When limits.cpu is set to a range or comma-separated list of CPU IDs (as provided by lxc info --resources), the vCPUs are pinned to those cores.
In this scenario, LXD checks whether the CPU configuration lines up with a realistic hardware topology and if it does, it replicates that topology in the guest.
When doing CPU pinning, it is not possible to change the configuration while the VM is running.
For example, if the pinning configuration includes eight threads, with each pair of threads coming from the same core and an even number of cores spread across two CPUs, the guest will show two CPUs, each with two cores and each core with two threads. The NUMA layout is similarly replicated, and in this scenario, the guest would most likely end up with two NUMA nodes, one for each CPU socket.
In such an environment with multiple NUMA nodes, the memory is similarly divided across NUMA nodes and pinned accordingly on the host and then exposed to the guest.
All this allows for very high performance operations in the guest, as the guest scheduler can properly reason about sockets, cores and threads, as well as consider NUMA topology when sharing memory or moving processes across NUMA nodes.
Allowance and priority (container only)
limits.cpu.allowance drives either the CFS scheduler quotas when passed a time constraint, or the generic CPU shares mechanism when passed a percentage value:
- The time constraint (for example, 20ms/50ms) is a hard limit. For example, if you want to allow the container to use a maximum of one CPU, set limits.cpu.allowance to a value like 100ms/100ms. The value is relative to one CPU worth of time, so to restrict to two CPUs worth of time, use something like 100ms/50ms or 200ms/100ms.
- When using a percentage value, the limit is a soft limit that is applied only when under load. It is used to calculate the scheduler priority for the instance, relative to any other instance that is using the same CPU or CPUs. For example, to limit the CPU usage of the container to one CPU when under load, set limits.cpu.allowance to 100%.
limits.cpu.nodes can be used to restrict the CPUs that the instance can use to a specific set of NUMA nodes.
To specify which NUMA nodes to use, set limits.cpu.nodes to either a set of NUMA node IDs (for example, 0,1) or a set of NUMA node ranges (for example, 0-1,2-4).
limits.cpu.priority is another factor that is used to compute the scheduler priority score when a number of instances sharing a set of CPUs have the same percentage of CPU assigned to them.
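For example, to give a container a hard limit of two CPUs worth of time and a lower scheduling priority than other instances on the same CPUs (the values are only illustrative):
lxc config set c1 limits.cpu.allowance=200ms/100ms
lxc config set c1 limits.cpu.priority=5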
Huge page limits
LXD allows limiting the number of huge pages available to a container through the limits.hugepages.[size] key (for example, limits.hugepages.1MB).
Architectures often expose multiple huge-page sizes. The available huge-page sizes depend on the architecture.
Setting limits for huge pages is especially useful when LXD is configured to intercept the mount syscall for the hugetlbfs file system in unprivileged containers.
When LXD intercepts a hugetlbfs mount syscall, it mounts the hugetlbfs file system for a container with correct uid and gid values as mount options.
This makes it possible to use huge pages from unprivileged containers.
However, it is recommended to limit the number of huge pages available to the container through limits.hugepages.[size] to stop the container from being able to exhaust the huge pages available to the host.
Limiting huge pages is done through the hugetlb cgroup controller, which means that the host system must expose the hugetlb controller in the legacy or unified cgroup hierarchy for these limits to apply.
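For example, assuming the hugetlb controller is available on the host, the following caps a container at 1 GiB worth of 2 MB huge pages:
lxc config set c1 limits.hugepages.2MB=1GiB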
Kernel resource limits
For container instances, LXD exposes a generic namespaced key limits.kernel.* that can be used to set resource limits.
It is generic in the sense that LXD does not perform any validation on the resource that is specified following the limits.kernel.* prefix.
LXD cannot know about all the possible resources that a given kernel supports.
Instead, LXD simply passes down the corresponding resource key after the limits.kernel.* prefix and its value to the kernel.
The kernel does the appropriate validation.
This allows users to specify any supported limit on their system.
Some common limits are:
Key | Resource | Description |
---|---|---|
limits.kernel.as | RLIMIT_AS | Maximum size of the process’s virtual memory |
limits.kernel.core | RLIMIT_CORE | Maximum size of the process’s core dump file |
limits.kernel.cpu | RLIMIT_CPU | Limit in seconds on the amount of CPU time the process can consume |
limits.kernel.data | RLIMIT_DATA | Maximum size of the process’s data segment |
limits.kernel.fsize | RLIMIT_FSIZE | Maximum size of files the process may create |
limits.kernel.locks | RLIMIT_LOCKS | Limit on the number of file locks that this process may establish |
limits.kernel.memlock | RLIMIT_MEMLOCK | Limit on the number of bytes of memory that the process may lock in RAM |
limits.kernel.nice | RLIMIT_NICE | Maximum value to which the process’s nice value can be raised |
limits.kernel.nofile | RLIMIT_NOFILE | Maximum number of open files for the process |
limits.kernel.nproc | RLIMIT_NPROC | Maximum number of processes that can be created for the user of the calling process |
limits.kernel.rtprio | RLIMIT_RTPRIO | Maximum value on the real-time-priority that may be set for this process |
limits.kernel.sigpending | RLIMIT_SIGPENDING | Maximum number of signals that may be queued for the user of the calling process |
A full list of all available limits can be found in the manpages for the getrlimit(2)/setrlimit(2) system calls.
To specify a limit within the limits.kernel.* namespace, use the resource name in lowercase without the RLIMIT_ prefix.
For example, RLIMIT_NOFILE should be specified as nofile.
A limit is specified as two colon-separated values that are either numeric or the word unlimited (for example, limits.kernel.nofile=1000:2000).
A single value can be used as a shortcut to set both soft and hard limit to the same value (for example, limits.kernel.nofile=3000).
A resource with no explicitly configured limit will inherit its limit from the process that starts up the container. Note that this inheritance is not enforced by LXD but by the kernel.
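For example, to give a container a soft limit of 10000 and a hard limit of 20000 open files (the values are only illustrative):
lxc config set c1 limits.kernel.nofile=10000:20000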
Migration options
The following instance options control the behavior if the instance is moved from one LXD server to another:
migration.incremental.memory
Whether to use incremental memory transfer
- Key: migration.incremental.memory
- Type: bool
- Default: false
- Live update: yes
- Condition: container
Using incremental memory transfer of the instance’s memory can reduce downtime.
migration.incremental.memory.goal
Percentage of memory to have in sync before stopping the instance
- Key: migration.incremental.memory.goal
- Type: integer
- Default: 70
- Live update: yes
- Condition: container
migration.incremental.memory.iterations
Maximum number of transfer operations to go through before stopping the instance
- Key: migration.incremental.memory.iterations
- Type: integer
- Default: 10
- Live update: yes
- Condition: container
migration.stateful
Whether to allow for stateful stop/start and snapshots
- Key: migration.stateful
- Type: bool
- Default: false or value from profiles or instances.migration.stateful (if set)
- Live update: no
- Condition: virtual machine
Enabling this option prevents the use of some features that are incompatible with it.
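For example, to allow stateful stop/start and stateful snapshots for a virtual machine:
lxc config set my-vm migration.stateful=true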
NVIDIA and CUDA configuration
The following instance options specify the NVIDIA and CUDA configuration of the instance:
nvidia.driver.capabilities
What driver capabilities the instance needs
- Key: nvidia.driver.capabilities
- Type: string
- Default: compute,utility
- Live update: no
- Condition: container
The specified driver capabilities are used to set libnvidia-container NVIDIA_DRIVER_CAPABILITIES.
nvidia.require.cuda
Required CUDA version
- Key: nvidia.require.cuda
- Type: string
- Live update: no
- Condition: container
The specified version expression is used to set libnvidia-container NVIDIA_REQUIRE_CUDA.
nvidia.require.driver
Required driver version
- Key: nvidia.require.driver
- Type: string
- Live update: no
- Condition: container
The specified version expression is used to set libnvidia-container NVIDIA_REQUIRE_DRIVER.
nvidia.runtime
Whether to pass the host NVIDIA and CUDA runtime libraries into the instance
- Key: nvidia.runtime
- Type: bool
- Default: false
- Live update: no
- Condition: container
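For example, to pass the host NVIDIA and CUDA runtime libraries into a container and keep only the default driver capabilities:
lxc config set c1 nvidia.runtime=true
lxc config set c1 nvidia.driver.capabilities=compute,utility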
Raw instance configuration overrides
The following instance options allow direct interaction with the backend features that LXD itself uses:
raw.apparmor
AppArmor profile entries
Key: raw.apparmor
Type: blob
Live update: yes
The specified entries are appended to the generated profile.
raw.idmap
Raw idmap configuration
Key: raw.idmap
Type: blob
Live update: no
Condition: unprivileged container
For example: both 1000 1000
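For example, keeping in mind the warning about raw.* keys below, a host UID/GID of 1000 can be mapped straight through into an unprivileged container like this (the IDs are only illustrative):
lxc config set c1 raw.idmap="both 1000 1000"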
raw.lxc
Raw LXC configuration to be appended to the generated one
Key: raw.lxc
Type: blob
Live update: no
Condition: container
raw.qemu
Raw QEMU configuration to be appended to the generated command line
Key: raw.qemu
Type: blob
Live update: no
Condition: virtual machine
raw.qemu.conf
Addition/override to the generated qemu.conf file
Key: raw.qemu.conf
Type: blob
Live update: no
Condition: virtual machine
See Override QEMU configuration for more information.
raw.seccomp
Raw Seccomp configuration
Key: raw.seccomp
Type: blob
Live update: no
Condition: container
Important
Setting these raw.* keys might break LXD in non-obvious ways.
Therefore, you should avoid setting any of these keys.
Override QEMU configuration
For VM instances, LXD configures QEMU through a configuration file that is passed to QEMU with the -readconfig command-line option.
This configuration file is generated for each instance before boot.
It can be found at /var/log/lxd/<instance_name>/qemu.conf.
The default configuration works fine for LXD’s most common use case: modern UEFI guests with VirtIO devices.
In some situations, however, you might need to override the generated configuration.
For example:
- To run an old guest OS that doesn’t support UEFI.
- To specify custom virtual devices when VirtIO is not supported by the guest OS.
- To add devices that are not supported by LXD before the machine boots.
- To remove devices that conflict with the guest OS.
To override the configuration, set the raw.qemu.conf option.
It supports a format similar to qemu.conf, with some additions.
Since it is a multi-line configuration option, you can use it to modify multiple sections or keys.
To replace a section or key in the generated configuration file, add a section with a different value.
For example, use the following section to override the default virtio-gpu-pci GPU driver:
raw.qemu.conf: |-
[device "qemu_gpu"]
driver = "qxl-vga"
To remove a section, specify a section without any keys. For example:
raw.qemu.conf: |-
[device "qemu_gpu"]
To remove a key, specify an empty string as the value. For example:
raw.qemu.conf: |-
[device "qemu_gpu"]
driver = ""
To add a new section, specify a section name that is not present in the configuration file. The configuration file format used by QEMU allows multiple sections with the same name. Here’s a piece of the configuration generated by LXD:
[global]
driver = "ICH9-LPC"
property = "disable_s3"
value = "1"
[global]
driver = "ICH9-LPC"
property = "disable_s4"
value = "1"
To specify which section to override, specify an index. For example:
raw.qemu.conf: |-
[global][1]
value = "0"
Section indexes start at 0 (which is the default value when not specified), so the above example would generate the following configuration:
[global]
driver = "ICH9-LPC"
property = "disable_s3"
value = "1"
[global]
driver = "ICH9-LPC"
property = "disable_s4"
value = "0"
Security policies
The following instance options control the Security policies of the instance:
security.agent.metrics
Whether the lxd-agent is queried for state information and metrics
Key: security.agent.metrics
Type: bool
Default: true
Live update: no
Condition: virtual machine
security.csm
Whether to use a firmware that supports UEFI-incompatible operating systems
Key: security.csm
Type: bool
Default: false
Live update: no
Condition: virtual machine
When enabling this option, set security.secureboot to false.
security.delegate_bpf
Whether to enable eBPF delegation using BPF Token mechanism
Key: security.delegate_bpf
Type: bool
Default: false
Live update: no
Condition: unprivileged container
This option enables the BPF functionality delegation mechanism (using a BPF Token).
Note: security.delegate_bpf.cmd_types, security.delegate_bpf.map_types, security.delegate_bpf.prog_types, and security.delegate_bpf.attach_types need to be configured depending on the BPF workload in the container.
See Privilege delegation using BPF Token for more information.
security.delegate_bpf.attach_types
Which eBPF attach types to allow with delegation mechanism
Key: security.delegate_bpf.attach_types
Type: bool
Default: false
Live update: no
Condition: unprivileged container
Which eBPF program attachment types to allow with the delegation mechanism. The syntax follows the kernel syntax for the delegate_attachs bpffs mount option.
A number (bitmask) or a :-separated list of attachment types to allow can be specified.
For example, cgroup_inet_ingress allows the BPF_CGROUP_INET_INGRESS attachment type.
security.delegate_bpf.cmd_types
Which eBPF commands to allow with delegation mechanism
Key: security.delegate_bpf.cmd_types
Type: bool
Default: false
Live update: no
Condition: unprivileged container
Which eBPF commands to allow with the delegation mechanism. The syntax follows the kernel syntax for the delegate_cmds bpffs mount option. A number (bitmask) or a :-separated list of commands to allow can be specified.
For example, prog_load:map_create allows loading eBPF programs and creating eBPF maps.
Note: security.delegate_bpf.prog_types and security.delegate_bpf.map_types still need to be configured accordingly.
security.delegate_bpf.map_types
Which eBPF maps to allow with delegation mechanism
Key: security.delegate_bpf.map_types
Type: bool
Default: false
Live update: no
Condition: unprivileged container
Which eBPF map types to allow with the delegation mechanism. The syntax follows the kernel syntax for the delegate_maps bpffs mount option. A number (bitmask) or a :-separated list of map types to allow can be specified.
For example, ringbuf allows the BPF_MAP_TYPE_RINGBUF map type.
security.delegate_bpf.prog_types
Which eBPF program types to allow with delegation mechanism
Key: security.delegate_bpf.prog_types
Type: bool
Default: false
Live update: no
Condition: unprivileged container
Which eBPF program types to allow with the delegation mechanism. The syntax follows the kernel syntax for the delegate_progs bpffs mount option. A number (bitmask) or a :-separated list of program types to allow can be specified.
For example, socket_filter allows the BPF_PROG_TYPE_SOCKET_FILTER program type.
security.devlxd
Whether /dev/lxd is present in the instance
Key: security.devlxd
Type: bool
Default: true
Live update: no
See Communication between instance and host for more information.
security.devlxd.images
Controls the availability of the /1.0/images API over devlxd
Key: security.devlxd.images
Type: bool
Default: false
Live update: yes
security.idmap.base
The base host ID to use for the allocation
Key: security.idmap.base
Type: integer
Live update: no
Condition: unprivileged container
Setting this option overrides auto-detection.
security.idmap.isolated
Whether to use a unique idmap for this instance
Key: security.idmap.isolated
Type: bool
Default: false
Live update: no
Condition: unprivileged container
If specified, the idmap used for this instance is unique among instances that have this option set.
security.idmap.size
The size of the idmap to use
Key: security.idmap.size
Type: integer
Live update: no
Condition: unprivileged container
security.nesting
Whether to support running LXD (nested) inside the instance
Key: security.nesting
Type: bool
Default: false
Live update: yes
Condition: container
security.privileged
Whether to run the instance in privileged mode
Key: security.privileged
Type: bool
Default: false
Live update: no
Condition: container
See Container security for more information.
security.protection.delete
Whether to prevent the instance from being deleted
Key: security.protection.delete
Type: bool
Default: false
Live update: container
security.protection.shift
Whether to protect the file system from being UID/GID shifted
Key: security.protection.shift
Type: bool
Default: false
Live update: yes
Condition: container
Set this option to true to prevent the instance’s file system from being UID/GID shifted on startup.
security.protection.start
Whether to prevent the instance from being started
Key: security.protection.start
Type: bool
Default: false
Live update: container
security.secureboot
Whether UEFI secure boot is enabled with the default Microsoft keys
Key: security.secureboot
Type: bool
Default: true
Live update: no
Condition: virtual machine
When disabling this option, consider enabling security.csm.
security.sev
Whether AMD SEV (Secure Encrypted Virtualization) is enabled for this VM
Key: security.sev
Type: bool
Default: false
Live update: no
Condition: virtual machine
security.sev.policy.es
Whether AMD SEV-ES (SEV Encrypted State) is enabled for this VM
Key: security.sev.policy.es
Type: bool
Default: false
Live update: no
Condition: virtual machine
security.sev.session.data
The guest owner’s base64-encoded session blob
Key: security.sev.session.data
Type: string
Default: true
Live update: no
Condition: virtual machine
security.sev.session.dh
The guest owner’s base64-encoded Diffie-Hellman key
Key: security.sev.session.dh
Type: string
Default: true
Live update: no
Condition: virtual machine
security.syscalls.allow
List of syscalls to allow
Key: security.syscalls.allow
Type: string
Live update: no
Condition: container
A \n-separated list of syscalls to allow.
This list must be mutually exclusive with security.syscalls.deny*.
security.syscalls.deny
List of syscalls to deny
Key: security.syscalls.deny
Type: string
Live update: no
Condition: container
A \n-separated list of syscalls to deny.
This list must be mutually exclusive with security.syscalls.allow.
security.syscalls.deny_compat
Whether to block compat_* syscalls (x86_64 only)
Key: security.syscalls.deny_compat
Type: bool
Default: false
Live update: no
Condition: container
On x86_64, this option controls whether to block compat_* syscalls.
On other architectures, the option is ignored.
security.syscalls.deny_default
Whether to enable the default syscall deny
Key: security.syscalls.deny_default
Type: bool
Default: true
Live update: no
Condition: container
security.syscalls.intercept.bpf
Whether to handle the bpf() system call
Key: security.syscalls.intercept.bpf
Type: bool
Default: false
Live update: no
Condition: container
security.syscalls.intercept.bpf.devices
Whether to allow BPF programs
Key: security.syscalls.intercept.bpf.devices
Type: bool
Default: false
Live update: no
Condition: container
This option controls whether to allow BPF programs for the devices cgroup in the unified hierarchy to be loaded.
security.syscalls.intercept.mknod
Whether to handle the mknod and mknodat system calls
Key: security.syscalls.intercept.mknod
Type: bool
Default: false
Live update: no
Condition: container
These system calls allow creation of a limited subset of char/block devices.
security.syscalls.intercept.mount
Whether to handle the mount system call
Key: security.syscalls.intercept.mount
Type: bool
Default: false
Live update: no
Condition: container
security.syscalls.intercept.mount.allowed
File systems that can be mounted
Key: security.syscalls.intercept.mount.allowed
Type: string
Live update: yes
Condition: container
Specify a comma-separated list of file systems that are safe to mount for processes inside the instance.
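For example, to intercept the mount syscall and allow only a couple of file systems to be mounted from inside a container (the list is only illustrative):
lxc config set c1 security.syscalls.intercept.mount=true
lxc config set c1 security.syscalls.intercept.mount.allowed=ext4,btrfs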
security.syscalls.intercept.mount.fuse
File system that should be redirected to FUSE implementation
Key: security.syscalls.intercept.mount.fuse
Type: string
Live update: yes
Condition: container
Specify the mounts of a given file system that should be redirected to their FUSE implementation (for example, ext4=fuse2fs).
security.syscalls.intercept.mount.shift
Whether to use idmapped mounts for syscall interception
Key: security.syscalls.intercept.mount.shift
Type: bool
Default: false
Live update: yes
Condition: container
security.syscalls.intercept.sched_setscheduler
Whether to handle the sched_setscheduler system call
Key: security.syscalls.intercept.sched_setscheduler
Type: bool
Default: false
Live update: no
Condition: container
This system call allows increasing process priority.
security.syscalls.intercept.setxattr
Whether to handle the setxattr system call
Key: security.syscalls.intercept.setxattr
Type: bool
Default: false
Live update: no
Condition: container
This system call allows setting a limited subset of restricted extended attributes.
security.syscalls.intercept.sysinfo
Whether to handle the sysinfo system call
Key: security.syscalls.intercept.sysinfo
Type: bool
Default: false
Live update: no
Condition: container
This system call can be used to get cgroup-based resource usage information.
Snapshot scheduling and configuration
The following instance options control the creation and expiry of instance snapshots:
snapshots.expiry
When snapshots are to be deleted
Key: snapshots.expiry
Type: string
Live update: no
Specify an expression like 1M 2H 3d 4w 5m 6y (minutes, hours, days, weeks, months, years).
snapshots.pattern
Template for the snapshot name
Key: snapshots.pattern
Type: string
Default: snap%d
Live update: no
Specify a Pongo2 template string that represents the snapshot name.
This template is used for scheduled snapshots and for unnamed snapshots.
See Automatic snapshot names for more information.
snapshots.schedule
Schedule for automatic instance snapshots
Key: snapshots.schedule
Type: string
Default: empty
Live update: no
Specify either a cron expression (<minute> <hour> <dom> <month> <dow>), a comma-separated list of schedule aliases (@hourly, @daily, @midnight, @weekly, @monthly, @annually, @yearly), or leave empty to disable automatic snapshots.
snapshots.schedule.stopped
Whether to automatically snapshot stopped instances
Key: snapshots.schedule.stopped
Type: bool
Default: false
Live update: no
Automatic snapshot names
The snapshots.pattern option takes a Pongo2 template string to format the snapshot name.
To add a time stamp to the snapshot name, use the Pongo2 context variable creation_date.
Make sure to format the date in your template string to avoid forbidden characters in the snapshot name.
For example, set snapshots.pattern to {{ creation_date|date:'2006-01-02_15-04-05' }} to name the snapshots after their time of creation, down to the precision of a second.
Another way to avoid name collisions is to use the placeholder %d in the pattern.
For the first snapshot, the placeholder is replaced with 0.
For subsequent snapshots, the existing snapshot names are taken into account to find the highest number at the placeholder’s position.
This number is then incremented by one for the new name.
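For example, the following takes a daily snapshot (also of stopped instances), keeps each snapshot for two weeks, and names snapshots after their creation time (the values are only one possible combination):
lxc config set c1 snapshots.schedule=@daily snapshots.schedule.stopped=true snapshots.expiry=2w
lxc config set c1 snapshots.pattern="snap-{{ creation_date|date:'2006-01-02_15-04-05' }}"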
Volatile internal data
Warning
The volatile.* keys cannot be manipulated by the user. Do not attempt to modify these keys in any way. LXD modifies these keys, and attempting to manipulate them yourself might break LXD in non-obvious ways.
The following volatile keys are currently used internally by LXD to store internal data specific to an instance:
volatile.<name>.apply_quota
Disk quota
Key: volatile.<name>.apply_quota
Type: string
The disk quota is applied the next time the instance starts.
volatile.<name>.ceph_rbd
RBD device path for Ceph disk devices
Key: volatile.<name>.ceph_rbd
Type: string
volatile.<name>.host_name
Network device name on the host
Key: volatile.<name>.host_name
Type: string
volatile.<name>.hwaddr
Network device MAC address
Key: volatile.<name>.hwaddr
Type: string
The network device MAC address is used when no hwaddr property is set on the device itself.
volatile.<name>.last_state.created
Whether the network device physical device was created
Key: volatile.<name>.last_state.created
Type: string
Possible values are true or false.
volatile.<name>.last_state.hwaddr
Network device original MAC
Key: volatile.<name>.last_state.hwaddr
Type: string
The original MAC that was used when moving a physical device into an instance.
volatile.<name>.last_state.mtu
Network device original MTU
Key: volatile.<name>.last_state.mtu
Type: string
The original MTU that was used when moving a physical device into an instance.
volatile.<name>.last_state.vdpa.name
VDPA device name
Key: volatile.<name>.last_state.vdpa.name
Type: string
The VDPA device name used when moving a VDPA device file descriptor into an instance.
volatile.<name>.last_state.vf.hwaddr
SR-IOV virtual function original MAC
Key: volatile.<name>.last_state.vf.hwaddr
Type: string
The original MAC used when moving a VF into an instance.
volatile.<name>.last_state.vf.id
SR-IOV virtual function ID
Key: volatile.<name>.last_state.vf.id
Type: string
The ID used when moving a VF into an instance.
volatile.<name>.last_state.vf.spoofcheck
SR-IOV virtual function original spoof check setting
Key: volatile.<name>.last_state.vf.spoofcheck
Type: string
The original spoof check setting used when moving a VF into an instance.
volatile.<name>.last_state.vf.vlan
SR-IOV virtual function original VLAN
Key: volatile.<name>.last_state.vf.vlan
Type: string
The original VLAN used when moving a VF into an instance.
volatile.apply_nvram
Whether to regenerate VM NVRAM the next time the instance starts
Key: volatile.apply_nvram
Type: bool
volatile.apply_template
Template hook
Key: volatile.apply_template
Type: string
The template with the given name is triggered upon next startup.
volatile.base_image
Hash of the base image
Key: volatile.base_image
Type: string
The hash of the image that the instance was created from (empty if the instance was not created from an image).
volatile.cloud-init.instance-id
instance-id (UUID) exposed to cloud-init
Key: volatile.cloud-init.instance-id
Type: string
volatile.evacuate.origin
The origin of the evacuated instance
Key: volatile.evacuate.origin
Type: string
The cluster member that the instance lived on before evacuation.
volatile.idmap.base
The first ID in the container’s primary idmap range
Key: volatile.idmap.base
Type: integer
Condition: container
volatile.idmap.current
The idmap currently in use by the container
Key: volatile.idmap.current
Type: string
Condition: container
volatile.idmap.next
The idmap to use the next time the container starts
Key: volatile.idmap.next
Type: string
Condition: container
volatile.last_state.idmap
On-disk UID/GID map for the container’s rootfs
Key: volatile.last_state.idmap
Type: string
Condition: container
The UID/GID map that has been applied to the container’s underlying storage.
This is usually set for containers created on older kernels that don’t support idmapped mounts.
volatile.last_state.power
Instance state as of last host shutdown
Key: volatile.last_state.power
Type: string
volatile.uuid
Instance UUID
Key: volatile.uuid
Type: string
The instance UUID is globally unique across all servers and projects.
volatile.uuid.generation
Instance generation UUID
Key: volatile.uuid.generation
Type: string
The instance generation UUID changes whenever the instance’s place in time moves backwards.
It is globally unique across all servers and projects.
volatile.vsock_id
Instance vsock ID used as of last start
Key: volatile.vsock_id
Type: string