
CephFS - cephfs


Ceph is an open-source storage platform that stores its data in a storage cluster based on RADOS. It is highly scalable and, as a distributed system without a single point of failure, very reliable.

Tip

If you want to quickly set up a basic Ceph cluster, check out MicroCeph.

Ceph provides different components for block storage and for file systems.

CephFS is Ceph’s file system component that provides a robust, fully-featured POSIX-compliant distributed file system. Internally, it maps files to Ceph objects and stores file metadata (for example, file ownership, directory paths, access permissions) in a separate data pool.

Terminology

Ceph uses the term object for the data that it stores. The daemon that is responsible for storing and managing data is the Ceph OSD. Ceph’s storage is divided into pools, which are logical partitions for storing objects. They are also referred to as data pools, storage pools or OSD pools.

A CephFS file system consists of two OSD storage pools, one for the actual data and one for the file metadata.
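
As a rough sketch, such a file system can be created with the standard Ceph tooling before handing it to LXD. The pool and file system names below are placeholders:

    # Create the two OSD pools (on older Ceph releases, append a pg_num value):
    ceph osd pool create lxd_cephfs_data
    ceph osd pool create lxd_cephfs_meta

    # ceph fs new <fs name> <metadata pool> <data pool>
    ceph fs new lxd_cephfs lxd_cephfs_meta lxd_cephfs_data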

cephfs driver in LXD

Note

The cephfs driver can only be used for custom storage volumes with content type filesystem.

For other storage volumes, use the Ceph driver. That driver can also be used for custom storage volumes with content type filesystem, but it implements them through Ceph RBD images.

Unlike other storage drivers, this driver does not set up the storage system but assumes that you already have a Ceph cluster installed.

You can either create the CephFS file system that you want to use beforehand and specify it through the source option, or specify the cephfs.create_missing option to automatically create the file system and the data and metadata OSD pools (with the names given in cephfs.data_pool and cephfs.meta_pool).
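
For example (the pool, file system and OSD pool names are placeholders; pick values that match your cluster):

    # Use a CephFS file system that already exists:
    lxc storage create pool1 cephfs source=my-filesystem

    # Or let LXD create the file system and its OSD pools:
    lxc storage create pool2 cephfs cephfs.create_missing=true cephfs.data_pool=my-data cephfs.meta_pool=my-metadata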

This driver also behaves differently from other drivers in that it provides remote storage. As a result, and depending on the internal network, storage access might be slightly slower than with local storage. On the other hand, remote storage has major advantages in a cluster setup: all cluster members have access to the same storage pools with the exact same contents, without any need to synchronize them.

LXD assumes that it has full control over the OSD storage pool. Therefore, never maintain any file system entities that are not owned by LXD in an LXD OSD storage pool, because LXD might delete them.

The cephfs driver in LXD supports snapshots if snapshots are enabled on the server side.
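
As an illustrative sketch (pool and volume names are placeholders, and subcommand syntax can differ slightly between LXD releases), a custom volume on a cephfs pool is created and snapshotted like any other custom volume:

    # Create a custom filesystem volume on the cephfs pool:
    lxc storage volume create pool1 my-volume

    # Take a manual snapshot of it:
    lxc storage volume snapshot pool1 my-volume snap0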

Configuration options

The following configuration options are available for storage pools that use the cephfs driver and for storage volumes in these pools.

Storage Pool Configuration

cephfs.cluster_name

  • Description: Name of the Ceph cluster that contains the CephFS file system
  • Type: string
  • Default: ceph
  • Scope: global

cephfs.create_missing

  • Description: Automatically create the CephFS file system if it does not exist
  • Type: bool
  • Default: false
  • Scope: global

If enabled, LXD creates the file system and any missing data and metadata OSD pools (named as specified in cephfs.data_pool and cephfs.meta_pool).


cephfs.data_pool

  • Description: Data OSD pool name used when creating a file system automatically
  • Type: string
  • Scope: global

cephfs.fscache

  • Description: Enable kernel fscache and cachefilesd usage
  • Type: bool
  • Default: false
  • Scope: global

cephfs.meta_pool

  • Description: Metadata OSD pool name used when creating a file system automatically
  • Type: string
  • Scope: global

cephfs.osd_pg_num

  • Description: Number of placement groups (pg_num) when creating missing OSD pools
  • Type: string
  • Scope: global

cephfs.osd_pool_size

  • Description: Number of RADOS object replicas (set to 1 for no replication)
  • Type: string
  • Default: 3
  • Scope: global

cephfs.path

  • Description: Base path for the CephFS mount
  • Type: string
  • Default: /
  • Scope: global

cephfs.user.name

  • Description: Ceph user to use
  • Type: string
  • Default: admin
  • Scope: global

source

  • Description: Existing CephFS file system or file system path to use
  • Type: string
  • Scope: local

volatile.pool.pristine

  • Description: Indicates whether the CephFS file system was empty at creation
  • Type: string
  • Default: true
  • Scope: global

💡 You can also configure default values for storage volumes; see Configure default values for storage volumes.
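
For instance, assuming a cephfs pool named pool1, pool-level options and volume defaults (keys prefixed with volume.) can be set like this:

    # Change the Ceph user that LXD uses for this pool:
    lxc storage set pool1 cephfs.user.name admin

    # Set a default snapshot schedule for new volumes in this pool:
    lxc storage set pool1 volume.snapshots.schedule "@daily"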


Storage Volume Configuration

security.shifted

  • Description: Enable ID-shifting overlay (allows attaching to multiple isolated instances)
  • Type: bool
  • Default: Same as volume.security.shifted or false
  • Condition: custom volume
  • Scope: global

security.unmapped

  • Description: Disable ID mapping for the volume
  • Type: bool
  • Default: Same as volume.security.unmapped or false
  • Condition: custom volume
  • Scope: global

size

  • Description: Size/quota of the storage volume
  • Type: string
  • Default: Same as volume.size
  • Condition: appropriate driver
  • Scope: global

snapshots.expiry

  • Description: Expiry time for snapshots
  • Type: string
  • Default: Same as volume.snapshots.expiry
  • Condition: custom volume
  • Scope: global

Specify an expression like 1M 2H 3d 4w 5m 6y (minutes, hours, days, weeks, months, years)


snapshots.pattern

  • Description: Template for snapshot names
  • Type: string
  • Default: Same as volume.snapshots.pattern or snap%d
  • Condition: custom volume
  • Scope: global

Use a Pongo2 template string.
Example: {{ creation_date|date:'2006-01-02_15-04-05' }}
%d can be used to auto-increment snapshot names.


snapshots.schedule

  • Description: Schedule for automatic snapshots
  • Type: string
  • Default: Same as volume.snapshots.schedule
  • Condition: custom volume
  • Scope: global

Specify either a cron expression (<minute> <hour> <dom> <month> <dow>) or a schedule alias such as @hourly, @daily, @weekly, @monthly or @yearly. Leave empty to disable automatic snapshots.


volatile.idmap.last

  • Description: JSON-serialized UID/GID map last applied to the volume
  • Type: string
  • Condition: filesystem

volatile.idmap.next

  • Description: JSON-serialized UID/GID map to be applied next
  • Type: string
  • Condition: filesystem

volatile.uuid

  • Description: UUID of the volume
  • Type: string
  • Default: Random UUID
  • Scope: global
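
As a usage sketch (pool and volume names are placeholders), these options are set per volume with lxc storage volume set:

    # Expire automatic snapshots after two weeks:
    lxc storage volume set pool1 my-volume snapshots.expiry 2w

    # Take a snapshot every day and name the snapshots snap0, snap1, ...:
    lxc storage volume set pool1 my-volume snapshots.schedule "@daily"
    lxc storage volume set pool1 my-volume snapshots.pattern "snap%d"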