Use curl to fetch the most recent version of the standalone script:

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
Make the cephadm script executable:
chmod +x cephadm
This script can be run directly from the current directory:
./cephadm <arguments...>
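For example, to confirm that the script is runnable on this host, you can print its help text (a minimal check that needs no cluster or container image):

./cephadm --help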
Although the standalone script is sufficient to get a cluster started, it is convenient to have the cephadm command installed on the host. To install the packages that provide the cephadm command, run the following commands:

./cephadm add-repo --release quincy
./cephadm install
Confirm that cephadm is now in your PATH by running which:
which cephadm
A successful which cephadm command will return this:
/usr/sbin/cephadm
Some Linux distributions may already include up-to-date Ceph packages. In that case, you can install cephadm directly. For example:
In Ubuntu:
apt install -y cephadm
In CentOS Stream:
dnf search release-ceph
dnf install --assumeyes centos-release-ceph-quincy
dnf install --assumeyes cephadm
In Fedora:
dnf -y install cephadm
In SUSE:
zypper install -y cephadm
Run the ceph bootstrap command:
cephadm bootstrap --mon-ip *<mon-ip>*

For example:

cephadm bootstrap --mon-ip 10.1.10.10 --cluster-network 10.1.10.0/24
This command will:

- Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.
- Write a copy of the public key to /etc/ceph/ceph.pub.
- Write a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the new cluster.
- Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
- Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.

Any initial Ceph configuration options can be passed to the new cluster by putting them in a standard ini-style configuration file and using the --config *<config-file>* option. For example:

$ cat <<EOF > initial-ceph.conf
[global]
osd crush chooseleaf type = 0
EOF
$ ./cephadm bootstrap --config initial-ceph.conf ...
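If you want to sanity-check the bootstrap output, the files listed above should exist on the bootstrap host afterwards (default paths; adjust if you bootstrapped with non-default options):

ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.pub /etc/ceph/ceph.client.admin.keyring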
Cephadm does not require any Ceph packages to be installed on the host. However, we recommend enabling easy access to the ceph command. There are several ways to do this:
The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in /etc/ceph on the host, they are passed into the container environment so that the shell is fully functional. Note that when executed on a MON host, cephadm shell will infer the config from the MON container instead of using the default configuration. If --mount <path> is given, then the host <path> (file or directory) will appear under /mnt inside the container:

cephadm shell
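For example, a sketch of exposing a host directory inside the shell (the path here is purely illustrative):

cephadm shell --mount /root/configs

The mounted directory then appears under /mnt inside the container, as described above.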
To execute ceph commands, you can also run commands like this:

cephadm shell -- ceph -s
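Any other ceph subcommand can be run the same way; for example, to list the daemons managed by the orchestrator (assuming the cluster has already been bootstrapped):

cephadm shell -- ceph orch ps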
You can install the ceph-common package, which contains all of the ceph commands, including ceph, rbd, mount.ceph (for mounting CephFS file systems), etc.:

cephadm add-repo --release quincy
cephadm install ceph-common
Confirm that the ceph command is accessible with:
ceph -v
Confirm that the ceph command can connect to the cluster and report its status with:
ceph status
By default, a ceph.conf file and a copy of the client.admin keyring are maintained in /etc/ceph on all hosts with the _admin label, which is initially applied only to the bootstrap host. We usually recommend that one or more other hosts be given the _admin label so that the Ceph CLI (e.g., via cephadm shell) is easily accessible on multiple hosts. To add the _admin label to additional host(s):

ceph orch host label add *<host>* _admin
ceph orch host ls
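For example, with a second host named host-02 (an illustrative name; the host must already have been added to the cluster):

ceph orch host label add host-02 _admin
ceph orch host ls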
There are a few ways to create new OSDs:
Tell Ceph to consume any available and unused storage device:

ceph orch apply osd --all-available-devices

Create an OSD from a specific device on a specific host:

ceph orch daemon add osd *<host>*:*<device-path>*
For example:
ceph orch daemon add osd host-01:/dev/sdb,/dev/sdc,/dev/sdd
ceph orch daemon add osd host-02:/dev/sdb,/dev/sdc,/dev/sdd
ceph orch daemon add osd host-03:/dev/sdb,/dev/sdc,/dev/sdd
Advanced OSD creation from specific devices on a specific host:
ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/sdc,osds_per_device=2
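To see which storage devices cephadm considers available before (or after) creating OSDs, list them with the orchestrator:

ceph orch device ls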
Verify the OSD tree:
ceph osd tree