====== Ceph deploy from docs.ceph.com ======
[[https://docs.ceph.com/en/quincy/cephadm/install/#cephadm-deploying-new-cluster]]
==== CURL-BASED INSTALLATION ====
* Use ''curl'' to fetch the most recent version of the standalone script.
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
Make the ''cephadm'' script executable:
chmod +x cephadm
This script can be run directly from the current directory:
./cephadm
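For example, the ''version'' subcommand is a quick sanity check that the standalone script works (on first use it may pull the Ceph container image):
./cephadm version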
* Although the standalone script is sufficient to get a cluster started, it is convenient to have the ''cephadm'' command installed on the host. To install the packages that provide the ''cephadm'' command, run the following commands:
./cephadm add-repo --release quincy
./cephadm install
Confirm that ''cephadm'' is now in your PATH by running ''which'':
which cephadm
A successful ''which cephadm'' command will return this:
/usr/sbin/cephadm
==== DISTRIBUTION-SPECIFIC INSTALLATIONS ====
Some Linux distributions may already include up-to-date Ceph packages. In that case, you can install cephadm directly. For example:
In Ubuntu:
apt install -y cephadm
In CentOS Stream:
dnf search release-ceph
dnf install --assumeyes centos-release-ceph-quincy
dnf install --assumeyes cephadm
In Fedora:
dnf -y install cephadm
In SUSE:
zypper install -y cephadm
===== BOOTSTRAP A NEW CLUSTER =====
Run the ''cephadm bootstrap'' command:
cephadm bootstrap --mon-ip <mon-ip>
For example:
cephadm bootstrap --mon-ip 10.1.10.10 --cluster-network 10.1.10.0/24
This command will:
* Create a monitor and manager daemon for the new cluster on the local host.
* Generate a new SSH key for the Ceph cluster and add it to the root user’s ''/root/.ssh/authorized_keys'' file.
* Write a copy of the public key to ''/etc/ceph/ceph.pub''.
* Write a minimal configuration file to ''/etc/ceph/ceph.conf''. This file is needed to communicate with the new cluster.
* Write a copy of the ''client.admin'' administrative (privileged!) secret key to ''/etc/ceph/ceph.client.admin.keyring''.
* Add the ''_admin'' label to the bootstrap host. By default, any host with this label will (also) get a copy of ''/etc/ceph/ceph.conf'' and ''/etc/ceph/ceph.client.admin.keyring''.
* You can pass any initial Ceph configuration options to the new cluster by putting them in a standard ini-style configuration file and using the ''--config <config-file>'' option. For example:
$ cat <<EOF > initial-ceph.conf
[global]
osd crush chooseleaf type = 0
EOF
$ ./cephadm bootstrap --config initial-ceph.conf ...
===== ENABLE CEPH CLI =====
Cephadm does not require any Ceph packages to be installed on the host. However, we recommend enabling easy access to the ''ceph'' command. There are several ways to do this:
* The ''cephadm shell'' command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in ''/etc/ceph'' on the host, they are passed into the container environment so that the shell is fully functional. Note that when executed on a MON host, ''cephadm shell'' will infer the ''config'' from the MON container instead of using the default configuration. If ''--mount <path>'' is given, then the host ''<path>'' (file or directory) will appear under ''/mnt'' inside the container:
cephadm shell
* To execute a single ''ceph'' command without entering an interactive shell, you can also run:
cephadm shell -- ceph -s
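If you run ''ceph'' commands on this host often, an optional shell alias (a convenience sketch, not something cephadm requires) avoids typing the wrapper each time:
alias ceph='cephadm shell -- ceph'
With the alias in place, ''ceph -s'' on the host transparently runs inside the container.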
* You can install the ''ceph-common'' package, which contains all of the ceph commands, including ''ceph'', ''rbd'', ''mount.ceph'' (for mounting CephFS file systems), etc.:
cephadm add-repo --release quincy
cephadm install ceph-common
Confirm that the ''ceph'' command is accessible with:
ceph -v
Confirm that the ''ceph'' command can connect to the cluster and report its status with:
ceph status
===== ADDING HOSTS =====
By default, a ''ceph.conf'' file and a copy of the ''client.admin'' keyring are maintained in ''/etc/ceph'' on all hosts with the ''_admin'' label, which is initially applied only to the bootstrap host. We usually recommend that one or more other hosts be given the ''_admin'' label so that the Ceph CLI (e.g., via ''cephadm shell'') is easily accessible on multiple hosts. To add the ''_admin'' label to additional host(s):
ceph orch host label add <host> _admin
List the hosts and their labels with:
ceph orch host ls
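Note that a host can only be labeled after it has joined the cluster. A typical sequence for a new node (the hostname ''host-02'' and IP ''10.1.10.11'' are example values for this network) is to install the cluster's public SSH key on it and then register it with the orchestrator:
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host-02
ceph orch host add host-02 10.1.10.11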
===== DEPLOY OSDS =====
==== CREATING NEW OSDS ====
There are a few ways to create new OSDs:
* Tell Ceph to consume any available and unused storage device:
ceph orch apply osd --all-available-devices
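Before applying this, it can be useful to check which devices the orchestrator considers available (a device qualifies only if it is unused: no partitions, no LVM state, no mounted file system):
ceph orch device ls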
* Create an OSD from a specific device on a specific host:
ceph orch daemon add osd <host>:<device-path>
For example:
ceph orch daemon add osd host-01:/dev/sdb,/dev/sdc,/dev/sdd
ceph orch daemon add osd host-02:/dev/sdb,/dev/sdc,/dev/sdd
ceph orch daemon add osd host-03:/dev/sdb,/dev/sdc,/dev/sdd
Advanced OSD creation from specific devices on a specific host:
ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/sdc,osds_per_device=2
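For larger or more uniform deployments, the same intent can be written as a declarative OSD service specification and applied with ''ceph orch apply -i''. The sketch below is illustrative only (the file name, service id, ''rotational'' filters, and host pattern are assumptions to adapt to your hardware); see the Advanced OSD Service Specifications documentation for the full schema:
cat <<EOF > osd-spec.yaml
service_type: osd
service_id: throughput_optimized
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd-spec.yaml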
Verify the resulting OSD tree:
ceph osd tree
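To confirm that the corresponding OSD daemons are actually running on each host, the orchestrator's daemon list can be checked as well:
ceph orch ps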