Let us take a closer look at how to set up the Ceph filesystem on DigitalOcean with the support of our DigitalOcean managed services at Bobcares.
Ceph
Ceph is a distributed storage system that offers object, block, and file storage, and it is highly scalable. Because the CRUSH algorithm determines where data is placed, Ceph clusters can run on commodity hardware.
Creating a Ceph Cluster
Let’s first go through the most crucial Ceph components and how they work:
- Ceph Monitors: sometimes referred to as MONs, these are in charge of maintaining the cluster maps that the Ceph daemons need to communicate with one another. For the storage service to be dependable and available, more than one MON should always be running.
- Ceph Managers, or MGRs: these are runtime daemons responsible for keeping track of runtime metrics and the current state of the Ceph cluster. They work in tandem with the monitoring daemons (MONs) to offer additional monitoring and an interface to external management and monitoring systems.
- Ceph Object Store Devices: commonly referred to as OSDs, these are in charge of storing objects on a local filesystem and providing access to them over the network. An OSD is typically tied to a single physical disk in the cluster, and Ceph clients communicate with OSDs directly. Knowing the function of each of these components is important when setting up a Ceph filesystem on DigitalOcean.
Before interacting with the data in Ceph storage, clients first contact the Ceph Monitors (MONs) to obtain the most recent version of the cluster map. The cluster map contains both the cluster topology and the data storage locations. Ceph clients then use the cluster map to decide which OSD to interact with.
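For illustration, once the cluster built in this tutorial is running, these maps can be inspected directly. The command below is a sketch that assumes the optional Rook toolbox pod (labeled app=rook-ceph-tools in Rook's standard examples) has been deployed:

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph mon dump

This prints the monitor map; ceph osd dump does the same for the OSD map.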
The DigitalOcean Kubernetes cluster can run Ceph storage with the help of Rook. The Ceph components all operate within the Rook cluster and communicate directly with the Rook agents. By concealing Ceph internals such as placement groups and storage maps while still allowing sophisticated setups, Rook streamlines the management of the Ceph cluster.
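Note that this guide assumes the Rook operator is already running in the cluster. If it is not, a minimal sketch of installing it would look like the following, assuming Rook release 1.3 (which matches the ceph/ceph:v14.2.8 image used below; adjust the branch to your version):

kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.3/cluster/examples/kubernetes/ceph/common.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.3/cluster/examples/kubernetes/ceph/operator.yaml

Among other things, common.yaml creates the rook-ceph namespace used throughout this guide.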
Set up a Ceph Filesystem using Kubernetes on DigitalOcean
Kubernetes object for the Ceph cluster
To set up a Ceph filesystem on DigitalOcean, start by creating a Kubernetes object for the Ceph cluster.
First, create a YAML file:
nano cephcluster.yaml
This configuration governs the deployment of the Ceph cluster. In this example, we deploy three Ceph Monitors (MONs) and enable the Ceph dashboard. Although the Ceph dashboard is beyond the scope of this tutorial, you can use it later in your own project to visualize the state of the Ceph cluster. This is the first step in setting up a Ceph filesystem on DigitalOcean with Kubernetes.
To specify the apiVersion, the Kubernetes object kind, the name, and the namespace the object should be deployed in, add the following:
cephcluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
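Note that the rook-ceph namespace must already exist before applying this manifest; it is normally created while installing the Rook operator, as mentioned above. A quick check:

kubectl get namespace rook-ceph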
Next, add the spec key, which specifies the settings Kubernetes will use while building the Ceph cluster on DigitalOcean. To start, decide which image version to use and whether to accept unsupported Ceph versions:
cephcluster.yaml
spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false
Then, using the dataDirHostPath key, specify the location of the configuration files:
cephcluster.yaml
  dataDirHostPath: /var/lib/rook
Next, the following options determine whether to skip upgrade checks and whether to continue an upgrade even if the cluster is not healthy. Set them based on your requirements for the Ceph filesystem on DigitalOcean:
cephcluster.yaml
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
Using the ‘mon’ key, configure the number of Ceph Monitors (MONs) and whether multiple MONs may run on a single node:
cephcluster.yaml
  mon:
    count: 3
    allowMultiplePerNode: false
The dashboard key defines the options for the Ceph dashboard. Here you can enable the dashboard, change its port, and set a URL prefix for serving it behind a reverse proxy:
cephcluster.yaml
  dashboard:
    enabled: true
    # urlPrefix: /ceph-dashboard
    # port: 8443
    # serve the dashboard using SSL
    ssl: false
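Later, once the cluster is running, a simple way to reach the dashboard without a reverse proxy is kubectl port-forward. A sketch, assuming Rook's default dashboard service name rook-ceph-mgr-dashboard and its default non-SSL port 7000 (since ssl is false above):

kubectl -n rook-ceph port-forward service/rook-ceph-mgr-dashboard 7000:7000

The dashboard is then reachable at http://localhost:7000 in a local browser.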
Enable cluster monitoring
With the monitoring key, we can additionally enable monitoring of the cluster (monitoring needs Prometheus to be pre-installed):
cephcluster.yaml
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
RBDs (RADOS Block Devices) are thin-provisioned, resizable Ceph block devices that store data across several nodes; RADOS stands for Reliable Autonomic Distributed Object Store.
By enabling rbdMirroring, RBD images can be asynchronously shared between two Ceph clusters. Since we will only be using one cluster here, this is not required, so the number of workers is set to 0:
cephcluster.yaml
  rbdMirroring:
    workers: 0
For the Ceph daemons, we can enable the crash collector as follows. Follow these steps as given to complete the Ceph filesystem configuration on DigitalOcean:
cephcluster.yaml
  crashCollector:
    disable: false
The cleanup policy only matters when terminating or deleting the cluster, which is why this option has to be left empty:
cephcluster.yaml
  cleanupPolicy:
    deleteDataDirOnHosts: ""
  removeOSDsIfOutAndSafeToRemove: false
The storage key provides the cluster-level storage parameters, such as the database size, how many OSDs to create per device, and which nodes and devices to use:
cephcluster.yaml
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024" # uncomment if the disks are 20 GB or smaller
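If you prefer not to consume every node and device, the storage key also accepts an explicit list of nodes and devices instead of useAllNodes and useAllDevices. A minimal sketch, where node-name-1 and sdb are hypothetical placeholders for one of your worker nodes and a raw disk on it:

  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: "node-name-1" # hypothetical Kubernetes node name
      devices:
      - name: "sdb" # hypothetical raw device on that node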
Control daemon disruptions
To control daemon disruptions during an upgrade or fencing, we can use the disruptionManagement key. This keeps the Ceph filesystem deployment on DigitalOcean running smoothly:
cephcluster.yaml
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
The final file that results from these configuration blocks is as follows:
cephcluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  rbdMirroring:
    workers: 0
  crashCollector:
    disable: false
  cleanupPolicy:
    deleteDataDirOnHosts: ""
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024" # uncomment if the disks are 20 GB or smaller
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
After this, save and exit the file. We can also alter the deployment, for example by setting a different port for the dashboard or by modifying the database size.
Then, apply this manifest to the DigitalOcean Kubernetes cluster to set up Ceph:
kubectl apply -f cephcluster.yaml
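While the pods start, you can also watch the status of the CephCluster object itself; the exact status columns shown vary by Rook version:

kubectl -n rook-ceph get cephcluster rook-ceph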
Now verify that the pods are operational:
kubectl get pod -n rook-ceph
This normally takes a few minutes, so just refresh until the output looks something like this:
Output
NAME                                                   READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-lz6dn                                 3/3     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-4j9jw          5/5     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-h2cgl          5/5     Running   0          3m54s
csi-cephfsplugin-qbpnq                                 3/3     Running   0          3m54s
csi-cephfsplugin-qzsvr                                 3/3     Running   0          3m54s
csi-rbdplugin-kk9sw                                    3/3     Running   0          3m55s
csi-rbdplugin-l95f8                                    3/3     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-8gjwv             6/6     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-dhpwt             6/6     Running   0          3m55s
csi-rbdplugin-v4hk6                                    3/3     Running   0          3m55s
rook-ceph-crashcollector-pool-33zy7-68cdfb6bcf-9cfkn   1/1     Running   0          109s
rook-ceph-crashcollector-pool-33zyc-565559f7-7r6rt     1/1     Running   0          53s
rook-ceph-crashcollector-pool-33zym-749dcdc9df-w4xzl   1/1     Running   0          78s
rook-ceph-mgr-a-7fdf77cf8d-ppkwl                       1/1     Running   0          53s
rook-ceph-mon-a-97d9767c6-5ftfm                        1/1     Running   0          109s
rook-ceph-mon-b-9cb7bdb54-lhfkj                        1/1     Running   0          96s
rook-ceph-mon-c-786b9f7f4b-jdls4                       1/1     Running   0          78s
rook-ceph-operator-599765ff49-fhbz9                    1/1     Running   0          6m58s
rook-ceph-osd-prepare-pool-33zy7-c2hww                 1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zyc-szwsc                 1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zym-2p68b                 1/1     Running   0          21s
rook-discover-6fhlb                                    1/1     Running   0          6m21s
rook-discover-97kmz                                    1/1     Running   0          6m21s
rook-discover-z5k2z                                    1/1     Running   0          6m21s
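Optionally, to check the health of the Ceph cluster itself rather than just its pods, run ceph status from the Rook toolbox pod mentioned earlier, assuming the toolbox is deployed:

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph status

A healthy cluster reports HEALTH_OK along with the MON, MGR, and OSD counts.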
We have now successfully installed the Ceph cluster and can proceed to create the first storage block. This is the final step in setting up a Ceph filesystem on DigitalOcean using Kubernetes.
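As a preview of that next step, block storage is requested from Rook through a CephBlockPool object. A minimal sketch, where the pool name replicapool is an arbitrary example:

cephblockpool.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3

This asks Ceph to keep three replicas of each object, spread across different hosts.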
[Need assistance with similar queries? We are here to help]
Conclusion
To sum up, we have now seen how to set up the Ceph filesystem on DigitalOcean with the support of our tech support team.