How to Integrate Ceph Storage with OpenStack

Introduction

Ceph is a distributed object store and file system designed to provide excellent performance, reliability, and scalability. The Ceph storage services are usually hosted on external, dedicated storage nodes. Such storage clusters can grow to several hundred nodes, providing petabytes of storage capacity.

Overview

This document covers how to set up Ceph with OpenStack Mitaka on CentOS 7.

Procedure


Let’s assume we are using three nodes as Ceph servers; one of them will be the Ceph deployer (admin) node.
1. On the Ceph deployer node, execute the following command chain:

# yum install -y yum-utils
# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
# yum install --nogpgcheck -y epel-release
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
# rm /etc/yum.repos.d/dl.fedoraproject.org*

2. Add the Ceph package repository. Open a text editor and create a Yellowdog Updater, Modified (YUM) entry at the file path /etc/yum.repos.d/ceph.repo.

# nano /etc/yum.repos.d/ceph.repo

3. Paste the following example code. Replace {ceph-release} with the recent major release of Ceph (e.g., jewel) and {distro} with your Linux distribution (e.g., el7 for CentOS 7). Finally, save the contents to the /etc/yum.repos.d/ceph.repo file.

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

4. Update your repository and install ceph-deploy:

# yum update; yum install ceph-deploy

CEPH NODE SETUP

INSTALL NTP

1. Set up NTP time synchronization using Chrony:

# yum -y install chrony

2. Edit the /etc/chrony.conf file and add, change, or remove the following keys as necessary for your environment:

# nano /etc/chrony.conf
server controller1 iburst
#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
 
 
allow 10.1.30.0/24

3. Start the NTP service and configure it to start when the system boots:

# systemctl enable chronyd.service
# systemctl start chronyd.service
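
To confirm that time synchronization is actually working, you can query Chrony from any node (chronyc ships with the chrony package):

# chronyc sources
# chronyc tracking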

INSTALL SSH SERVER

For ALL Ceph Nodes perform the following steps:
1. Install an SSH server (if necessary) on each Ceph Node:

# yum install openssh-server

2. Ensure the SSH server is running on ALL Ceph Nodes.
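
On CentOS 7, a simple way to make sure sshd is enabled and running is:

# systemctl enable sshd.service
# systemctl status sshd.service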

The section below can be skipped if you want to use the root user itself.

CREATE A CEPH DEPLOY USER

1. Create a new user on each Ceph Node

# useradd -d /home/cephuser -m cephuser
# passwd cephuser

2. For the new user you added to each Ceph node, ensure that the user has sudo privileges.

# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
# chmod 0440 /etc/sudoers.d/cephuser

ENABLE PASSWORD-LESS SSH

Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.
1. Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:

[root@ceph-admin ~]# su - cephuser
[cephuser@ceph-admin ~]$ ssh-keygen
 
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

2. Copy the key to each Ceph node, using the user name you created in Create a Ceph Deploy User:

[root@ceph-admin ~]# ssh-copy-id cephuser@ceph1
[root@ceph-admin ~]# ssh-copy-id cephuser@ceph2
[root@ceph-admin ~]# ssh-copy-id cephuser@ceph3

Also copy the contents of /root/.ssh/id_rsa.pub from ceph-admin to /root/.ssh/authorized_keys on each Ceph node.

3. Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace cephuser with your user name if it differs:

Host ceph1
   Hostname ceph1
   User cephuser
Host ceph2
   Hostname ceph2
   User cephuser
Host ceph3
   Hostname ceph3
   User cephuser
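
To verify that password-less access works as intended, a quick check from the admin node (run as cephuser) is to execute a command on each node; none of these should prompt for a password:

[cephuser@ceph-admin ~]$ ssh ceph1 hostname
[cephuser@ceph-admin ~]$ ssh ceph2 hostname
[cephuser@ceph-admin ~]$ ssh ceph3 hostname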

TTY

On CentOS and RHEL, you may receive an error while trying to execute ceph-deploy commands. If requiretty is set by default on your Ceph nodes, disable it by executing sudo visudo and locating the Defaults requiretty setting. Change it to Defaults:ceph !requiretty or comment it out to ensure that ceph-deploy can connect using the user you created in Create a Ceph Deploy User.

SELINUX

On CentOS and RHEL, SELinux is set to Enforcing by default. To streamline your installation, we recommend setting SELinux to Permissive or disabling it entirely and ensuring that your installation and cluster are working properly before hardening your configuration. To set SELinux to Permissive, execute the following:

# setenforce 0
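
Note that setenforce 0 only lasts until the next reboot. To keep SELinux in Permissive mode persistently, you can also update /etc/selinux/config, for example:

# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# getenforce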

PRIORITIES/PREFERENCES

Ensure that your package manager has priority/preferences packages installed and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to enable optional repositories.

# yum install yum-plugin-priorities

INSTALL CEPH-DEPLOY

[cephuser@ceph-admin ~]$  sudo yum -y install ceph-deploy

Create a working directory:

[cephuser@ceph-admin ~]$ mkdir ceph-deploy && cd ceph-deploy

Launch the installation procedure:

[cephuser@ceph-admin ceph-deploy]$ sudo ceph-deploy new ceph1 ceph2 ceph3

This creates the following configuration file:

[cephuser@ceph-admin ceph-deploy]$ cat ceph.conf
[global]
fsid = 661696ed-4de3-4f18-8075-7a5d79cd063e
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 10.3.1.101,10.3.1.102,10.3.1.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

We add some more information to it:

# my network
public network = 10.3.1.0/24
cluster network = 192.168.1.0/24
 
# replicas and placement groups
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 256
osd pool default pgp num = 256
 
# crush leaf type
osd crush chooseleaf type = 1
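
As a quick sanity check on the placement group values above: a common rule of thumb is (number of OSDs x 100) / replica count, rounded up to the next power of two. With 3 OSDs and osd pool default size = 2, that gives (3 x 100) / 2 = 150, which rounds up to 256.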

Logged in as root, add more sections to the /etc/yum.repos.d/ceph.repo repository definition file:

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
 
 
[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
 
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

Now, let’s go for the installation:

[cephuser@ceph-admin ceph-deploy]$ sudo ceph-deploy install ceph-admin ceph1 ceph2 ceph3

If you have an issue with the EPEL repository, you can run these commands and redo the installation:

# sudo yum-config-manager --disable epel
# sudo yum-config-manager --save --setopt=epel.skip_if_unavailable=true

Set up the monitors and gather the keys:

[cephuser@ceph-admin ceph-deploy]$ sudo ceph-deploy mon create-initial
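
To confirm that the monitors have formed a quorum, you can query a monitor's admin socket directly on one of the monitor hosts (this example assumes you are on ceph1; adjust the monitor id for other nodes):

[root@ceph1 ~]# ceph daemon mon.ceph1 mon_status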

Creating OSD

1. Prepare the disks on each Ceph server before they can be used as OSDs. The current disk status:

[root@ceph1 ceph-dash]# lsblk -f
NAME            FSTYPE      LABEL           UUID                                   MOUNTPOINT
fd0
sda
|-sda1          xfs                         1df9ba03-1a46-459c-b7fb-6541f0066b8d   /boot
`-sda2          LVM2_member                 Xd2OL6-BC21-I9zZ-aUUU-HHDh-c49b-Wzig5C
  |-centos-root xfs                         57c87736-f469-4694-ba6c-8a7ce74bae84   /
  `-centos-swap swap                        6c587535-48ca-4fa1-ad27-c739510dce44   [SWAP]
sdb
sdc
sr0             iso9660     CentOS 7 x86_64 2015-12-09-23-03-16-00

We will use sdb as our journal disk; the other disks will be used for data.
2. We need to create GPT partition tables, repeating the commands for each data disk:

[root@ceph1 ceph-dash]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart primary xfs 0% 100%
(parted) quit
Information: You may need to update /etc/fstab.
 
 
[root@ceph1 ceph-dash]# parted /dev/sdc
GNU Parted 3.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart primary xfs 0% 100%
(parted) quit
Information: You may need to update /etc/fstab.
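
The same partitioning can also be done non-interactively with parted's -s option, which is convenient when repeating it on several nodes (adjust the device names to match your data disks):

# parted -s /dev/sdc mklabel gpt mkpart primary xfs 0% 100%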

3. For the journal, we are going to use a raw/unformatted volume, so we will not format it with XFS and we will not mark it as XFS in parted. However, a journal partition needs to be dedicated to each OSD, so we need to create a separate partition for each one. In a production environment, you can decide either to dedicate a disk (probably an SSD) to each journal or, as done here, to share the same SSD across multiple journal partitions. In both cases, the commands in parted for the journal disk are:

[root@ceph1 ceph-dash]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) mkpart primary 0% 20%
(parted) mkpart primary 21% 40%
(parted) mkpart primary 41% 60%
(parted) mkpart primary 61% 80%
(parted) mkpart primary 81% 100%
(parted) quit
Information: You may need to update /etc/fstab.

Verify the partition layout:

[root@ceph1 ceph-dash]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
 
Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
 
 
#         Start          End    Size  Type            Name
 1         2048     12582911      6G  Microsoft basic primary
 2     13211648     25165823    5.7G  Microsoft basic primary
 3     25794560     37748735    5.7G  Microsoft basic primary
 4     38377472     50331647    5.7G  Microsoft basic primary
 5     50960384     62912511    5.7G  Microsoft basic primary
 
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000171c4
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    62914559    30944256   8e  Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
 
Disk /dev/sdc: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
 
 
#         Start          End    Size  Type            Name
 1         2048     62912511     30G  Microsoft basic primary
 
Disk /dev/mapper/centos-root: 29.5 GB, 29490151424 bytes, 57597952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
 
Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

4. Format the data partition with XFS:

[root@ceph1 ceph-dash]# mkfs.xfs /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=4, agsize=1965952 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=7863808, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=3839, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

To see the final disk state:

[root@ceph1 ceph-dash]# lsblk -f
NAME            FSTYPE      LABEL           UUID                                   MOUNTPOINT
fd0
sda
|-sda1          xfs                         1df9ba03-1a46-459c-b7fb-6541f0066b8d   /boot
`-sda2          LVM2_member                 Xd2OL6-BC21-I9zZ-aUUU-HHDh-c49b-Wzig5C
  |-centos-root xfs                         57c87736-f469-4694-ba6c-8a7ce74bae84   /
  `-centos-swap swap                        6c587535-48ca-4fa1-ad27-c739510dce44   [SWAP]
sdb
|-sdb1
|-sdb2
|-sdb3
|-sdb4
`-sdb5
sdc
`-sdc1          xfs                         551a21ab-0fca-4fb3-aa70-b9ef227052cf
sr0             iso9660     CentOS 7 x86_64 2015-12-09-23-03-16-00

DO THE SAME ON ALL OF THE CEPH NODES

Configure Ceph Cluster from Admin Node

1. Prepare Object Storage Daemon

[root@ceph-admin ceph]# ceph-deploy osd prepare ceph1:/dev/sdc1 ceph2:/dev/sdc1 ceph3:/dev/sdc1
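
If you want to use the journal partitions created on /dev/sdb earlier, ceph-deploy also accepts a node:data:journal form for the prepare step, for example (a sketch; adjust the partition numbers to the journal layout on each node):

[root@ceph-admin ceph]# ceph-deploy osd prepare ceph1:/dev/sdc1:/dev/sdb1 ceph2:/dev/sdc1:/dev/sdb1 ceph3:/dev/sdc1:/dev/sdb1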

2. Activate Object Storage Daemon

[root@ceph-admin ceph]# ceph-deploy osd activate ceph1:/dev/sdc1 ceph2:/dev/sdc1 ceph3:/dev/sdc1

3. Transfer config files

[root@ceph-admin ceph]# ceph-deploy admin ceph1 ceph2 ceph3

4. On each of the three nodes, run this command:

# chmod 644 /etc/ceph/ceph.client.admin.keyring

5. Show the cluster health from the storage nodes:

[root@ceph3 ~]# ceph health
HEALTH_OK
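
For a more detailed view of the cluster state and the OSD layout, you can also run the following from any node that holds the admin keyring:

# ceph -s
# ceph osd tree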

Creating a Pool

Creating a pool requires a pool name, PG and PGP numbers, and a pool type, which is either replicated or erasure; the default is replicated.
1. Let's create the pools we need: volumes, images, backups, vms, and rbd. You can use the Ceph PG calculator to determine appropriate PG counts.

# ceph osd pool create volumes 128
# ceph osd pool create images 32
# ceph osd pool create backups 32
# ceph osd pool create vms 128
# ceph osd pool create rbd 512 512
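
To verify the pools and their usage, you can list them:

# ceph osd lspools
# ceph df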

To delete them, simply run these commands:

# ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
# ceph osd pool delete images images --yes-i-really-really-mean-it
# ceph osd pool delete backups backups --yes-i-really-really-mean-it
# ceph osd pool delete vms vms --yes-i-really-really-mean-it
# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

Configure Openstack Ceph Clients

Run these steps on the Controller nodes.

The nodes running glance-api, cinder-volume, nova-compute and cinder-backup act as Ceph clients. Each requires the ceph.conf file:

[ALL]# mkdir /etc/ceph
[ALL]# touch /etc/ceph/ceph.conf

INSTALL CEPH CLIENT PACKAGES

On the glance-api (Controller nodes) you’ll need the Python bindings for librbd:

# yum install python-rbd

On the nova-compute (Compute nodes), cinder-backup and cinder-volume nodes, install the package that provides both the Python bindings and the client command line tools:

# yum install ceph-common

SETUP CEPH CLIENT AUTHENTICATION

If you have cephx authentication enabled, create new users for Cinder, Glance and Cinder Backup. Execute the following:

[root@ceph3 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
        key = AQCoJw9Yq5Q7KRAAJkAkQTTqdAiOCnpBsiooVA==
 
[root@ceph3 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQDYJw9YSg/7IBAA4mNXp4y3DW4CDmXCVdQ19w==
 
[root@ceph3 ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[client.cinder-backup]
        key = AQDnJw9YlBHHAhAAhNcIQUjmc84PtmPReFe2lQ==

Add the keyrings for client.cinder, client.glance, and client.cinder-backup to the appropriate nodes and change their ownership:

[root@ceph3 ~]# ceph auth get-or-create client.glance | ssh 10.1.30.90 tee /etc/ceph/ceph.client.glance.keyring
root@10.1.30.90's password:
[client.glance]
        key = AQDYJw9YSg/7IBAA4mNXp4y3DW4CDmXCVdQ19w==
 
 
[root@ceph3 ~]# ceph auth get-or-create client.glance | ssh 10.1.30.91 tee /etc/ceph/ceph.client.glance.keyring
The authenticity of host '10.1.30.91 (10.1.30.91)' can't be established.
ECDSA key fingerprint is c5:fb:30:98:64:c6:c8:03:01:2f:1f:b7:ce:65:c7:8d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.30.91' (ECDSA) to the list of known hosts.
root@10.1.30.91's password:
[client.glance]
        key = AQDYJw9YSg/7IBAA4mNXp4y3DW4CDmXCVdQ19w==
 
[root@ceph3 ~]# ceph auth get-or-create client.glance | ssh 10.1.30.92 tee /etc/ceph/ceph.client.glance.keyring
The authenticity of host '10.1.30.92 (10.1.30.92)' can't be established.
ECDSA key fingerprint is c5:fb:30:98:64:c6:c8:03:01:2f:1f:b7:ce:65:c7:8d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.30.92' (ECDSA) to the list of known hosts.
root@10.1.30.92's password:
[client.glance]
        key = AQDYJw9YSg/7IBAA4mNXp4y3DW4CDmXCVdQ19w==

Log in to the controller servers and change the ownership of the Glance keyring file; then, from the Ceph node, copy the Cinder keyring to the nodes running cinder-volume:

[ALL][root@controller1 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@ceph3 ~]# ceph auth get-or-create client.cinder | ssh 10.1.30.101 tee /etc/ceph/ceph.client.cinder.keyring
[root@ceph3 ~]# ceph auth get-or-create client.cinder | ssh 10.1.30.102 tee /etc/ceph/ceph.client.cinder.keyring
[root@ceph3 ~]# ceph auth get-or-create client.cinder | ssh 10.1.30.103 tee /etc/ceph/ceph.client.cinder.keyring

Log in to the nodes running cinder-volume (the Ceph servers in this setup) and change the ownership of the Cinder keyring file:

# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

Nodes running nova-compute need the keyring file for the nova-compute process:

# ceph auth get-or-create client.cinder | ssh 10.1.30.73 tee /etc/ceph/ceph.client.cinder.keyring
# ceph auth get-or-create client.cinder | ssh 10.1.30.74 tee /etc/ceph/ceph.client.cinder.keyring
# ceph auth get-or-create client.cinder | ssh 10.1.30.75 tee /etc/ceph/ceph.client.cinder.keyring

They also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder.
Create a temporary copy of the secret key on the nodes running nova-compute:

# ceph auth get-key client.cinder | ssh 10.1.30.73 tee client.cinder.key
# ceph auth get-key client.cinder | ssh 10.1.30.74 tee client.cinder.key
# ceph auth get-key client.cinder | ssh 10.1.30.75 tee client.cinder.key

Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:

[root@compute1 ~]# uuidgen
ced05082-03b5-4d79-a10f-cebe81211690
 
[root@compute1 ~]# cat client.cinder.key
AQCTq4VYdPCGGxAAevjUPh8e5xom10r9qUZswA==
 
 
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>ced05082-03b5-4d79-a10f-cebe81211690</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
 
 
[root@compute1 ~]# virsh secret-define --file secret.xml
Secret ced05082-03b5-4d79-a10f-cebe81211690 created
 
[root@compute1 ~]# scp client.cinder.key secret.xml root@compute2:/root
[root@compute1 ~]# scp client.cinder.key secret.xml root@compute3:/root
 
[root@compute2 ~]# virsh secret-define --file secret.xml
Secret ced05082-03b5-4d79-a10f-cebe81211690 created
 
[root@compute3 ~]# virsh secret-define --file secret.xml
Secret ced05082-03b5-4d79-a10f-cebe81211690 created
 
 
 
 
[root@compute1 ~]# virsh secret-set-value --secret ced05082-03b5-4d79-a10f-cebe81211690 --base64 $(cat client.cinder.key) && rm -f client.cinder.key secret.xml
Secret value set
 
[root@compute2 ~]# virsh secret-set-value --secret ced05082-03b5-4d79-a10f-cebe81211690 --base64 $(cat client.cinder.key) && rm -f client.cinder.key secret.xml
Secret value set
 
[root@compute3 ~]# virsh secret-set-value --secret ced05082-03b5-4d79-a10f-cebe81211690 --base64 $(cat client.cinder.key) && rm -f client.cinder.key secret.xml
Secret value set

Save the uuid of the secret for configuring nova-compute later.
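
To double-check that libvirt stored the secret on each compute node, you can list the defined secrets and read the value back, for example:

[root@compute1 ~]# virsh secret-list
[root@compute1 ~]# virsh secret-get-value ced05082-03b5-4d79-a10f-cebe81211690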

CONFIGURE OPENSTACK TO USE CEPH

CONFIGURING GLANCE

Glance can use multiple back ends to store images. To use Ceph block devices by default, configure Glance as follows.
1. On the Controller nodes, edit /etc/glance/glance-api.conf and add the following under the [glance_store] section:

[glance_store]
filesystem_store_datadir = /var/lib/glance/images/
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

2. If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:

show_image_direct_url = True

Note that this exposes the back end location via Glance’s API, so the endpoint with this option enabled should not be publicly accessible.
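
Once glance-api has been restarted (see the restart section below), a simple way to verify the RBD back end is to upload a test image and check that it appears in the images pool. The image file and image name here are just examples; this assumes the openstack client and admin credentials are set up on the controller:

# From the Controller node
# openstack image create --disk-format qcow2 --container-format bare --file cirros-test.img cirros-rbd-test
 
# From one of the Ceph nodes
# rbd ls images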

CONFIGURE CINDER

OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your Controller node, edit /etc/cinder/cinder.conf by adding:

[ceph]
rbd_user = cinder
rbd_secret_uuid = ced05082-03b5-4d79-a10f-cebe81211690
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
 
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

IN SUMMARY, THE CINDER FILE ON THE CONTROLLER NODES SHOULD CONTAIN THE FOLLOWING
File: /etc/cinder/cinder.conf

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.1.30.92
enabled_backends =  rbd
glance_api_version = 2
 
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
 
 
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:xahchahQu6@vip-db/cinder
 
[keystone_authtoken]
auth_uri = http://vip-keystone:5000
auth_url = http://vip-keystone:35357
memcached_servers = 10.1.30.92:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = xahchahQu6
 
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
 
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = vip-rabbitmq
rabbit_userid = openstack
rabbit_password = rabb1tmqpass
 
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
 
[ceph]
rbd_user = cinder
rbd_secret_uuid = ced05082-03b5-4d79-a10f-cebe81211690
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2

FOR THE CEPH NODES, THE CINDER FILE SHOULD CONTAIN THE FOLLOWING

File: /etc/cinder/cinder.conf

[DEFAULT]
iscsi_helper = tgtadm
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.1.30.103
glance_api_servers = http://controller:9292
enabled_backends = ceph
 
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
 
[ceph]
rbd_user = cinder
rbd_secret_uuid = ced05082-03b5-4d79-a10f-cebe81211690
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
 
 
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:xahchahQu6@vip-db/cinder
[keystone_authtoken]
auth_uri = http://vip-keystone:5000
auth_url = http://vip-keystone:35357
memcached_servers = 10.1.30.92:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = xahchahQu6
 
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = vip-rabbitmq
rabbit_userid = openstack
rabbit_password = rabb1tmqpass
 
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]

CONFIGURING NOVA TO ATTACH CEPH RBD BLOCK DEVICE

In order to attach Cinder devices (either normal block or by issuing a boot from volume), you must tell Nova (and libvirt) which user and UUID to refer to when attaching the device. libvirt will refer to this user when connecting and authenticating with the Ceph cluster.
On the Compute servers, add the following to /etc/nova/nova.conf:

[libvirt]
rbd_secret_uuid = ced05082-03b5-4d79-a10f-cebe81211690
rbd_user = cinder

Now, on every compute node (in our setup we also used the controller), edit your Ceph configuration file; it should contain the following:

[root@controller1 ~]# cat /etc/ceph/ceph.conf
[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/qemu/qemu-guest-$pid.log
    rbd concurrent management ops = 20
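
If you keep the admin socket and log file settings above, the directories they point to must exist and be writable by the user the QEMU processes run as. A minimal sketch for the compute nodes (the qemu owner/group is an assumption; adjust it to match your libvirt/QEMU setup):

# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
# chown qemu:qemu /var/run/ceph/guests /var/log/qemu/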

RESTART OPENSTACK

To activate the Ceph block device driver and load the block device pool name into the configuration, you must restart the OpenStack services. Execute these commands on the appropriate nodes:

# From Controller Nodes
service openstack-glance-api restart
 
 
# From Compute Nodes
service openstack-nova-compute restart
 
# From Ceph Nodes
service openstack-cinder-volume restart
service openstack-cinder-backup restart
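
As a final end-to-end check, you can create a small Cinder volume and confirm it shows up as an RBD image in the volumes pool (the volume name is just an example):
 
# From a Controller node, with admin credentials loaded
# openstack volume create --size 1 ceph-test-volume
 
# From one of the Ceph nodes
# rbd ls volumes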
