* Move the common container run parameters into a variable to avoid duplication (sketched below)
* The /var/lib/ceph/crash mount was missing after 637ca81c9c
* Add CEPH_USE_RANDOM_NONCE as it's needed when running inside a container (can be removed for Squid later)
* Add NODE_NAME as some parts of the Ceph code rely on this variable
* Add default logging opts
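A minimal sketch of what this could look like, assuming a vars-file style variable (the variable name and exact values are illustrative, not the real ceph-ansible ones):

    # Shared container run parameters kept in one place and reused by the unit file
    ceph_crash_container_params:
      - "-v /var/lib/ceph/crash:/var/lib/ceph/crash"    # the mount missing after 637ca81c9c
      - "-e CEPH_USE_RANDOM_NONCE=true"                 # needed when running inside a container
      - "-e NODE_NAME={{ ansible_facts['hostname'] }}"  # some Ceph code relies on this variable
      - "--log-driver journald"                         # default logging opts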
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
The current approach is extremely complex and introduced a lot
of spaghetti code. This doesn't offer a good user experience at all.
It's time to think about another approach (a dedicated playbook) and drop
the current implementation in order to clean up the code.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
This is needed in order to install `ceph-mgr-dashboard`
as it has a dependency on `python3-grpcio-tools` which comes from the
CRB repo.
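A minimal sketch of how the repo could be enabled, assuming a plain dnf config-manager call (task name and placement are illustrative):

    - name: enable the CRB repository on CentOS Stream 9
      ansible.builtin.command: dnf config-manager --set-enabled crb
      changed_when: false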
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
Keep the ceph.conf very simple.
Manage common options such as `public_network` with the `ceph_config`
module.
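A minimal sketch of the intent, assuming the `ceph_config` module takes action/who/option/value parameters (check library/ceph_config.py for the real spec):

    - name: set public_network through the centralized config store
      ceph_config:
        action: set
        who: global
        option: public_network
        value: "{{ public_network }}"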
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
This refactor makes the 'name' argument optional because when
'state' is 'info' we shouldn't need to pass it.
The second change is just a duplicate code removal.
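A minimal sketch of the resulting usage, assuming the module in question is `ceph_crush_rule` (module and parameter names may differ):

    - name: gather info on all crush rules without passing a name
      ceph_crush_rule:
        state: info
      register: crush_rules_info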
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
This adds the required changes in order to support
CentOS Stream 9.
Also, this bumps the supported Ansible version to 2.15.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
With ansible-core 2.15 it is no longer possible to pass an argument of an
unexpected type, otherwise the module fails with:
`'None' is not a string and conversion is not allowed`
Here we only want to get all existing crush rules, so we can simply
supply an empty string as the name argument, which satisfies the
requirement and keeps the same behaviour with previous Ansible versions.
An alternative approach would be to stop making `name` a required
argument of the module and use an empty string as the default value
when the info state is used.
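A minimal sketch of the workaround described above, again assuming the `ceph_crush_rule` module name:

    - name: get all existing crush rules
      ceph_crush_rule:
        name: ""
        state: info
      register: crush_rules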
Signed-off-by: Dmitriy Rabotyagov <noonedeadpunk@gmail.com>
There were multiple rgw frontend entries even though there was just one
rgw instance on each host. The extra entries were the details from
the other rgw hosts in the cluster.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2232282
Signed-off-by: Teoman ONAY <tonay@ibm.com>
When deploying with --skip-tags=package-install (when there is no access to a repository), the playbook still tries to update the package cache or install the ceph-mgr packages, which makes the playbook fail.
This change prevents the playbook from trying to update the cache or install the ceph-mgr packages when the package-install tag is skipped.
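A minimal sketch of the idea, assuming the affected tasks simply carry the package-install tag so that --skip-tags=package-install also skips them (task and variable names are illustrative):

    - name: install ceph-mgr packages
      ansible.builtin.package:
        name: "{{ ceph_mgr_packages }}"
        state: present
      tags: ['package-install']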
Signed-off-by: Florent CARLI <florent.carli@rte-france.com>
Let's use quay.io/ceph/daemon-base in every test instead of
`ceph/daemon` since the latter is not supposed to be built anymore soon.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
As it is always set in the ceph.conf template, this leads to duplicated osd_memory_target keys in the rendered ceph.conf when one is also defined in ceph_conf_overrides.
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
The filestore objectstore will be gone in the next Ceph release.
This drops filestore support from ceph-ansible.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Exclude disks defined in lvm_volumes from the existing OSDs count, since they have already been counted by the "count number of osds for lvm scenario" task.
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
* Exclude devices present in lvm_volumes when osd_auto_discovery is true
* Sum num_osds over both the lvm_volumes and devices lists (sketched below)
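A minimal sketch of the counting logic, assuming the standard ceph-ansible variable names (the real task is more involved):

    - name: count number of osds for lvm scenario
      ansible.builtin.set_fact:
        num_osds: "{{ (lvm_volumes | default([]) | length) + (devices | default([]) | length) }}"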
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
Enabling installation of the admin key to mgr nodes by setting
"copy_admin_key: true" is broken. This is because the variable is not
referenced correctly (using inline Jinja2 templating).
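A minimal sketch of the kind of fix meant here, assuming the condition should evaluate the boolean directly instead of going through inline Jinja2 templating (the copied paths are illustrative):

    - name: copy the admin keyring to the mgr node
      ansible.builtin.copy:
        src: "/etc/ceph/{{ cluster }}.client.admin.keyring"
        dest: /etc/ceph/
        mode: "0600"
      when: copy_admin_key | bool   # not: when: "{{ copy_admin_key }}"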
Signed-off-by: René Højbjerg Larsen <rhl@jfm.dk>
Add --security-opt label=disable to all containers
accessing /var/lib/ceph. Podman's SELinux relabeling behaviour changed
since version podman-3:4.2.0-1, which prevents some containers from
accessing files in these subdirectories.
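A minimal sketch, assuming the option is appended to the common container run arguments used by daemons that mount /var/lib/ceph (variable name illustrative):

    ceph_common_container_args:
      - "--security-opt label=disable"
      - "-v /var/lib/ceph:/var/lib/ceph"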
Signed-off-by: Teoman ONAY <tonay@ibm.com>
If a disk is referenced via a symlink, it gets added to the devices list twice: once with the resolved path and once with the path as defined.
We can rebuild the list from the readlink output because readlink always returns the canonical path for all disks.
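A minimal sketch of rebuilding the list from the readlink output (register and variable names are illustrative):

    - name: resolve the real path of each device
      ansible.builtin.command: "readlink -f {{ item }}"
      loop: "{{ devices }}"
      register: resolved_devices
      changed_when: false

    - name: rebuild the devices list from the resolved paths
      ansible.builtin.set_fact:
        devices: "{{ resolved_devices.results | map(attribute='stdout') | unique | list }}"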
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
Exclude disks that are defined in dedicated_devices and bluestore_wal_devices when osd_auto_discovery is enabled.
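A minimal sketch of the exclusion, assuming the standard ceph-ansible variable names (the actual task may differ):

    - name: exclude dedicated and wal devices from auto discovery
      ansible.builtin.set_fact:
        devices: "{{ devices | difference(dedicated_devices | default([]) + bluestore_wal_devices | default([])) }}"
      when: osd_auto_discovery | bool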
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
With podman version podman-3:4.2.0-4.module+el8.7.0+17064+3b31f55c and
later, the mgr fails to start if the mon is already running.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2169767
Signed-off-by: Teoman ONAY <tonay@ibm.com>
We need to make sure `rgw_instances` is set before `ceph.conf` is
rendered. Otherwise, the `ceph-crash` play in the main playbook updates
(via ceph-handler) the `ceph.conf` on rgw nodes and removes the rgw
instance sections.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2141604
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Changed the when condition so that the fact setting is only executed on RGW
nodes; before, it ran on all nodes and failed if the node
was not in the same network range as the RGW.
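A minimal sketch of the restriction, assuming it is expressed through rgw group membership (the fact and task names are illustrative):

    - name: set rgw related facts
      ansible.builtin.set_fact:
        _rgw_hostname: "{{ ansible_facts['hostname'] }}"
      when: inventory_hostname in groups.get(rgw_group_name, [])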
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2131150
Signed-off-by: Teoman ONAY <tonay@redhat.com>
When the following conditions are met:
- rgw is deployed,
- dashboard is deployed,
- playbook is called with --limit,
- a node being processed is collocated with either a mon or a mgr.
The playbook fails because `rgw_instances` is undefined.
The idea here is to make sure this variable is always defined.
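A minimal sketch of keeping the variable always defined (the real change likely lives where the rgw facts are gathered):

    - name: ensure rgw_instances is defined
      ansible.builtin.set_fact:
        rgw_instances: "{{ rgw_instances | default([]) }}"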
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063029
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When these variables are defined in the inventory host file,
all tasks are skipped because the node being played isn't
aware of the values from the rgw nodes.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063029
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>