Those default values are useless and might cause issues.
- `osd_scenario` should be mandatory anyway.
- `pool_default_size` is not used anymore (this has been refactored
recently).
Closes: #3468
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The OSD nodes' disks remain on the vagrant host after running "vagrant destroy",
because we use the time as part of the disk path, and the time at destroy does not
equal the time at creation.
We already use random_hostname in the libvirt backend; it creates disks with the
hostname as part of the disk name, for example:
vagrant_osd2_1539159988_065f15e3e1fa6ceb0770-hda.qcow2.
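A minimal sketch of the relevant setting, assuming vagrant-libvirt's `random_hostname` option:

```ruby
# Vagrantfile (sketch): let vagrant-libvirt generate a random hostname per
# domain so the hostname, not a timestamp, ends up in the disk names and
# `vagrant destroy` can find and remove them again.
Vagrant.configure('2') do |config|
  config.vm.provider :libvirt do |lv|
    lv.random_hostname = true
  end
end
```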
Signed-off-by: binhong.hua <binhong.hua@gmail.com>
In case of an OpenStack "box", the Vagrantfile intends to check the
existence of os_networks and os_floating_ip_pool settings in
vagrant_variables.yml and pass them to the provider if they are set.
Due to two typos in the Vagrantfile this is not working as it checks the
wrong variable names.
This commit fixes the typos so these settings can be used.
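For reference, a sketch of the intended check, assuming the vagrant-openstack-provider options `networks` and `floating_ip_pool`:

```ruby
require 'yaml'

# Vagrantfile (sketch): only pass the OpenStack settings to the provider
# when the matching keys exist in vagrant_variables.yml.
settings = YAML.load_file('vagrant_variables.yml')

Vagrant.configure('2') do |config|
  config.vm.provider :openstack do |os|
    os.networks         = settings['os_networks'] if settings['os_networks']
    os.floating_ip_pool = settings['os_floating_ip_pool'] if settings['os_floating_ip_pool']
  end
end
```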
Signed-off-by: Norbert Illés <illesnorbi@gmail.com>
Let's try to avoid using dashes as testinfra needs to be able to read
the groups.
Typically, with iscsi-gws we can't add a marker for these iscsi nodes;
using an underscore fixes the issue.
Signed-off-by: Sébastien Han <seb@redhat.com>
Set volume_cache to unsafe for CI VMs.
We might be using tmpfs for volume disks soon, and setting volume_cache to
'unsafe' is a prerequisite for that.
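A sketch of the setting, assuming vagrant-libvirt's `volume_cache` option:

```ruby
# Vagrantfile (sketch): skip host-side cache flushes for the CI disks.
# Fine for throwaway VMs backed by tmpfs, not for data you care about.
Vagrant.configure('2') do |config|
  config.vm.provider :libvirt do |lv|
    lv.volume_cache = 'unsafe'
  end
end
```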
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Resolves issue: Multiple RGW Ceph.conf Issue #1258
In a multi-RGW setup, the RGW sections in ceph.conf all
contain the same bind IP on the civetweb line. This
modification fixes that issue and puts the right IP
in place for each RGW.
Signed-off-by: SirishaGuduru <SGuduru@walmartlabs.com>
Modified ceph-defaults and ran generate_group_vars_sample.sh.
group_vars/osds.yml.sample and group_vars/rhcs.yml.sample are
not part of the changes, but they get modified when
generate_group_vars_sample.sh is run to generate
group_vars/all.yml.sample.
Uncommented the added variables in ceph-defaults.
Updated tests by adding a value for radosgw_interface.
Added radosgw_interface to the centos cluster tests.
Modified the ceph-rgw role, rebased, and ran generate_group_vars_sample.sh.
In the ceph-rgw role, removed check_mandatory_vars.yml.
Rebased on master.
Ran generate_group_vars_sample.sh and then the below files got
modified.
There are only two main scenarios now:
* collocated: everything remains on the same device:
- data, db, wal for bluestore
- data and journal for filestore
* non-collocated: a dedicated device for some of the components
Signed-off-by: Sébastien Han <seb@redhat.com>
Remove `ceph_mon_docker_interface` and use `monitor_interface` instead
for both containerized and non-containerized deployments.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`ceph-docker-common`:
At the moment there are a lot of duplicated tasks in each
`./roles/ceph-<role>/tasks/docker/main.yml` that could be refactored in
`./roles/ceph-docker-common/tasks/main.yml`.
`*_containerized_deployment` variables:
All `*_containerized_deployment` have been refactored to a single
variable `containerized_deployment`
Duplicate `cephx` variables in `group_vars/*` have been removed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to
provide additional monitoring and interfaces to external monitoring and
management systems.
Only works as of the Kraken release.
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
If debug is set to true in vagrant_variables.yml then, during the vagrant
provision phase, Ansible will run with the -vvvv option.
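A sketch of one way to wire this up (the `debug` key and playbook name are assumptions):

```ruby
require 'yaml'

# Vagrantfile (sketch): bump Ansible verbosity when debug is enabled
# in vagrant_variables.yml.
settings = YAML.load_file('vagrant_variables.yml')

Vagrant.configure('2') do |config|
  config.vm.provision :ansible do |ansible|
    ansible.playbook = 'site.yml'
    ansible.verbose  = '-vvvv' if settings['debug']
  end
end
```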
Signed-off-by: Sébastien Han <seb@redhat.com>
This variable is already defined as a global default in the OSD role and
was not being kept in sync, as we now require the '-e' parameter to be
prefixed to each variable. It was also missing the CLUSTER environment
variable that is defined in the global default version of
ceph_osd_docker_extra_env.
This will allow each testing scenario to have unique names
for its disks so there will not be conflicts when running tests
in parallel.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This makes our libvirt boxes come up with the OS on /dev/vda and
three devices added at /dev/sd{a,b,c} so that we can ensure that
the OSD devices we want to use are always available on
both virtualbox and libvirt, for both xenial and centos7.
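One possible way to express this with vagrant-libvirt's storage options (sizes and device names here are illustrative):

```ruby
# Vagrantfile (sketch): the box's OS disk stays on the default virtio bus
# (/dev/vda); three extra disks are attached on a SCSI bus so they appear
# as /dev/sda, /dev/sdb and /dev/sdc inside the guest.
Vagrant.configure('2') do |config|
  config.vm.provider :libvirt do |lv|
    ('a'..'c').each do |letter|
      lv.storage :file, :device => "sd#{letter}", :bus => 'scsi', :size => '11G'
    end
  end
end
```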
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This was a holdout from the Linode merge that shouldn't have been
included. The right way to set the installation source is through
group_vars.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
- All Ceph instances now communicate over the public subnet and,
additionally, OSDs communicate with each other over the private
cluster subnet (sketched below)
- Workaround for
https://github.com/vagrant-libvirt/vagrant-libvirt/issues/645
- Fix for #952 to avoid concatenated MAC addresses caused by a
vagrant-libvirt bug.
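A sketch of the two-subnet layout (subnet values, node names and counts are placeholders):

```ruby
# Vagrantfile (sketch): every node gets an address on the public subnet;
# OSD nodes get a second NIC on the private cluster subnet.
PUBLIC_SUBNET  = '192.168.42'   # placeholder
CLUSTER_SUBNET = '192.168.43'   # placeholder

Vagrant.configure('2') do |config|
  (0..2).each do |i|
    config.vm.define "osd#{i}" do |osd|
      osd.vm.network :private_network, ip: "#{PUBLIC_SUBNET}.#{100 + i}"
      osd.vm.network :private_network, ip: "#{CLUSTER_SUBNET}.#{100 + i}"
    end
  end
end
```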
This enables running multiple clusters concurrently in the same Linode
account. Linode does not allow machines to have the same label.
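A sketch of the idea, assuming the vagrant-linode provider exposes a `label` option and a per-cluster prefix (here `label_prefix`) is read from vagrant_variables.yml:

```ruby
require 'yaml'

# Vagrantfile (sketch): prefix each Linode label with a per-cluster string
# so concurrent clusters in the same account never collide.
settings = YAML.load_file('vagrant_variables.yml')

Vagrant.configure('2') do |config|
  config.vm.define 'mon0' do |mon|
    mon.vm.provider :linode do |provider|
      provider.label = "#{settings['label_prefix']}-mon0"
    end
  end
end
```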
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Other things of note:
o You can now set the ceph branch to test against in
vagrant_variables.yml.
o You can now set the ceph_conf_overrides in vagrant_variables.yml.
This commit depends on an open PR:
https://github.com/displague/vagrant-linode/pull/66
Until that is merged, you must copy the changed file to your copy
of the vagrant-linode plugin, e.g.:
cp lib/vagrant-linode/actions/create.rb ~/.vagrant.d/gems/gems/vagrant-linode-0.2.7/lib/vagrant-linode/actions/create.rb
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Use the activation scenario instead of the full ceph_disk one; we
already have a task to prepare the OSDs, so we just need to activate
the device.
Working for me using vagrant :)
Signed-off-by: Sébastien Han <seb@redhat.com>
There is no need to run the actions from
roles/ceph-mon/tasks/docker/create_configs.yml
on the first monitor only since the monitor deployment happens
**serially**.
Moreover, with Vagrant it is useful to allow auto-creation of the
cluster fsid, so the option is enabled. If this is not desired you can
still set `fsid: 9c9c0448-0551-401d-b55b-e5b3a42bae42`, for example.
Signed-off-by: Sébastien Han <seb@redhat.com>
Ceph has the ability to export its filesystem via NFS using Ganesha.
Add a ceph-nfs role that will start Ganesha and export the Ceph
filesystems.
Note that, although support is going in to export RGW via NFS, this is
not working yet.
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>