Use the command module instead of shell since we do not do anything fancy
here. Remove the duplicate register.
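For illustration only (not the exact task touched here), the pattern looks like this:
```
# Hypothetical example: command is enough when no shell features
# (pipes, redirection, globbing) are needed.
- name: check ceph health
  command: ceph health
  register: ceph_health
  changed_when: false
```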
Signed-off-by: Sébastien Han <seb@redhat.com>
As raised in #466, checking that the Ceph ports are allowed on the platform
is important to avoid unnecessary troubleshooting.
The check runs an nmap command from the host running Ansible
against every Ceph node on its respective ports.
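A minimal sketch of such a check (the `mons` group name, the port, and the use of local_action are assumptions, not the exact task added here):
```
# Hedged sketch: probe the monitor port from the Ansible host for every
# monitor in the inventory and fail if nmap does not report it as open.
- name: check if monitor port is reachable
  local_action: shell nmap -p 6789 {{ item }} | grep -q "open"
  with_items: "{{ groups['mons'] }}"
  changed_when: false
```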
Signed-off-by: Sébastien Han <seb@redhat.com>
Thanks to @cloudnull's great patch at
https://github.com/ansible/ansible/pull/12555
we now have the ability to add more configuration options instead of
having to push a PR every time a new option needs to be added to the
template. Flags can now be added and removed dynamically.
To use it, edit `ceph_conf_overrides` in `group_vars/all` like so:
```
ceph_conf_overrides:
  global:
    foo: 12345
    bar: 6789
```
Signed-off-by: Sébastien Han <seb@redhat.com>
Because of a permission issue, likely due to the recently introduced ceph
user, if port 80 is used for civetweb we get:
set_ports_option: cannot bind to 80: 13 (Permission denied)
Change the port to 8080 until this gets solved.
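For illustration, the workaround boils down to a single variable change (the variable name `radosgw_civetweb_port` is used here for illustration and may differ from the role's actual name):
```
# group_vars/all -- illustrative variable name;
# 80 currently fails with "Permission denied" for the ceph user
radosgw_civetweb_port: 8080
```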
Signed-off-by: Sébastien Han <seb@redhat.com>
This setting should be used to disable health warnings about the number of
PGs being too low when some pools have very few objects, bringing down
the average number of objects per pool. This happens when running RadosGW.
Ceph's default is 10 and, since the warnings only occur with some use cases,
the default here is 10 as well. Set it to 20 or more to silence the warnings.
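As a hedged illustration only, assuming the option in question is Ceph's `mon_pg_warn_max_object_skew` (the text above does not name it), silencing the warning would look like:
```
# group_vars/all -- assumption: the option is mon_pg_warn_max_object_skew
# (Ceph default 10); raise it to stop the "too few PGs" style warnings.
mon_pg_warn_max_object_skew: 20
```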
Currently, the fetch directory is created in your working directory
(where Ansible is run from). We prefer not to keep any state in this
directory and would like the fetch directory to be configurable so
we can store it outside of our code checkout.
This commit creates a new variable in each role called
`fetch_directory` (defaulting to the previous value of 'fetch/'), and
then updates each reference to 'fetch' to use the new variable instead.
Closes issue #383
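For instance, one could set `fetch_directory: /var/lib/ceph-ansible/fetch/` in `group_vars/all` (path is only an example) and let every task go through the variable; the task below is illustrative, not lifted verbatim from the roles:
```
# Illustrative task: the destination goes through fetch_directory
# instead of a hard-coded 'fetch/' prefix.
- name: copy the admin keyring to the ansible server
  fetch:
    src: /etc/ceph/ceph.client.admin.keyring
    dest: "{{ fetch_directory }}/ceph.client.admin.keyring"
    flat: yes
```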
When multiple monitor hosts attempt to create the fetch directory there
is the potential for the task to fail with:
"OSError: [Errno 17] File exists: 'fetch'"
This appears to be an issue with the file module trying to create the
same directory at the same time when the task has been delegated to a
single host.
This commit enables run_once on the affected task which should address
the issue.
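A hedged sketch of the change (the exact task wording differs in the role):
```
# Create the local fetch directory only once, even though every monitor
# host delegates this task to the same machine.
- name: create a local fetch directory if it does not exist
  local_action: file path={{ fetch_directory }} state=directory
  run_once: true
```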
This is a rare case but it happens. Since we're just calling
`monitor_interface` and not `hostvars[host]['monitor_interface']`,
an error may occur when the current host's interface does not
exist on the other hosts (e.g. eth0 exists on node0, but it does
not exist on node1 and node2).
The fix is to use `hostvars[host]['monitor_interface']`.
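A minimal sketch of the corrected template loop (the `mons` group name is an assumption; the real template does more with the value):
```
{% for host in groups['mons'] %}
{# before: {{ monitor_interface }} -- always resolved on the current host #}
{# after: look the interface up on the host being iterated #}
{{ hostvars[host]['monitor_interface'] }}
{% endfor %}
```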
Fix the rolling update playbook again.
However, every single time the playbook runs it will check for new
packages and install the latest ones. I don't think this is always the
desired behaviour. We need to find a way to reconcile both...
Signed-off-by: Sébastien Han <seb@redhat.com>
Fix the logic for the mandatory devices check so that it applies to the
raw_multi_journal and journal_collocation scenarios separately.
It fails otherwise because whichever var comes "first" in the `or` is most
likely undefined.
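A hedged sketch of the split check (variable names beyond the two scenario flags are assumptions):
```
# Check each scenario on its own so the 'when' clause never starts with a
# variable that is undefined for the other scenario.
- name: make sure devices are provided for the collocated journal scenario
  fail:
    msg: "please provide devices to your osd scenario"
  when: journal_collocation and devices is not defined

- name: make sure journal devices are provided for the raw multi journal scenario
  fail:
    msg: "please provide raw journal devices to your osd scenario"
  when: raw_multi_journal and raw_journal_devices is not defined
```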
I'm currently getting a KeyError due to missing 'dependencies' on this
role when I attempt to install it with ansible-galaxy (ansible 1.9.2).
This commit simply defines an empty dependencies list so that
ansible-galaxy executes correctly.
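The fix is essentially a one-liner in each role's meta file; sketch below:
```
# roles/<role>/meta/main.yml -- an explicit empty list keeps ansible-galaxy
# from raising a KeyError on 'dependencies'
dependencies: []
```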
Cool stuff :). We don't need to specify an initial monitor key anymore.
A key will automatically be generated.
The default key can always be overridden with the `monitor_secret`
variable.
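If you do want to pin the key yourself, a hedged example of the override:
```
# group_vars/all -- optional: pin the initial monitor key
# (placeholder value; generate a real one with `ceph-authtool --gen-print-key`)
monitor_secret: "<base64 key generated with ceph-authtool --gen-print-key>"
```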
Signed-off-by: leseb <seb@redhat.com>
We don't always have a dedicated cluster network, so by default we can
re-use the public network value.
This is just laziness :).
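In practice the default amounts to this (the subnet value is only an example):
```
# group_vars/all -- fall back to the public network when no dedicated
# cluster network exists
public_network: 192.168.42.0/24
cluster_network: "{{ public_network }}"
```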
Signed-off-by: leseb <seb@redhat.com>
While re-running the playbook we do not want to check for new packages.
We shouldn't perform upgrades; we leave this to the operators.
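The change boils down to installing without chasing the latest version; an illustrative task (package name and module aside):
```
# state=present installs once and leaves upgrades to the operators,
# unlike state=latest which upgrades on every run.
- name: install ceph
  yum:
    name: ceph
    state: present
```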
Signed-off-by: leseb <seb@redhat.com>
Feel so bad about this one...
Now that it's fixed, the rgw section will be activated once rgws hosts
are part of the inventory.
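A hedged sketch of what the activation looks like in the ceph.conf template (the group name, section name, and `radosgw_civetweb_port` variable are assumptions):
```
{# Only emit the radosgw section when rgw hosts exist in the inventory #}
{% if groups['rgws'] is defined and groups['rgws'] | length > 0 %}
[client.radosgw.gateway]
rgw frontends = civetweb port={{ radosgw_civetweb_port }}
{% endif %}
```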
Signed-off-by: leseb <seb@redhat.com>
Even though the subscription command is idempotent, it takes around 15-16
seconds to complete, whereas the simple yum check brings this down to
3 seconds.
Signed-off-by: leseb <seb@redhat.com>
Since the command is idempotent we don't need to check whether the repo is
enabled, as doing so would likely take twice the time.
Signed-off-by: leseb <seb@redhat.com>
We want to force the user to only enable the options they need. Thus
they shouldn't have to enable one option and then disable another.
Signed-off-by: leseb <seb@redhat.com>
Now we don't need to activate the services through a variable. If the
role is activated in the inventory, actions will occur automatically.
This also fixes the repo creation for Red Hat Storage.
Signed-off-by: leseb <seb@redhat.com>
The new product version has just come out. ICE doesn't exist anymore and
Red Hat Storage is the name of the new product.
Signed-off-by: leseb <seb@redhat.com>