Commit Graph

2213 Commits (6b5487d1e56f48dbba7305079465c32c8231663e)

Author SHA1 Message Date
Dimitri Savineau a089e1ec23 systemd/service: Set docker.service conditionally
We don't need to set After=docker.service when the container_binary
variable isn't set to docker.
It doesn't break anything currently but it could be confusing when
using podman.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-07 20:56:11 +00:00
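As an illustration of the conditional described above, a systemd unit template could guard the directive with Jinja roughly like this (a sketch; the unit name and surrounding directives are placeholders, not the actual ceph-ansible template):
```
[Unit]
Description=Ceph daemon (example unit)
{% if container_binary == 'docker' %}
After=docker.service
{% endif %}
```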
Dimitri Savineau d6e71d769c common: Use rhsm_repository module for RHCS
Instead of using subscription-manager with the command module we can use
the rhsm_repository ansible module.
This module already uses the repository list to determine whether a
repository is enabled or not. That way the module is idempotent, so
we don't need changed_when: false anymore.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-07 19:15:42 +00:00
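A minimal sketch of such a task (the repository id shown is a placeholder, not an actual RHCS repository name):
```
- name: enable the rhcs monitor repository
  rhsm_repository:
    name: rhel-7-server-rhceph-3-mon-rpms
    state: enabled
```
Because the module is idempotent, no changed_when override is needed, which is the point of this change.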
Dimitri Savineau 53514a5b50 common: Add noarch to community repository
The ceph stable community repository only enables the basearch
packages URL.
Add the noarch URL because, starting with the nautilus release, some
packages useful for mgr or grafana are published there.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-06 00:25:11 +00:00
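For example, a noarch repository could be added alongside the basearch one with a yum_repository task like the following (a sketch; the URL layout and variable names are illustrative):
```
- name: add ceph stable noarch repository
  yum_repository:
    name: ceph_stable_noarch
    description: Ceph Stable noarch repo
    baseurl: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}/el{{ ansible_distribution_major_version }}/noarch"
    gpgcheck: true
    gpgkey: "{{ ceph_stable_key }}"
    state: present
```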
Dimitri Savineau 4d32ecc980 Force osd pool min_size value to integer
After b8d580b and e9e5d5a we could have either item.min_size or
osd_pool_default_min_size set as a string instead of an int, causing the
condition to evaluate to true when it should be false.
As a result, the task could try to set the pool min_size value to
0, which leads to:

Error EINVAL: pool min_size must be between 1 and 1

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-05 19:48:09 +00:00
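The coercion boils down to piping both values through the int filter before comparing them, roughly as below (a sketch; the pool list variable and the exact condition differ in the real task):
```
- name: customize pool min_size
  command: "ceph osd pool set {{ item.name }} min_size {{ item.min_size | default(osd_pool_default_min_size) }}"
  with_items: "{{ pools | default([]) }}"
  # cast to int so a quoted "0" cannot satisfy the condition as a string
  when: (item.min_size | default(osd_pool_default_min_size) | int) > 0
```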
Dimitri Savineau cb381b41fe Add CONTAINER_IMAGE env var to ceph daemons
Ceph daemons will set the CONTAINER_IMAGE environment variable value
in the daemon metadata.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-05 15:07:05 +00:00
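In a containerized deployment this typically means passing the image reference into the container environment, roughly as in this hypothetical excerpt of a systemd unit template (not the actual ceph-ansible template):
```
ExecStart=/usr/bin/{{ container_binary }} run --rm --net=host \
  -e CONTAINER_IMAGE={{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} \
  {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
```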
Guillaume Abrioux e9e5d5a39a fix pool min_size customization
b8d580b3f4 introduced a bug when
`min_size` isn't set (it defaults to 0).

Typical error:

```
Error EINVAL: pool min_size must be between 1 and 1
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-05 13:29:34 +00:00
Radu Toader b8d580b3f4 Customize pools min_size
Signed-off-by: Radu Toader <radu.m.toader@gmail.com>
2019-03-05 10:57:15 +00:00
Radu Toader 2048255f61 When creating a pool, read pool.application and call ceph osd pool application enable
Signed-off-by: Radu Toader <radu.m.toader@gmail.com>
2019-03-05 09:16:03 +00:00
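A sketch of what such a task could look like (the pool list structure is assumed; the real task may differ):
```
- name: assign application to pool(s)
  command: "ceph --cluster {{ cluster }} osd pool application enable {{ item.name }} {{ item.application }}"
  with_items: "{{ pools | default([]) }}"
  when: item.application is defined
```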
Kevin Coakley b11dc13476 Fixed 7 ansible-lint issues in the ceph-mon, ceph-osd, and ceph-rgw roles
The following lint issues have been resolved:

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:2

[305] Use shell only when shell functionality is required
/home/travis/build/ceph/ceph-ansible/roles/ceph-osd/tasks/start_osds.yml:47

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:2

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:7

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:14

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:19

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:24

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-03-04 22:25:35 +00:00
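Both rules are usually addressed with the same small pattern: prefer the command module when no shell features are needed, and mark read-only commands with changed_when: false. A generic before/after sketch (task name and command are illustrative):
```
# before: flagged by [305] (shell used without shell features) and [301]
- name: check ceph version
  shell: ceph --version

# after
- name: check ceph version
  command: ceph --version
  register: ceph_version
  changed_when: false
```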
Guillaume Abrioux 359f8a9a4a nfs: fix systemd template service for ubuntu
`mkdir` is located in `/bin` on Ubuntu.
Let's use some jinja to support Ubuntu.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-04 19:54:25 +00:00
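The Jinja switch can be as small as picking the binary path per distribution, e.g. (a sketch, not the actual template line):
```
ExecStartPre={% if ansible_distribution == 'Ubuntu' %}/bin/mkdir{% else %}/usr/bin/mkdir{% endif %} -p /var/run/ceph
```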
Dimitri Savineau 45a7082712 lint: Fix spaces before and after variables
ansible-lint reports:

[206] Variables should have spaces after {{ and before }}

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-01 17:22:24 +00:00
VasishtaShastry 34c25ef49b Extends check_devices tasks to non-collocated and lvm-batch scenarios
Tuned the name of a task and its error message to make them easier for users to understand

Fixes BZ 1648168 - ceph-validate : devices are not validated in non-collocated and lvm_batch scenario

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1648168

Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
2019-03-01 02:13:51 +00:00
Kevin Coakley 038401fef2 Add changed_when: false to the "get osd ids" statement
The "get osd ids" statement only registers the osd_ids_non_container variable. Running "ls /var/lib/ceph/osd/ | sed 's/.*-//'" should never produce a change on the system. Adding changed_when: false prevents irrelevant change messages from Ansible.

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-02-28 22:46:19 +00:00
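The resulting task looks roughly like this (reconstructed from the commit message above):
```
- name: get osd ids
  shell: ls /var/lib/ceph/osd/ | sed 's/.*-//'
  register: osd_ids_non_container
  changed_when: false
```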
ToprHarley 573adce7dd Convert interface names to underscores
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1540881

Signed-off-by: Tomas Petr <tpetr@redhat.com>
2019-02-28 17:07:34 +00:00
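Ansible exposes per-interface facts with dashes converted to underscores, so an interface name from the inventory has to be converted before the fact lookup, roughly as below (a sketch; the fact and variable names are assumptions):
```
- name: look up the address of the configured interface
  set_fact:
    _interface_addr: "{{ hostvars[inventory_hostname]['ansible_' + (monitor_interface | replace('-', '_'))]['ipv4']['address'] }}"
```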
Guillaume Abrioux d5be83e504 osd: add ipc=host in systemd template for containers
in addition to 15812970f0

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-28 13:14:09 +00:00
fpantano 21fad7ced3 Removed unneeded mountpoint and removed ubuntu section
Referring to BZ#1683290, as dsavineau suggested: since this bug is
tripleO specific, removed the ubuntu section and the useless
mountpoints.

Signed-off-by: fpantano <fpantano@redhat.com>
2019-02-28 09:46:10 +00:00
fpantano 0c1944236b Added the ca-trust volume to the ceph-radosgw service template
avoiding the exposure of unnecessary information.
The bug is tracked in the following bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=1683290

Signed-off-by: fpantano <fpantano@redhat.com>
2019-02-28 09:46:10 +00:00
Dimitri Savineau 58a9d310d5 mon: Move client admin variable to defaults
There's no need to set the client_admin_ceph_authtool_cap variable
via a set_fact task.
Instead we can set this in the role defaults.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-27 18:39:39 +00:00
Dimitri Savineau dd7b7604de mon: Add mds permissions to client.admin
The administrator keyring needs full capabilities on mds, just like on
mon, osd and mgr.
Without this, the client.admin key won't be able to run commands
against mds (like ceph tell mds.0 session ls).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1672878

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-27 18:39:39 +00:00
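Together with the previous entry, the default could end up looking roughly like this (a sketch of the client_admin_ceph_authtool_cap role default with the new mds capability; the exact defaults file may differ):
```
client_admin_ceph_authtool_cap:
  mon: allow *
  osd: allow *
  mgr: allow *
  mds: allow *
```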
Guillaume Abrioux 4ab02d2cd1 tests: set ceph_origin and ceph_repository for non_container-collocation
those variables are mandatory.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-27 15:58:35 +00:00
Guillaume Abrioux f68ad10bc9 mon: do not create unnecessary mgr keyrings
there's no need to generate 'mgr.monX' keyrings when mgrs aren't
collocated with monitors.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-27 15:58:35 +00:00
Kevin Coakley d327681b99 Set permissions on monitor directory to u=rwX,g=rX,o=rX recursive
Set directories to 755 and files to 644 in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} recursively, instead of setting both files and directories to 755 recursively. The ceph mon process writes files to this path with permissions 644. This update stops ansible from updating the permissions in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} every time ceph mon writes a file, and increases idempotency.

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-02-27 10:48:19 +00:00
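With the file module this translates to something like the following (a sketch; owner/group are assumptions):
```
- name: set mode on the monitor directory
  file:
    path: "/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}"
    state: directory
    owner: ceph
    group: ceph
    mode: "u=rwX,g=rX,o=rX"
    recurse: true
```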
Dimitri Savineau dc1c0dcee2 ceph-osd: Drop memory flag with bluestore
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-26 07:27:06 +00:00
Guillaume Abrioux 8f42007272 facts: fix auto_discovery exclude
the previous approach was wrong.
checking whether `item.key` is in `osd_auto_discovery_exclude` (`['dm-',
'loop']`) is incorrect because it will obviously never match. Therefore,
the condition will return `True` whatever device we are checking.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-26 03:16:33 +00:00
Guillaume Abrioux 83d7ef777e osd: add possibility to exclude device in osd_auto_discovery
Add a new `osd_auto_discovery_exclude` to give the possibility of
excluding some devices in the auto_discovery scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-25 10:05:34 +00:00
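A sketch of how the variable could be set and consumed; the default value and the condition are illustrative, and as the fix above notes, a pattern match is what works here rather than a list membership test:
```
# group_vars (illustrative values)
osd_auto_discovery: true
osd_auto_discovery_exclude: "dm-|loop|md"

# consumed in the device discovery loop, roughly:
#   when: item.key is not match(osd_auto_discovery_exclude)
```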
Dimitri Savineau b7338d438a ceph-infra: Remove restart firewalld handler
There's no need to restart the firewalld service when a new rule is
added, because rules are applied with the immediate flag.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-22 18:15:47 +00:00
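The immediate flag applies the rule to the running firewall at the same time it is made permanent, so no restart or reload handler is required, e.g. (a sketch; zone and service name are assumptions):
```
- name: open monitor ports
  firewalld:
    service: ceph-mon
    zone: public
    permanent: true
    immediate: true
    state: enabled
```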
Guillaume Abrioux 2b60a35634 common: do not override ceph_release when ceph_repository is 'rhcs'
We shouldn't reset `ceph_release` with `ceph_stable_release` when
`ceph_repository` is `rhcs`

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-21 11:58:49 +01:00
Guillaume Abrioux 21e5db8982 osd: make the 'wait for all osd to be up' task configurable
introduce two new variables to make the 'wait for all osd to be up'
check configurable.
It's possible that for some deployments, OSDs can take longer to be seen
as UP and IN.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1676763

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-20 16:06:04 +00:00
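A sketch of such a configurable check; the variable names are hypothetical and the JSON layout returned by `ceph osd stat -f json` can vary between releases:
```
- name: waiting for all OSDs to be up
  command: "ceph --cluster {{ cluster }} osd stat -f json"
  register: osd_stat
  changed_when: false
  retries: "{{ wait_for_all_osds_up_retries | default(60) }}"  # hypothetical variable
  delay: "{{ wait_for_all_osds_up_delay | default(10) }}"      # hypothetical variable
  until:
    # at least one OSD has been discovered (see 'ensure at least one osd is up' below)
    - (osd_stat.stdout | from_json)['num_osds'] | int > 0
    - (osd_stat.stdout | from_json)['num_osds'] == (osd_stat.stdout | from_json)['num_up_osds']
```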
Patrick Donnelly ed40c5237d delegate key creation to first mon
Otherwise keys get scattered over the mons and the mgr key is not copied properly.

With ansible_inventory:

    [mdss]
            mds-000 ansible_ssh_host=192.168.129.110 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    [clients]
            client-000 ansible_ssh_host=192.168.143.94 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    [mgrs]
            mgr-000 ansible_ssh_host=192.168.222.195 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    [mons]
            mon-000 ansible_ssh_host=192.168.139.173 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.139.173
            mon-002 ansible_ssh_host=192.168.212.114 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.212.114
            mon-001 ansible_ssh_host=192.168.167.177 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.167.177
    [osds]
            osd-001 ansible_ssh_host=192.168.178.128 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
            osd-000 ansible_ssh_host=192.168.138.233 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
            osd-002 ansible_ssh_host=192.168.197.23 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'

We get this failure:

    TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************************************************************************************************************************************************
    included: /root/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml for mon-000, mon-002, mon-001

    TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************************************************************************************************************************************************
    changed: [mon-000] => {
        "attempts": 1,
        "changed": true,
        "cmd": [
            "ceph",
            "--cluster",
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li1166-30/keyring",
            "mon_status",
            "--format",
            "json"
        ],
        "delta": "0:00:01.897397",
        "end": "2019-02-14 17:08:09.340534",
        "rc": 0,
        "start": "2019-02-14 17:08:07.443137"
    }

    STDOUT:

    {"name":"li1166-30","rank":0,"state":"leader","election_epoch":4,"quorum":[0,1,2],"quorum_age":0,"features":{"required_con":"2449958747315912708","required_mon":["kraken","luminous","mimic","osdmap-prune","nautilus"],"quorum_con":"4611087854031667199","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus"]},"outside_quorum":[],"extra_probe_peers":[{"addrvec":[{"type":"v2","addr":"192.168.167.177:3300","nonce":0},{"type":"v1","addr":"192.168.167.177:6789","nonce":0}]},{"addrvec":[{"type":"v2","addr":"192.168.212.114:3300","nonce":0},{"type":"v1","addr":"192.168.212.114:6789","nonce":0}]}],"sync_provider":[],"monmap":{"epoch":1,"fsid":"bb401e2a-c524-428e-bba9-8977bc96f04b","modified":"2019-02-14 17:08:05.012133","created":"2019-02-14 17:08:05.012133","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus"],"optional":[]},"mons":[{"rank":0,"name":"li1166-30","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.139.173:3300","nonce":0},{"type":"v1","addr":"192.168.139.173:6789","nonce":0}]},"addr":"192.168.139.173:6789/0","public_addr":"192.168.139.173:6789/0"},{"rank":1,"name":"li985-128","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.167.177:3300","nonce":0},{"type":"v1","addr":"192.168.167.177:6789","nonce":0}]},"addr":"192.168.167.177:6789/0","public_addr":"192.168.167.177:6789/0"},{"rank":2,"name":"li895-17","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.212.114:3300","nonce":0},{"type":"v1","addr":"192.168.212.114:6789","nonce":0}]},"addr":"192.168.212.114:6789/0","public_addr":"192.168.212.114:6789/0"}]},"feature_map":{"mon":[{"features":"0x3ffddff8ffacffff","release":"luminous","num":1}],"client":[{"features":"0x3ffddff8ffacffff","release":"luminous","num":1}]}}

    TASK [ceph-mon : fetch ceph initial keys] **************************************************************************************************************************************************************************
    changed: [mon-001] => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li985-128/keyring",
            "--cluster",
            "ceph",
            "auth",
            "get",
            "client.bootstrap-rgw",
            "-f",
            "plain",
            "-o",
            "/var/lib/ceph/bootstrap-rgw/ceph.keyring"
        ],
        "delta": "0:00:03.179584",
        "end": "2019-02-14 17:08:14.305348",
        "rc": 0,
        "start": "2019-02-14 17:08:11.125764"
    }

    STDERR:

    exported keyring for client.bootstrap-rgw
    changed: [mon-002] => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li895-17/keyring",
            "--cluster",
            "ceph",
            "auth",
            "get",
            "client.bootstrap-rgw",
            "-f",
            "plain",
            "-o",
            "/var/lib/ceph/bootstrap-rgw/ceph.keyring"
        ],
        "delta": "0:00:03.706169",
        "end": "2019-02-14 17:08:14.041698",
        "rc": 0,
        "start": "2019-02-14 17:08:10.335529"
    }

    STDERR:

    exported keyring for client.bootstrap-rgw
    changed: [mon-000] => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li1166-30/keyring",
            "--cluster",
            "ceph",
            "auth",
            "get",
            "client.bootstrap-rgw",
            "-f",
            "plain",
            "-o",
            "/var/lib/ceph/bootstrap-rgw/ceph.keyring"
        ],
        "delta": "0:00:03.916467",
        "end": "2019-02-14 17:08:13.803999",
        "rc": 0,
        "start": "2019-02-14 17:08:09.887532"
    }

    STDERR:

    exported keyring for client.bootstrap-rgw

    TASK [ceph-mon : create ceph mgr keyring(s)] ***********************************************************************************************************************************************************************
    skipping: [mon-000] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=mon-000)  => {
        "changed": false,
        "item": "mon-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=mon-002)  => {
        "changed": false,
        "item": "mon-002",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=mon-001)  => {
        "changed": false,
        "item": "mon-001",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mon-000)  => {
        "changed": false,
        "item": "mon-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mon-002)  => {
        "changed": false,
        "item": "mon-002",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mon-001)  => {
        "changed": false,
        "item": "mon-001",
        "skip_reason": "Conditional result was False"
    }
    changed: [mon-001] => (item=mgr-000) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li547-145.keyring"
        ],
        "delta": "0:00:05.822460",
        "end": "2019-02-14 17:08:21.422810",
        "item": "mgr-000",
        "rc": 0,
        "start": "2019-02-14 17:08:15.600350"
    }

    STDERR:

    imported keyring
    changed: [mon-001] => (item=mon-000) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li1166-30.keyring"
        ],
        "delta": "0:00:05.814039",
        "end": "2019-02-14 17:08:27.663745",
        "item": "mon-000",
        "rc": 0,
        "start": "2019-02-14 17:08:21.849706"
    }

    STDERR:

    imported keyring
    changed: [mon-001] => (item=mon-002) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li895-17.keyring"
        ],
        "delta": "0:00:05.787291",
        "end": "2019-02-14 17:08:33.921243",
        "item": "mon-002",
        "rc": 0,
        "start": "2019-02-14 17:08:28.133952"
    }

    STDERR:

    imported keyring
    changed: [mon-001] => (item=mon-001) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li985-128.keyring"
        ],
        "delta": "0:00:05.782064",
        "end": "2019-02-14 17:08:40.138706",
        "item": "mon-001",
        "rc": 0,
        "start": "2019-02-14 17:08:34.356642"
    }

    STDERR:

    imported keyring

    TASK [ceph-mon : copy ceph mgr key(s) to the ansible server] *******************************************************************************************************************************************************
    skipping: [mon-000] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    changed: [mon-001] => (item=mgr-000) => {
        "changed": true,
        "checksum": "aa0fa40225c9e09d67fe7700ce9d033f91d46474",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/etc/ceph/ceph.mgr.li547-145.keyring",
        "item": "mgr-000",
        "md5sum": "cd884fb9ddc9b8b4e3cd1ad6a98fb531",
        "remote_checksum": "aa0fa40225c9e09d67fe7700ce9d033f91d46474",
        "remote_md5sum": null
    }

    TASK [ceph-mon : copy keys to the ansible server] ******************************************************************************************************************************************************************
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/etc/ceph/ceph.client.admin.keyring)  => {
        "changed": false,
        "item": "/etc/ceph/ceph.client.admin.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/etc/ceph/ceph.client.admin.keyring)  => {
        "changed": false,
        "item": "/etc/ceph/ceph.client.admin.keyring",
        "skip_reason": "Conditional result was False"
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {
        "changed": true,
        "checksum": "095c7868a080b4c53494335d3a2223abbad12605",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "md5sum": "d8f4c4fa564aade81b844e3d92c7cac6",
        "remote_checksum": "095c7868a080b4c53494335d3a2223abbad12605",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {
        "changed": true,
        "checksum": "ce7a2d4441626f22e995b37d5131b9e768f18494",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "md5sum": "271e4f90c5853c74264b6b749650c3f2",
        "remote_checksum": "ce7a2d4441626f22e995b37d5131b9e768f18494",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {
        "changed": true,
        "checksum": "e35e8613076382dd3c9d89b5bc2090e37871aab7",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "md5sum": "ed7c32277914c8e34ad5c532d8293dd2",
        "remote_checksum": "e35e8613076382dd3c9d89b5bc2090e37871aab7",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {
        "changed": true,
        "checksum": "ac43101ad249f6b6bb07ceb3287a3693aeae7f6c",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "md5sum": "1460e3c9532b0b7b3a5cb329d77342cd",
        "remote_checksum": "ac43101ad249f6b6bb07ceb3287a3693aeae7f6c",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring) => {
        "changed": true,
        "checksum": "01d74751810f5da621937b10c83d47fc7f1865c5",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "md5sum": "979987f10fd7da5cff67e665f54bfe4d",
        "remote_checksum": "01d74751810f5da621937b10c83d47fc7f1865c5",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/etc/ceph/ceph.client.admin.keyring) => {
        "changed": true,
        "checksum": "482f702cf861b41021d76de655ecf996fe9a4a4a",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/etc/ceph/ceph.client.admin.keyring",
        "item": "/etc/ceph/ceph.client.admin.keyring",
        "md5sum": "7581c187044fd4e0f7a5440244a6b306",
        "remote_checksum": "482f702cf861b41021d76de655ecf996fe9a4a4a",
        "remote_md5sum": null
    }

    TASK [ceph-mon : include secure_cluster.yml] ***********************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mon : crush_rules.yml] **********************************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-001] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mgr : set_fact docker_exec_cmd] *************************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-001] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mgr : include common.yml] *******************************************************************************************************************************************************************************
    included: /root/ceph-ansible/roles/ceph-mgr/tasks/common.yml for mon-000, mon-002, mon-001

    TASK [ceph-mgr : create mgr directory] *****************************************************************************************************************************************************************************
    changed: [mon-000] => {
        "changed": true,
        "gid": 167,
        "group": "ceph",
        "mode": "0755",
        "owner": "ceph",
        "path": "/var/lib/ceph/mgr/ceph-li1166-30",
        "secontext": "unconfined_u:object_r:ceph_var_lib_t:s0",
        "size": 4096,
        "state": "directory",
        "uid": 167
    }
    changed: [mon-002] => {
        "changed": true,
        "gid": 167,
        "group": "ceph",
        "mode": "0755",
        "owner": "ceph",
        "path": "/var/lib/ceph/mgr/ceph-li895-17",
        "secontext": "unconfined_u:object_r:ceph_var_lib_t:s0",
        "size": 4096,
        "state": "directory",
        "uid": 167
    }
    changed: [mon-001] => {
        "changed": true,
        "gid": 167,
        "group": "ceph",
        "mode": "0755",
        "owner": "ceph",
        "path": "/var/lib/ceph/mgr/ceph-li985-128",
        "secontext": "unconfined_u:object_r:ceph_var_lib_t:s0",
        "size": 4096,
        "state": "directory",
        "uid": 167
    }

    TASK [ceph-mgr : fetch ceph mgr keyring] ***************************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-001] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************************************************************************************************************************************************
    An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
    failed: [mon-002] (item={'name': '/etc/ceph/ceph.mgr.li895-17.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li895-17/keyring', 'copy_key': True}) => {
        "changed": false,
        "item": {
            "copy_key": true,
            "dest": "/var/lib/ceph/mgr/ceph-li895-17/keyring",
            "name": "/etc/ceph/ceph.mgr.li895-17.keyring"
        }
    }

    MSG:

    Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring'
    Searched in:
     /root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring on the Ansible Controller.
    If you are using a module and expect the file to exist on the remote, see the remote_src option
    skipping: [mon-002] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False})  => {
        "changed": false,
        "item": {
            "copy_key": false,
            "dest": "/etc/ceph/ceph.client.admin.keyring",
            "name": "/etc/ceph/ceph.client.admin.keyring"
        },
        "skip_reason": "Conditional result was False"
    }
    An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
    failed: [mon-001] (item={'name': '/etc/ceph/ceph.mgr.li985-128.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li985-128/keyring', 'copy_key': True}) => {
        "changed": false,
        "item": {
            "copy_key": true,
            "dest": "/var/lib/ceph/mgr/ceph-li985-128/keyring",
            "name": "/etc/ceph/ceph.mgr.li985-128.keyring"
        }
    }

    MSG:

    Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring'
    Searched in:
     /root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring on the Ansible Controller.
    If you are using a module and expect the file to exist on the remote, see the remote_src option
    skipping: [mon-001] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False})  => {
        "changed": false,
        "item": {
            "copy_key": false,
            "dest": "/etc/ceph/ceph.client.admin.keyring",
            "name": "/etc/ceph/ceph.client.admin.keyring"
        },
        "skip_reason": "Conditional result was False"
    }
    An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
    failed: [mon-000] (item={'name': '/etc/ceph/ceph.mgr.li1166-30.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li1166-30/keyring', 'copy_key': True}) => {
        "changed": false,
        "item": {
            "copy_key": true,
            "dest": "/var/lib/ceph/mgr/ceph-li1166-30/keyring",
            "name": "/etc/ceph/ceph.mgr.li1166-30.keyring"
        }
    }

    MSG:

    Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring'
    Searched in:
     /root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring on the Ansible Controller.
    If you are using a module and expect the file to exist on the remote, see the remote_src option
    skipping: [mon-000] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False})  => {
        "changed": false,
        "item": {
            "copy_key": false,
            "dest": "/etc/ceph/ceph.client.admin.keyring",
            "name": "/etc/ceph/ceph.client.admin.keyring"
        },
        "skip_reason": "Conditional result was False"
    }

    NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************
     to retry, use: --limit @/root/ceph-linode/linode.retry

    PLAY RECAP *********************************************************************************************************************************************************************************************************
    client-000                 : ok=30   changed=2    unreachable=0    failed=0
    mds-000                    : ok=32   changed=4    unreachable=0    failed=0
    mgr-000                    : ok=32   changed=4    unreachable=0    failed=0
    mon-000                    : ok=89   changed=21   unreachable=0    failed=1
    mon-001                    : ok=84   changed=20   unreachable=0    failed=1
    mon-002                    : ok=81   changed=17   unreachable=0    failed=1
    osd-000                    : ok=32   changed=4    unreachable=0    failed=0
    osd-001                    : ok=32   changed=4    unreachable=0    failed=0
    osd-002                    : ok=32   changed=4    unreachable=0    failed=0

Also, create all keys on the first mon and copy those to the other mons to be
consistent.

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2019-02-20 11:19:44 +01:00
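The fix described above (create the keys once on the first mon, then copy them out) boils down to delegating the key-creation loop and running it once, roughly like this (a sketch; the capability string and group variables are assumptions, not the exact ceph-ansible task):
```
- name: create ceph mgr keyring(s) on the first monitor
  command: >
    ceph --cluster {{ cluster }} auth get-or-create mgr.{{ hostvars[item]['ansible_hostname'] }}
    mon 'allow profile mgr' osd 'allow *' mds 'allow *'
  with_items: "{{ groups.get(mgr_group_name, []) }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```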
David Waiting 3930791cb7 ensure at least one osd is up
The existing task checks that the number of OSDs is equal to the number of up OSDs before continuing.

The problem is that if none of the OSDs have been discovered yet, the task will exit immediately and subsequent pool creation will fail (num_osds = 0, num_up_osds = 0).

This is related to Bugzilla 1578086.

In this change, we also check that at least one OSD is present. In our testing, this results in the task correctly waiting for all OSDs to come up before continuing.

Signed-off-by: David Waiting <david_waiting@comcast.com>
2019-02-19 18:31:05 +00:00
Guillaume Abrioux c98fd0b9e0 facts: ensure ceph_uid is set when running rhel
when the host is running RHEL, let's enforce ceph_uid to 167.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-19 16:40:08 +01:00
Guillaume Abrioux e7034402a4 container: fix tmpfiles.d ceph files
- fix uid/gid in ceph tmpfiles
- move file to `/etc/tmpfiles.d`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-19 16:40:08 +01:00
Guillaume Abrioux d4e31b90a6 Revert "osd: container remove --pid=host"
This reverts commit bb2bbeb941.

Looks like when not passing `--pid=host` we are facing some issues when
deploying more than 2 OSDs in a containerized environment.

At the moment, we are still troubleshooting this issue but we prefer to
revert this commit so it doesn't block any PR in the CI.

As soon as we have a fix, we will push a new PR to remove `--pid=host`
(a revert of the revert...)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux 500256cdab validate: fix ntp_daemon_type check in validate
is_atomic is defined in ceph-facts or very early in the main playbook.

In a non-containerized deployment, is_atomic is only set in ceph-facts,
which is played after ceph-validate.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux 76303b457c container: create ceph-common.conf tmpfiles.d if it doesn't exist
Otherwise the task will fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux b24202f6a4 facts: move two set_fact into ceph-facts
those two set_fact tasks should be moved in ceph-facts.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux 8c8ec63633 container: use tmpfiles.d to create /run/ceph
instead of using the `RuntimeDirectory` parameter in systemd unit files,
let's use a systemd `tmpfiles.d` file to ensure `/run/ceph` exists.

Explanation:

`podman` doesn't create `/var/run/ceph` if it doesn't exist at the time
the container is run, while `docker` used to create it.
In the `switch_to_containers` scenario, `/run/ceph` gets created by
a tmpfiles.d systemd file; when switching to containers, the systemd
unit file complains because `/run/ceph` already exists.

The better fix would be to ensure `/usr/lib/tmpfiles.d/ceph-common.conf`
is removed and to rely only on the `RuntimeDirectory` systemd unit file
parameter, but we come from a non-containerized environment which is
already running. That means `/run/ceph` is already created, so when
starting the unit to run the container, systemd will still complain, and
we can't simply remove the directory if daemons are collocated.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-13 09:42:27 +01:00
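A tmpfiles.d entry of that kind can be dropped in from a task roughly like this (a sketch; the mode and ownership shown are assumptions, and the /etc/tmpfiles.d location follows the 'fix tmpfiles.d ceph files' entry above):
```
- name: add a tmpfiles.d entry for /run/ceph
  copy:
    content: "d /run/ceph 0770 ceph ceph -\n"
    dest: /etc/tmpfiles.d/ceph-common.conf
    owner: root
    group: root
    mode: "0644"
```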
Guillaume Abrioux 7e0a70f7a8 switch_to_containers: do not try to redeploy monitors
`ceph-mon` tries to redeploy monitors because it assumes it was not yet
deployed since `mon_socket_stat` and `ceph_mon_container_stat` are
undefined (indeed, we stop the daemon before calling `ceph-mon` in the
switch_to_containers playbook).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-13 09:42:27 +01:00
Rishabh Dave 05ea783eff fix mistake in task that aborts when ntpd is chosen on Atomic
Since it's already confusing whether ntp_daemon_type should be "ntp" or
"ntpd", fix the mistake in the title of the task that aborts if
ntp_daemon_type is set to "ntpd" and the OS being used is Atomic.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-02-12 09:09:27 +01:00
Rishabh Dave bdff3e48fd don't install NTPd on Atomic
Since Atomic doesn't allow any installations and NTPd is not present
on the Atomic image we are using, abort when ntp_daemon_type is set to ntpd.

https://github.com/ceph/ceph-ansible/issues/3572
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-02-11 12:02:30 +01:00
Sébastien Han c69c8c9ac1 mon: do not hardcode ceph uid
167 is the ceph uid on Red Hat based systems, thus trying to deploy a
monitor on Debian fails since the ceph user id on that system is 64045.
This commit uses the ceph_uid variable, which contains the right uid
based on system/container detection.

Closes: https://github.com/ceph/ceph-ansible/issues/3589
Signed-off-by: Sébastien Han <seb@redhat.com>
2019-02-11 09:09:40 +00:00
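Simplified to the non-container case, the detection amounts to something like this (a sketch; the real logic also looks at the container image in containerized deployments):
```
- name: set_fact ceph_uid
  set_fact:
    ceph_uid: "{{ 64045 if ansible_os_family == 'Debian' else 167 }}"
```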
Leah Neukirchen 4fe7f37849 Fix uses of default(omit) with string concatenation
When {{omit}} is concatenated with another string, it expands to something
like __omit_place_holder__63eea0d96dd6ed867b95405e11d87dddf61f448d.
However, in these use-cases we need an empty string.

Regression introduced in d53f55e807.

Signed-off-by: Leah Neukirchen <leah.neukirchen@mayflower.de>
2019-02-08 16:18:15 +00:00
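The difference in behaviour, sketched with a hypothetical variable (not the exact lines fixed by this commit):
```
# broken: inside a string, omit expands to __omit_place_holder__<hash>
#   exec_prefix: "{{ container_exec_cmd | default(omit) }} ceph-volume lvm list"
# fixed: an empty string is what the concatenation actually needs
exec_prefix: "{{ container_exec_cmd | default('') }} ceph-volume lvm list"
```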
Patrick C. F. Ernzer c605ff6a68 setup_ntp: call handler to disable ntpd if chronyd used
The task setup chronyd called the handler disable chronyd, which of
course defeats the purpose.

Changing the task to disable ntpd instead fixes the issue of chronyd
being disabled after it got enabled.

Fixes: #3582

Signed-off-by: Patrick C. F. Ernzer pcfe@redhat.com
2019-02-08 12:04:44 +01:00
Guillaume Abrioux d4b3c1d409 iscsi-gws: remove a leftover
remove leftover introduced by 9d590f4

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-08 01:11:42 +01:00
Guillaume Abrioux 9d590f4339 iscsi: fix permission denied error
Typical error:
```
fatal: [iscsi-gw0]: FAILED! =>
  msg: 'an error occurred while trying to read the file ''/home/guits/ceph-ansible/tests/functional/all_daemons/fetch/e5f4ab94-c099-4781-b592-dbd440a9d6f3/iscsi-gateway.key'': [Errno 13] Permission denied: b''/home/guits/ceph-ansible/tests/functional/all_daemons/fetch/e5f4ab94-c099-4781-b592-dbd440a9d6f3/iscsi-gateway.key'''
```

`become: True` is not needed on the following task:

`copy crt file(s) to gateway nodes`.

Since it's already set in the main playbook (site.yml/site-container.yml)

The thing is that the files get generated in the 'fetch_directory' as
the root user because there is a 'delegate_to' and we run the playbook with
`become: True` (from the main playbook).

The idea here is to create the files as the ansible user so we can open them
later to copy them to the remote machine.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-07 17:57:22 +01:00
Sébastien Han bb2bbeb941 osd: container remove --pid=host
Let's try again with the Nautilus release.

Closes: https://github.com/ceph/ceph-ansible/issues/1297
Signed-off-by: Sébastien Han <seb@redhat.com>
2019-02-07 12:13:51 +00:00
John Fulton cc0bf197e1 Fix CNI error when net=host is not used on OSD calls
Follow up fix that 410abd7 missed.

Related: ceph#3561

Signed-off-by: John Fulton <fulton@redhat.com>
2019-02-05 22:49:01 +00:00
John Fulton 719a25b571 Create Ceph Initial Dirs earlier
Include the tasks from create_ceph_initial_dirs earlier in the
ceph-config role.

Fixes: #3568
Signed-off-by: John Fulton <fulton@redhat.com>
2019-02-05 18:38:05 +00:00
John Fulton dab3f6ee3f Fix CNI error when net=host is not used in some podman calls
With 'podman version 1.0.0' on RHEL8 beta the 'get ceph version' and
'ceph monitor mkfs' commands fail [1] with "error configuring network
namespace for container Missing CNI default network".

When net=host is added these errors are resolved. net=host is used in
many other calls (grep -R net=host | wc -l --> 38).

Fixes: #3561
Signed-off-by: John Fulton <fulton@redhat.com>
(cherry picked from commit 410abd7745)
2019-02-05 18:14:28 +01:00
Guillaume Abrioux 914d94cae8 set RuntimeDirectory in all systemd unit templates
/var/run/ceph resides on a non-persistent filesystem (tmpfs).
After a reboot, daemons won't start because this directory will be
missing.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
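In a unit template that amounts to a couple of [Service] directives, e.g. (a sketch; the directory mode is an assumption):
```
[Service]
# recreated under /run (tmpfs) every time the unit starts, so it survives reboots
RuntimeDirectory=ceph
RuntimeDirectoryMode=0770
```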
Guillaume Abrioux 7ade032807 osd: bind mount /var/run/udev/
without this, the command `ceph-volume lvm list --format json` hangs and
takes a very long time to complete.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
Guillaume Abrioux fdca29f2a7 facts: set timeout_command fact in ceph-defaults
- also add `--foreground` which seems to fix some issue we are facing when
using timeout with `podman`.
- use this fact in the `is ceph running already?` task.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
Guillaume Abrioux 16efdbc59b podman: support podman installation on rhel8
Add required changes to support podman on rhel8

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1667101

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
John Fulton 37b5d1084a Make python print statements python3 compatible
The restart_osd_daemon.sh generated from the j2 template
contains a python call which uses 'print x' instead of
'print(x)'. Add the missing parentheses to make this call
compatible with both 2 and 3.

Also add parentheses to other python print calls found
in roles/ceph-client/defaults/main.yml and
infrastructure-playbooks/cluster-os-migration.yml.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1671721
Signed-off-by: John Fulton <fulton@redhat.com>
2019-02-01 15:23:27 +00:00
Andrew Schoen 70a4368bc5 ceph-config: do not always assume containers when calculating num_osds
CEPH_CONTAINER_IMAGE should be None if containerized_deployment
is False.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-02-01 12:28:12 +01:00
Andrew Schoen 88eda479a9 ceph-facts: generate devices when osd_auto_discovery is true
This task used to live in ceph-osd, but we need it defined here so that
ceph-config can use it when trying to determine the number of osds.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-02-01 12:28:12 +01:00
John Fulton cba9b23363 Do not timeout podman/docker pull if timeout value is '0'
If user sets "docker_pull_timeout: '0'" then do not use the
timeout command when running podman/docker pull. Also, use
"timeout -s KILL"; without KILL, podman on RHEL8 beta does
not timeout and deployment can hang.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1670625
Signed-off-by: John Fulton <fulton@redhat.com>
2019-01-31 15:35:09 +00:00
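A sketch of the resulting pull task; the variable names follow ceph-ansible conventions but the exact task may differ:
```
- name: pull the ceph container image
  command: "{{ timeout_command }} {{ container_binary }} pull {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
  vars:
    # empty when the user disables the timeout with docker_pull_timeout: '0'
    timeout_command: "{{ 'timeout -s KILL ' + docker_pull_timeout if docker_pull_timeout != '0' else '' }}"
```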
Guillaume Abrioux fe1528adb4 config: support num_osds fact setting in containerized deployment
This part of the code must be supported in containerized deployment

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1664112

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-31 14:30:23 +00:00
Ramana Raja dfff89ce67 Install nfs-ganesha stable v2.7
nfs-ganesha v2.5 and 2.6 have hit EOL. Install the stable nfs-ganesha
v2.7, which is currently being maintained.

Signed-off-by: Ramana Raja <rraja@redhat.com>
2019-01-30 14:57:26 +01:00
Guillaume Abrioux 9f16501747 common: clean monitor initial keyring code
let's not be blocked by the fact we don't have the initial keyring in
`{{ fetch_directory }}`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-30 10:36:02 +01:00
Guillaume Abrioux f8aa8cdf60 facts: clean fsid generation code
clean some leftover and duplicate code.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-30 10:36:02 +01:00
Patrick Donnelly 080ce7dd72 do not set ceph_release to dummy
When we're deploying a dev branch.

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2019-01-29 16:04:44 +00:00
Zack Cerza 82897c76fb Fix ceph.conf generation
877979c78 in #3486 broke ceph.conf generation entirely. Remove the stray
curly brace.

Signed-off-by: Zack Cerza <zack@redhat.com>
2019-01-25 09:19:24 +01:00
Guillaume Abrioux 0bfefdd5bc override ceph_release with ceph_stable_release
when `ceph_origin` is set to `'repository'` and `ceph_repository` to
`'community'` we need to ensure `ceph_release` reflects
`ceph_stable_release`.

4a3f180f9d simply removed the override
while it should only have been run when the condition mentioned
above is satisfied.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-22 15:53:32 +01:00
Guillaume Abrioux 877979c787 config: make sure ceph_release is set for all client node
`ceph_release` is set in `ceph-container-common` but this role is
played only on the first client node, which means ceph-config will fail
on all client nodes except the first one.

This commit ensure ceph_release is set for all client nodes.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-22 13:45:38 +01:00
Sébastien Han 5babc1b4eb mon: enable msgr2
Enabling msgr2 style declaration for Nautilus and above. Prior releases
will keep the right syntax.
When upgrading from Mimic to Nautilus we must maintain something in the
form of:

mon_host = [v1:127.0.0.1:6789/0,v2:127.0.0.1:3300/0]

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-22 13:45:38 +01:00
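For reference, the two forms side by side in ceph.conf (addresses are placeholders; the bracketed form follows the example in the commit message):
```
# Nautilus and above: msgr2 (v2) plus legacy v1 addresses
mon_host = [v1:192.168.0.10:6789/0,v2:192.168.0.10:3300/0],[v1:192.168.0.11:6789/0,v2:192.168.0.11:3300/0]

# prior releases keep the plain form
mon_host = 192.168.0.10:6789,192.168.0.11:6789
```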
Sébastien Han fc34fb1bd9 mon: ability to change mon listening port on container
You can now use 'ceph_mon_container_listen_port' to change the port the
monitor will listen on.
Setting the default to 3300 (assigned by IANA) since Nautilus has released the messenger2
transport protocol.

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-22 13:45:38 +01:00
Sébastien Han 41a7cc878c Revert "mon: force peer addition"
This reverts commit ee08d1f89a which was
mostly to workaround a bug in ceph@master. Now, ceph@master is fixed so
reverting this. Thanks to https://github.com/ceph/ceph/pull/25900

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-22 13:45:38 +01:00
guihecheng 1ac94c048f rgw: add support for multiple rgw instances on a single host
With this, we can have multiple rgw instances on a single host
with a single run, and we don't have to use rgw-standalone.yml, which does
not seem able to bind ports separately.
If you want to have multiple rgw instances, just change 'radosgw_instances'
to the number you want, which defaults to 1.
Not compatible with Multi-Site yet.

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
2019-01-18 11:12:28 +01:00
Guillaume Abrioux 1bbdde272f config: remove code related to ceph release prior to luminous
This part of the code is not needed since ceph-ansible@master is
intended to deploy ceph@master only.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-14 14:41:13 +00:00
Guillaume Abrioux e9188cd202 ceph-default: rm useless condition
This condition is useless and it's also creating issues we don't see in
our CI. ceph_release is set by either ceph-common or ceph-docker-common
so let's keep it this way.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-14 14:41:13 +00:00
Sébastien Han ee08d1f89a mon: force peer addition
Something changed with the introduction of msgr2 and we have to
add each node as a peer so the monitors can form a quorum. This might be
due to our CI environment, although adding this is completely harmless
and solves monitors not being able to form quorum.

It seems that the initial monitor map didn't contain the right
information about the peers (addresses like 0.0.0.0/0r1 for each rank).

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-09 13:15:52 +01:00
Bruceforce 446f3c9fae nfs-ganesha: fixed nfs_ganesha_dev_apt_repo variable
The nfs_ganesha_dev_apt_repo variable was set incorrectly in the task
"fetch nfs-ganesha development repository"

Signed-off-by: Bruceforce <Bruceforce@users.noreply.github.com>
2019-01-05 16:04:05 +01:00
Sébastien Han 9abf9dba0b rgw: do not create mandatory directories
The packages are responsible for this, currently tracked in ceph https://github.com/ceph/ceph/pull/25503

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-04 13:57:40 +00:00
Sébastien Han 2af624dc5b rbd-mirror: copy bootstrap key after package install
If we don't copy the key after the package install the directory /var/lib/ceph/bootstrap-rbd-mirror
will not exist and the copy will fail.

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-04 13:57:40 +00:00
Sébastien Han b1dfe3f03e config: only pre-create ceph dirs on containers
We don't need to create the directories on non-containers, they are
created by the packages.

Closes: https://github.com/ceph/ceph-ansible/issues/3430
Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-04 13:57:40 +00:00
Rishabh Dave 6fa757d343 ceph-infra: disable unrequired NTP services
When one of the currently supported NTP services has been set up,
disable the rest of the NTP services on the Ceph nodes.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-01-04 14:01:05 +01:00
Rishabh Dave b03ab60742 ceph-infra: merge ntp_debian.yml and ntp_rpm.yml
Merge ntp_debian.yml and ntp_rpm.yml into one (the new file is called
setup_ntp.yml) since they are almost identical. Also avoid repetition
of the common setup step for ntpd and chronyd services.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-01-04 14:01:05 +01:00
Rishabh Dave a14bfa282a copy certificates as root user
Since the current user on the controller node, might not have the
permission to read the TLS certificate and related files, copy these
files to the Ceph nodes as root user.

Fixes: https://github.com/ceph/ceph-ansible/issues/3465
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-01-03 09:49:06 +01:00
Kai Wembacher 1dd26f76bf document missing support for non-containerized deployment
Signed-off-by: Kai Wembacher <kai@ktwe.de>
2018-12-21 15:37:55 +00:00
jtudelag 23ad5fd9cb Clarify RGWs configuration when using ceph_conf_overrides.
To avoid future misconfigurations, clarify that the only valid
scheme is [client.rgw.*] instead of [client.radosgw.*].
2018-12-20 13:55:03 +00:00
Kai Wembacher a273ed7f60 add support for rocksdb and wal on the same partition in non-collocated
Signed-off-by: Kai Wembacher <kai@ktwe.de>
2018-12-20 14:19:46 +01:00
Sébastien Han f99a875b7f lint: Remote package tasks should have a retry
Make linter happy and add more robustness to remote tasks by retrying 3
times (the default) before failing.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-20 11:06:09 +01:00
Guillaume Abrioux d7e77012ef retry on packages and repositories failures
add register/until on all packaging related tasks to avoid invalid CI
failures.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-19 14:48:27 +00:00
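The retry pattern on packaging tasks is the usual register/until/retries trio, e.g. (a sketch; the package list variable is an assumption):
```
- name: install ceph packages
  package:
    name: "{{ ceph_packages }}"
    state: present
  register: result
  until: result is succeeded
  retries: 3
  delay: 10
```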
Sébastien Han d9e7835086 mon: remove ceph aliases for containers
These aliases have led to several issues, making people believe that ceph
binaries are actually present on the host when running the command.
However it wasn't explicit that the commands were only run inside a
container.
It has brought too much confusion so we decided to remove them.

Closes: https://github.com/ceph/ceph-ansible/issues/3445
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-17 11:10:03 +01:00
Guillaume Abrioux 0eb56e36f8 introduce new role ceph-facts
sometimes we play the whole role `ceph-defaults` just to access the
default value of some variables. It means we play the `facts.yml` part
in this role while it's not desired. Splitting this role will speed up
the playbook.

Closes: #3282

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-12 11:18:01 +01:00
Guillaume Abrioux 1b8b5e0aac meta: set the right minimum ansible version required for galaxy
ceph-ansible@master requires the latest stable ansible version.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-11 09:59:25 +01:00
Sébastien Han 8e2585b6c7 ceph-defaults: do not use podman only on atomic
We want to test podman on f29 non-atomic, atomic is not a hard
requirement. However, if you want to get podman then you will have to
install it first before running the playbook.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-07 15:37:07 +01:00
Sébastien Han 51ca4f883b mgr: little refact
This commit removes the default module, so ceph-ansible does not enable
any manager module.
To enable a module you need to set a value to 'ceph_mgr_modules', you
can pass a list of modules like this:

ceph_mgr_modules:
  - status
  - dashboard

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-06 14:55:56 +00:00
Noah Watkins 3cf5fd2c3e start_osds: use list instead of keys (re-introduce)
the python3 fix merged by:

  https://github.com/ceph/ceph-ansible/pull/3346

was reintroduced a few days later by:

  82a6b5adec

and this patch fixes it again :)

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-12-05 23:25:35 +00:00
Guillaume Abrioux 1cff1f9806 revert infra: don't restart firewalld if unit is masked
If firewalld unit is masked, setting `configure_firewall: false` is
enough

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-04 15:52:08 +00:00
Sébastien Han 896676ee80 fix json data type
JSON is always typed as a string, whereas before
this we were declaring a dict, which is not a valid json structure.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-04 12:34:54 +01:00
Sébastien Han 82a6b5adec osd: manage legacy ceph-disk non-container startup
The code is now able (again) to start osds that were configured with
ceph-disk in a non-container scenario.

Closes: https://github.com/ceph/ceph-ansible/issues/3388
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 452069cb3a)
2018-12-03 16:01:57 +01:00
Sébastien Han ec2d1f502d osd: re-introduce disk_list check
This commit
4cc1506303 (diff-51bbe3572e46e3b219ad726da44b64ebL13)
accidentally removed this check.

This is a must have for ceph-disk based containerized OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 9b5a93e3a5)
2018-12-03 16:01:57 +01:00
Sébastien Han 4c51130198 osd: discover osd_objectstore on the fly
Applying and passing the OSD_BLUESTORE/FILESTORE on the fly is wrong for
existing clusters as their config will be changed.

Typically, if an OSD was prepared with ceph-disk on filestore and we
change the default objectstore to bluestore, the activation will fail.
The flag osd_objectstore should only be used for the preparation, not
the activation. The activation in this case detects the osd objectstore,
which prevents failures like the one described above.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:11:47 +00:00
Sébastien Han bef522627e ceph-osd: change jinja condition
If an existing cluster runs this config and has ceph-disk OSDs, the
`expose_partitions` won't be expected by jinja since it's inside the
'old' if. We need it as part of the osd_scenario != 'lvm' condition.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1640273
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:11:47 +00:00
Sébastien Han bf375327a0 ceph-mgr: refact role for containers
Now we simplify the invocation of start and remove some code and the
directory 'docker'.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 14fc5bad12 mon: do not serialize container bootstrap
This commit unifies the container and non-container code, which in the
meantime gives us the ability to deploy N mon containers at the same
time without having to serialize the deployment. This will drastically
reduce the time needed to bootstrap the cluster.
Note, this is only possible since Nautilus, because the monitors
bootstrap the initial keys on their own once they reach quorum. In the
Nautilus version of the ceph-container mon, we stopped generating the
keys 'manually' from inside the container, for more detail see: https://github.com/ceph/ceph-container/pull/1238

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 61082b3b32 mgr: only copy keys with dedicated mgr
When collocating mon and mgr, the mgr container will attempt to create
its own key since it has the admin key at its disposal. Also at this
point there is nothing to fetch since the key is not created by the
mons; as mentioned above, the mgr creates the key on its own.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 1c760904b0 site: collocated mon and mgr by default
This will speed up the deployment and also deploy mon and mgr collocated,
just as recommended.
This won't prevent you from adding more dedicated machines for mgr if
needed.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00