Commit Graph

3950 Commits (07a384ba569d17d04818d6ba61f723172ba5e377)
 

Author SHA1 Message Date
Andrew Schoen a440c2b3fe Merge pull request #1598 from ceph/test-rbd-pool
ceph-mon: fix get rbd size hanging
2017-06-13 10:04:57 -05:00
Andrew Schoen e2104acb62 rolling_update: set health_mon_check_delay to 15
The old value of 10 did not give enough time for a containerized mon to
pass the health check.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-06-13 08:56:44 -05:00
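For illustration, a delay like this is normally consumed by an Ansible `until` loop. A minimal sketch of what such a containerized-mon health check can look like; the container name, the `health_mon_check_retries` variable and the exact command are assumptions, not the playbook's literal task:

```yaml
- name: waiting for the containerized monitor to join the quorum...
  command: >
    docker exec ceph-mon-{{ ansible_hostname }}
    ceph --cluster {{ cluster }} -s --format json
  register: ceph_health_raw
  # keep polling until this mon shows up in the quorum reported by `ceph -s`
  until: ansible_hostname in (ceph_health_raw.stdout | from_json)["quorum_names"]
  retries: "{{ health_mon_check_retries | default(5) }}"
  delay: "{{ health_mon_check_delay }}"
```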
Andrew Schoen a9867e22fb tests: add PG config to the docker_cluster scenario
This is so we'll pass the PG check when performing a rolling update.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-06-13 08:56:43 -05:00
Andrew Schoen d15e464b81 tests: adds *-update_docker_cluster testing scenarios
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-06-13 08:56:43 -05:00
Sébastien Han 8b4624fa4c Merge pull request #1604 from ceph/rewrite_pgs_clean_tasks
rewrite check pgs clean tasks
2017-06-13 13:41:42 +02:00
Guillaume Abrioux 5af9bb432c rewrite check pgs clean tasks
Avoid screen scraping by rewriting the `waiting for clean pgs` tasks the same
way it is done in 304de48.

Use the json output returned by `ceph -s` instead.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-06-13 09:48:56 +02:00
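A minimal sketch of a json-based `waiting for clean pgs` task in the spirit of 304de48; the retry counts and the exact `until` expression are illustrative assumptions (a containerized deployment would wrap the command in `docker exec`):

```yaml
- name: waiting for clean pgs...
  command: ceph --cluster {{ cluster }} -s --format json
  register: ceph_status
  delegate_to: "{{ groups[mon_group_name][0] }}"
  retries: 10
  delay: 10
  # done when there are PGs and the only reported state is active+clean
  until: >
    (ceph_status.stdout | from_json).pgmap.num_pgs > 0 and
    ((ceph_status.stdout | from_json).pgmap.pgs_by_state | length) == 1 and
    (ceph_status.stdout | from_json).pgmap.pgs_by_state.0.state_name == 'active+clean'
```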
Sébastien Han f5c2d3de9c Merge pull request #1573 from duritong/master-control_path-port
keep port as part of the control path
2017-06-12 15:05:11 +02:00
Sébastien Han 497924795d ceph-mon: fix get rbd size hanging
On a newly created cluster the command `ceph --cluster {{ cluster }} osd
pool get rbd size` does not respond properly.
We only want to check whether the rbd pool exists, so we now use an ls |
grep approach.

Closes: https://github.com/ceph/ceph-ansible/issues/1547
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-06-12 14:39:39 +02:00
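A minimal sketch of the ls | grep approach described above; the task name and error handling are assumptions:

```yaml
- name: check if the rbd pool exists
  # only test for the pool's presence, never query its size
  shell: ceph --cluster {{ cluster }} osd pool ls | grep -sq '^rbd$'
  register: rbd_pool_exists
  changed_when: false
  failed_when: false
```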
Alfredo Deza 93fc892978 Merge pull request #1584 from ceph/rewrite_check_pgs
Common: Rewrite check_pgs
2017-06-12 07:52:29 -04:00
Guillaume Abrioux 304de4833f Common: Rewrite check_pgs
Rewrite the check_pgs by using json parsing instead of complex regexp to
parse the `ceph -s` output.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-06-12 10:59:16 +02:00
Andrew Schoen a6fd955d3c Merge pull request #1590 from ceph/fix_ceph_docker_on_openstack
Common: Add a default for ceph_docker_on_openstack
2017-06-07 08:50:21 -05:00
Andrew Schoen 040b5c4b7f Merge pull request #1589 from ceph/bz-1454945
remove ceph-iscsi-gw play from site.yml.sample
2017-06-06 10:17:44 -05:00
Guillaume Abrioux a09ce92d51 Common: Add a default for ceph_docker_on_openstack
Add a default value for `ceph_docker_on_openstack` to avoid a
conditional check error for the task `pause after docker install before starting` in
`roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-06-06 16:49:04 +02:00
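The fix amounts to declaring the variable in the role defaults so the conditional never evaluates an undefined variable; the value shown here is an assumption:

```yaml
# roles/ceph-docker-common/defaults/main.yml (sketch)
ceph_docker_on_openstack: false
```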
Andrew Schoen dd193df4b9 remove ceph-iscsi-gw play from site.yml.sample
We ship ceph-iscsi-gw in a separate repo downstream and do not package
it with ceph-ansible. Including the play for ceph-iscsi-gw in
site.yml.sample makes the playbook fail when using the downstream
packages.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1454945

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-06-06 09:04:58 -05:00
Andrew Schoen b7f3be6c35 Merge pull request #1586 from ceph/rpm-strip-iscsi
rpm: do not package iscsi files
2017-06-05 13:46:41 -05:00
Gregory Meno 634bc588c2 Merge pull request #1587 from ceph/bz-1451786
ceph-mon: fix support for ipv6 on containerized mons
2017-06-05 11:14:49 -07:00
Andrew Schoen e8187f6a0f ceph-mon: fix support for ipv6 on containerized mons
The fact ['ansible_$interface']['ipv4'] is a dictionary, whereas
['ansible_$interface']['ipv6'] is a list. If we use
ansible_default_ipv6|ipv4 it is always a dictionary, which allows us to
get the ipv6 and ipv4 addresses without adding more complexity to the
template.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-06-05 10:51:47 -05:00
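In practice this means both address families can be read through the same dictionary shape (`ansible_default_ipv4.address` / `ansible_default_ipv6.address`). A minimal sketch; the `ip_version` toggle and the fact name are assumptions:

```yaml
- name: set monitor address
  set_fact:
    _monitor_address: >-
      {{ ansible_default_ipv6.address if ip_version == 'ipv6'
         else ansible_default_ipv4.address }}
```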
Ken Dreyer 2cfb3677d8 rpm: do not package iscsi files
Currently we cannot install the ceph-iscsi-ansible RPM on a node where
the ceph-ansible RPM is already installed.

ceph-iscsi-ansible should install on top of the ceph-ansible environment
without issues.
2017-06-05 09:19:48 -06:00
Andrew Schoen 28c724d5c6 Merge pull request #1582 from ceph/purge-docker-fix
purge-docker-cluster: include ceph_docker_registry
2017-06-05 09:31:53 -05:00
Andrew Schoen 59992c54cc purge-docker-cluster: include ceph_docker_registry
We need to include ceph_docker_registry when removing containers/images
because if we don't, docker.io is assumed, which is not always where the
image originated from, causing the playbook to fail.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-06-02 09:49:17 -05:00
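A minimal sketch of a purge task that prefixes the registry; `ceph_docker_image` and `ceph_docker_image_tag` are the usual ceph-ansible image variables but should be treated as assumptions here:

```yaml
- name: remove ceph docker image
  command: >
    docker rmi
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
  changed_when: false
  failed_when: false
```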
Sébastien Han fdc7866072 Merge pull request #1469 from ceph/refact_code
Docker: Refact code
2017-06-02 12:40:25 +02:00
Sébastien Han bd4a7dd6c8 Merge pull request #1580 from ceph/fix_check_pgs
Common: Improve check pgs
2017-06-02 12:11:05 +02:00
Guillaume Abrioux 0542a95b68 Common: Improve check pgs
At some point we changed the PG check, but it turns out to be dangerous
because the current check can be satisfied as long as a single PG is
active+clean.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-06-01 20:12:36 +02:00
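The stricter requirement is that the number of active+clean PGs equals the total PG count, rather than the string merely appearing once. A sketch of that count comparison, expressed here against the json status output for clarity (the text-scraping form this commit touched was later replaced); the expression is illustrative:

```yaml
- name: waiting for all pgs to be active+clean...
  command: ceph --cluster {{ cluster }} -s --format json
  register: ceph_pgs
  retries: 10
  delay: 10
  # sum the counts of active+clean PGs and require it to match num_pgs
  until: >
    (ceph_pgs.stdout | from_json).pgmap.num_pgs > 0 and
    ((ceph_pgs.stdout | from_json).pgmap.pgs_by_state
      | selectattr('state_name', 'equalto', 'active+clean')
      | map(attribute='count') | list | sum)
    == (ceph_pgs.stdout | from_json).pgmap.num_pgs
```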
Sébastien Han 250f8aa08b Merge pull request #1579 from ceph/fix_ceph-osd-run
Docker: Remove duplicate var passed to docker-run
2017-06-01 16:22:45 +02:00
Guillaume Abrioux 0a2048a577 Docker: Remove duplicate var passed to docker-run
Since `-e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` is already hardcoded in
`ceph-osd-run.sh.j2`, there is no need to also add `-e
CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` as a default value in the defaults vars.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-06-01 14:31:17 +02:00
Andrew Schoen de4447a955 Merge pull request #1577 from ceph/purge-tests
tests: use docker playbook when redeploying a purged cluster
2017-05-31 15:18:14 -05:00
Andrew Schoen 22541a9c9a tests: use docker playbook when redeploying a purged cluster
When we purge a containerized cluster we need to use the correct
playbook when redeploying the cluster.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-05-31 12:48:51 -05:00
Sébastien Han 410f513b08 Merge pull request #1568 from ceph/bz-1455187
purge-docker-cluster fix and test
2017-05-31 16:53:35 +02:00
Andrew Schoen f7677e4393 purge-docker-cluster: pip is only used on Debian
We only need to purge packages installed by pip on Debian systems.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-05-31 09:03:44 -05:00
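A minimal sketch of gating the pip cleanup on the OS family; the package name is an assumption:

```yaml
- name: purge pip-installed packages
  pip:
    name: docker-py   # illustrative; whatever was installed via pip
    state: absent
  when: ansible_os_family == 'Debian'
```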
mh 64b7b88fb7 keep port as part of the control path
On a Vagrant setup, every host is 127.0.0.1 and every user is vagrant.
To keep Ansible from doing the same work on the same machine (the one
we connected to first), we should keep the port as part of the
control path.

Also this helps in setups where different hosts might be hidden
behind the same IP, but on different ports (e.g. using DNAT).
2017-05-30 18:28:39 +02:00
Andrew Schoen ecdb6f4967 tests: adds a scenario for purging containerized clusters
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-05-25 08:41:36 -05:00
Andrew Schoen 8e322d4825 purge-docker-cluster: default raw_journal_devices to []
If we're purging a containerized cluster that did not use the
raw_multi_journal OSD scenario then raw_journal_devices will not be
defined which causes the playbook to fail.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1455187

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-05-25 07:30:25 -05:00
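A minimal sketch of the defensive default; the surrounding zap task is illustrative:

```yaml
- name: zap raw journal devices
  command: ceph-disk zap {{ item }}
  # fall back to an empty list when the scenario never defined the variable
  with_items: "{{ raw_journal_devices | default([]) }}"
  changed_when: false
```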
Guillaume Abrioux ddfe019342 Refact code
`ceph-docker-common`:
  At the moment there are a lot of duplicated tasks in each
  `./roles/ceph-<role>/tasks/docker/main.yml` that could be refactored into
  `./roles/ceph-docker-common/tasks/main.yml`.

`*_containerized_deployment` variables:
  All `*_containerized_deployment` variables have been refactored into a
  single variable, `containerized_deployment`.

Duplicate `cephx` variables in `group_vars/*` have been removed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-05-24 15:55:41 +02:00
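After this refactor, role tasks can gate on one shared boolean instead of per-role flags. A minimal sketch; the included file name is an assumption:

```yaml
- name: include docker specific tasks
  include: docker/main.yml
  when: containerized_deployment | bool
```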
Guillaume Abrioux f0adecf482 Clean osds.yml.sample
Remove duplicate lines in osds.yml default vars file.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-05-24 15:55:41 +02:00
Sébastien Han 215ca26942 Merge pull request #1555 from ceph/rol-docker
rolling-update: set/unset flags on the right container
2017-05-24 15:24:06 +02:00
Andrew Schoen 2326c5ac63 Merge pull request #1557 from ceph/install-condition
common: fix installation condition
2017-05-24 06:39:36 -05:00
Sébastien Han aae4dd4aa9 Merge pull request #1563 from ceph/wip-libvirt
Use host-passthrough for libvirt vCPUs
2017-05-24 12:13:18 +02:00
Sébastien Han f75d36ce18 Merge pull request #1564 from arodd/master
Fixing partition detection regex for FusionIO devices.
2017-05-24 12:07:40 +02:00
Sébastien Han 468dc06bcd common: remove useless check
We only run the check for everything except 'distro', because that
is a valid way of deploying RHCS, with pre-prepared repos
present on the nodes.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-05-24 11:52:22 +02:00
Austin Workman 22033bd1bf Fixing partition detection regex for FusionIO devices. 2017-05-23 14:39:39 -05:00
Zack Cerza 0b7744db27 Use host-passthrough for libvirt vCPUs
Signed-off-by: Zack Cerza <zack@redhat.com>
2017-05-23 12:55:29 -06:00
Sébastien Han f7e9585a2c common: fix installation condition
Problem: we could end up in a situation where we would install a package
on a machine that does not have the right repo enabled. Because the
condition used OR, we weren't pinning to a particular host group but just
to a condition. Say someone sets 'ceph_origin == "distro"': this would
try to install OSD packages on Monitors.

Solution: use an AND condition to first pin to the group_name (which
identifies a set of hosts) AND only then apply one of the installation
conditions.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1453119
Co-Authored-By: https://github.com/zhsj
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-05-23 11:50:58 +02:00
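A minimal sketch of the AND-style condition: pin to the host group first, then apply the installation condition; the `ceph_stable`/`ceph_dev` names are illustrative assumptions:

```yaml
- name: install ceph osd package
  package:
    name: ceph-osd
    state: present
  when:
    # list items are ANDed: group membership first, then install origin
    - osd_group_name in group_names
    - ceph_origin == 'distro' or ceph_stable or ceph_dev
```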
Sébastien Han 90389864d8 rolling-update: set/unset flags on the right container
Problem: we are delegating the set/unset flag task to a monitor node, but
we try to call an osd container.

Solution: use the right container name.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-05-22 09:38:08 +02:00
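Since the task is delegated to a monitor, the `docker exec` has to target the monitor container running there rather than an OSD one. A minimal sketch; the container naming convention and flag list are assumptions:

```yaml
- name: set osd flags
  command: >
    docker exec ceph-mon-{{ hostvars[mon_host]['ansible_hostname'] }}
    ceph --cluster {{ cluster }} osd set {{ item }}
  with_items:
    - noout
    - noscrub
    - nodeep-scrub
  delegate_to: "{{ mon_host }}"
  vars:
    mon_host: "{{ groups[mon_group_name][0] }}"
```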
Andrew Schoen e605445da3 Merge pull request #1552 from ceph/rhel
common: explicitly set rhel os version support
2017-05-19 10:23:02 -05:00
Sébastien Han 8ad503b248 common: explicitly set rhel os version support
Clarify in the error message that only RHEL versions >= 7.3 are
supported.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452431
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-05-19 10:38:20 +02:00
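A minimal sketch of such a version gate using a `fail` task; the message wording is an assumption:

```yaml
- name: make sure the distribution is a supported RHEL release
  fail:
    msg: "only RHEL versions >= 7.3 are supported"
  when:
    - ansible_distribution == 'RedHat'
    - ansible_distribution_version | version_compare('7.3', '<')
```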
Sébastien Han fa168e9b69 Merge pull request #1529 from ktdreyer/readme-backporting
README: add backporting instructions
2017-05-19 09:17:36 +02:00
Ken Dreyer 4b2609d558 README: add backporting instructions
Add instructions for how we maintain and release the stable release
series.

We'd been following this for a while, so let's get it written down.
2017-05-18 11:08:41 -06:00
Sébastien Han 99a11e216f Merge pull request #1551 from ceph/revert-1531-wip-1495
Revert "docker: Retry OSD disk prepare to workaround race condition"
2017-05-18 16:06:58 +02:00
Sébastien Han 6bdadc4363 Revert "docker: Retry OSD disk prepare to workaround race condition" 2017-05-18 16:03:16 +02:00
Sébastien Han e189d7caa6 backport_to_stable_branch: fix redirection
Use 2> to redirect stderr, not 2&>.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-05-18 14:48:14 +02:00