Commit Graph

41 Commits (279044155fdb1bb7c25d028d81075e443aaeca91)

Author SHA1 Message Date
Dimitri Savineau 1e944b6022 rgw: change default frontend on nautilus
As discussed in ceph/ceph#26599, beast is now the default frontend
for rados gateway with nautilus release.
Add rgw_thread_pool_size variable with 512 as default value and keep
backward compatibility with num_threads option when using civetweb.
Update radosgw_civetweb_num_threads to reflect rgw_thread_pool_size
change.
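
A hedged sketch of the resulting ceph.conf frontend options (endpoints illustrative):

```
# beast is the default with nautilus; rgw_thread_pool_size defaults to 512
rgw frontends = beast endpoint=0.0.0.0:8080
rgw thread pool size = 512

# with civetweb, num_threads is kept for backward compatibility
rgw frontends = civetweb port=0.0.0.0:8080 num_threads=512
```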

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d17b1b48b6)
2019-04-10 14:42:33 -04:00
Guillaume Abrioux afdaa70a63 update: enable msgr2 protocol
This commit enables the msgr2 protocol once the cluster is fully upgraded
to nautilus.
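
For reference, once all the mons run nautilus, the protocol can be enabled with:

```
ceph mon enable-msgr2
```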

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Dimitri Savineau d8538ad4e1 Set the default crush rule in ceph.conf
Currently the default crush rule value is added to the ceph config
on the mon nodes as an extra configuration applied after the template
generation via the ansible ini module.

This implies two behaviors:

1/ On each ceph-ansible run, the ceph.conf will be regenerated via
ceph-config+template and then ceph-mon+ini_file. This leads to an
unnecessary restart of the daemons.

2/ When other ceph daemons are collocated on the monitor nodes
(like mgr or rgw), the default crush rule value will be erased by
the ceph.conf template (mon -> mgr -> rgw).

This patch adds the osd_pool_default_crush_rule config to the ceph.conf
template, but only for the monitor nodes (like crush_rules.yml).
The default crush rule id is read (if it exists) from the current ceph
configuration.
The default value is -1 (the ceph default).
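
A minimal sketch of the template addition, assuming the standard ceph-ansible group names:

```
{% if inventory_hostname in groups.get(mon_group_name, []) %}
osd pool default crush rule = {{ osd_pool_default_crush_rule }}
{% endif %}
```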

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1638092

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-14 08:56:52 +00:00
Zack Cerza 82897c76fb Fix ceph.conf generation
877979c78 in #3486 broke ceph.conf generation entirely. Remove the stray
curly brace.

Signed-off-by: Zack Cerza <zack@redhat.com>
2019-01-25 09:19:24 +01:00
Guillaume Abrioux 877979c787 config: make sure ceph_release is set for all client nodes
`ceph_release` is set in `ceph-container-common`, but this role is
played only on the first client node; this means ceph-config will fail
on all client nodes except the first one.

This commit ensures ceph_release is set for all client nodes.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-22 13:45:38 +01:00
Sébastien Han 5babc1b4eb mon: enable msgr2
Enable the msgr2-style declaration for Nautilus and above. Prior releases
keep their existing syntax.
When upgrading from Mimic to Nautilus we must maintain something in the
form of:

```
mon_host = [v1:127.0.0.1:6789/0,v2:127.0.0.1:3300/0]
```
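
A hedged sketch of the template branch (`_addr` stands in for the resolved monitor address):

```
{% if ceph_release_num[ceph_release] >= ceph_release_num['nautilus'] %}
mon host = [v1:{{ _addr }}:6789/0,v2:{{ _addr }}:3300/0]
{% else %}
mon host = {{ _addr }}:6789
{% endif %}
```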

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-22 13:45:38 +01:00
guihecheng 1ac94c048f rgw: add support for multiple rgw instances on a single host
With this, we can have multiple rgw instances on a single host
with a single run, without having to use rgw-standalone.yml, which does not
seem able to bind ports separately.
If you want multiple rgw instances, just change 'radosgw_instances'
to the number you want; it defaults to 1.
Not compatible with Multi-Site yet.
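
For example (hypothetical host_vars entry):

```yaml
# host_vars/rgw0.yml: run two radosgw instances on this host
radosgw_instances: 2
```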

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
2019-01-18 11:12:28 +01:00
Guillaume Abrioux 1bbdde272f config: remove code related to ceph release prior to luminous
This part of the code is not needed since ceph-ansible@master is
intended to deploy ceph@master only.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-14 14:41:13 +00:00
Guillaume Abrioux a86c2b8526 config: write jinja comment with appropriate syntax
Jinja comments should be written using the jinja syntax `{# ... #}`.
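
The difference in the rendered output:

```
{# this comment is consumed by jinja and never reaches the rendered file #}
# whereas a hash comment like this one is rendered verbatim into ceph.conf
```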

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1654441

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-29 15:48:23 +01:00
Guillaume Abrioux 68dde424f6 config: convert _osd_memory_target to int
ceph.conf doesn't accept float values.

Typical error seen:
```
$ sudo ceph daemon osd.2 config get osd_memory_target
Can't get admin socket path: unable to get conf option admin_socket for osd.2:
parse error setting 'osd_memory_target' to '7823740108,8' (strict_si_cast:
unit prefix not recognized)
```

This commit ensures the value inserted in ceph.conf will be an integer.
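
A minimal sketch of the fix, applying jinja's `int` filter at render time:

```
osd memory target = {{ _osd_memory_target | int }}
```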

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 14:33:27 +00:00
Neha Ojha 10538e9a23 osd_memory_target: standardize unit and fix calculation
* The default value of osd_memory_target used by ceph is 4294967296 bytes,
so use the same as the ceph-ansible default.

* Convert ansible_memtotal_mb to bytes to calculate osd_memory_target
(see the sketch below).
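
A hedged sketch of both points (the fact-based variable name is illustrative):

```yaml
# bytes, matching ceph's own default of 4 GiB
osd_memory_target: 4294967296

# ansible_memtotal_mb is in MiB, so multiply by 1048576 to get bytes
_osd_memory_target: "{{ (ansible_memtotal_mb * 1048576) | int }}"
```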

Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-11-19 09:54:33 +00:00
Guillaume Abrioux a2b2028212 config: remove complex jinja logic in ceph.conf.j2
Using consecutive set_fact tasks in the playbook instead of complex jinja
syntax makes ceph.conf.j2 more readable.
Besides, complex jinja can be painful to debug.
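
A hedged sketch of the approach (task and fact names are illustrative):

```yaml
- name: resolve monitor addresses before templating
  set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [hostvars[item]['monitor_address']] }}"
  with_items: "{{ groups.get(mon_group_name, []) }}"
```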

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-31 14:16:10 +01:00
Guillaume Abrioux d8d3e55006 remove restapi role
As of `mimic`, restapi is no longer available; it has been superseded by the manager daemon.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 14:19:13 +01:00
Guillaume Abrioux 6130bc841d config: look up for monitor_address_block in hostvars
`monitor_address_block` should be read from hostvars[host] instead of
the current node being played.

e.g.:

Let's assume we have:

```
[mons]
ceph-mon0 monitor_address=192.168.1.10
ceph-mon1 monitor_interface=eth1
ceph-mon2 monitor_address_block=192.168.1.0/24
```

the ceph.conf generation task will end up with:

```
fatal: [ceph-mon0]: FAILED! => {}

MSG:

'ansible.vars.hostvars.HostVarsVars object' has no attribute u'ansible_interface'
```

The reason is that it assumes `monitor_address_block` isn't defined even on
ceph-mon2, because it looks for `monitor_address_block` instead of
`hostvars[host]['monitor_address_block']`; it therefore falls through to the default branch:

```
    {%- else -%}
      {% set interface = 'ansible_' + (monitor_interface | replace('-', '_')) %}
      {% if ip_version == 'ipv4' -%}
        {{ hostvars[host][interface][ip_version]['address'] }}
      {%- elif ip_version == 'ipv6' -%}
        [{{ hostvars[host][interface][ip_version][0]['address'] }}]
      {%- endif %}
    {%- endif %}
```

`monitor_interface` is set with the default value `'interface'`, so the `interface`
variable is built as 'ansible_' + 'interface'. This makes ansible throw a
confusing message about `'ansible_interface'`.
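
A hedged sketch of the corrected lookup (the `ipaddr` filter requires python-netaddr; the exact template logic may differ):

```
{%- if hostvars[host]['monitor_address_block'] is defined -%}
  {{ hostvars[host]['ansible_all_ipv4_addresses'] | ipaddr(hostvars[host]['monitor_address_block']) | first }}
{%- endif %}
```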

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1635303

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-02 22:41:05 +02:00
Giulio Fidente 6126210e0e Fix version check in ceph.conf template
We need to look for ceph_release when comparing with release names,
not ceph_version.
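
The contrast, as a sketch:

```
{# wrong: ceph_version holds a numeric string such as '12.2.8' #}
{% if ceph_version == 'luminous' %}...{% endif %}
{# right: ceph_release holds the release name #}
{% if ceph_release == 'luminous' %}...{% endif %}
```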

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1631789
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
2018-09-24 13:08:27 +02:00
Guillaume Abrioux 6d6fd514e0 config: set default _rgw_hostname value to respective host
the default value for _rgw_hostname was taken from the current node being
played, while it should be taken from the respective node in the loop.
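
A hedged sketch of the per-host default (exact variable names may differ):

```
{% set _rgw_hostname = hostvars[host]['rgw_hostname'] | default(hostvars[host]['ansible_hostname']) %}
```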

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622505

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-18 20:10:34 +02:00
Neha Ojha 27027a17d3 osd: add osd memory target option
BlueStore's cache is sized conservatively by default, so that it does
not overwhelm under-provisioned servers. The default is 1G for HDD, and
3G for SSD.

To replace the page cache, as much memory as possible should be given to
BlueStore. This is required for good performance. Since ceph-ansible
knows how much memory a host has, it can set

`bluestore cache size = max(total host memory / num OSDs on this host * safety
factor, 1G)`

Due to fragmentation and other memory use not included in bluestore's
cache, a safety factor of 0.5 for dedicated nodes and 0.2 for
hyperconverged nodes is recommended.
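
A worked example under those assumptions:

```
# dedicated node: 256 GiB of RAM, 12 OSDs, safety factor 0.5
# max(274877906944 / 12 * 0.5, 1073741824) ≈ 11453246122 bytes (~10.7 GiB) per OSD
```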

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1595003

Signed-off-by: Neha Ojha <nojha@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-18 10:12:46 +00:00
Guillaume Abrioux 9ff26e80f2 defaults: add a default value to rgw_hostname
Let's add ansible_hostname as a default value for rgw_hostname if no
hostname in the servicemap matches ansible_fqdn.

Fixes: #3063
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622505

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-10 12:07:44 +02:00
Guillaume Abrioux f422efb1d6 config: ensure rgw section has the correct name
The ceph.conf.j2 template always assumes the hostname used to register the
radosgw in the servicemap is equivalent to `{{ ansible_hostname }}`,
which returns the shortname form.

We need to detect which form of the hostname was used in the case of an
already deployed cluster and update the ceph.conf accordingly.
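
A hedged sketch of the detection step (task shape illustrative; the servicemap lists rgw daemons under their registered hostname):

```yaml
- name: query the servicemap of an already deployed cluster
  command: ceph --cluster {{ cluster }} service dump -f json
  register: _servicemap
  changed_when: false
```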

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1580408

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Guillaume Abrioux db29b5b84d config: clean template, remove useless conditions
There is no need to have all these conditions.

For instance, assuming `mds_group_name` is set to 'mdss':

  - `if groups[mds_group_name] is defined` checks if `'mdss'` is present in `{{ groups }}`

  - `if {{ mds_group_name }} in group_names` checks if the current node is part
  of the group `'mdss'`

  - `if inventory_hostname in groups.get(mds_group_name, [])` checks if
  the current node is part of the group 'mdss'

The third condition is enough to ensure we are
running on an mds node.
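
Keeping only the third form:

```
{% if inventory_hostname in groups.get(mds_group_name, []) %}
...
{% endif %}
```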

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Sébastien Han 4d64dd4686 rgw: ability to use ceph-ansible vars into containers
Since the container now simply reads the ceph.conf, we remove all the
unnecessary options.

Also this PR is the foundation to support multiple backends, such as the
new 'beast' from Ceph Mimic.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Sébastien Han ea9e60d48d config: enforce socket name
This was introduced by
59ee2e8d3b
and made our socket checks impossible to run. The PID could be found,
but the cctid could not.
This happens during upgrades to mimic and on clusters running mimic.

So let's force the admin socket back to the way it was, so we can properly
check for existing instances. The $cluster-$name.$pid.$cctid.asok form
is only needed when running multiple instances of the same daemon,
something ceph-ansible cannot do at the time of writing.
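
A hedged sketch of the enforced setting (the pre-change form):

```
admin socket = /var/run/ceph/$cluster-$name.asok
```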

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610220
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-31 10:58:04 +02:00
Sébastien Han 713b9fcf9b ceph-config: do not log cluster log on container
The container image recently merged both cluster and mon log into a
single stream. Following this, we now see this warning coming from the
container image:

```
2018-06-19 13:44:01.542990 7ff75b024700  1 mon.vm02@1(peon).log
v57928205 unable to write to '/var/log/ceph/ceph.log' for channel
'cluster': (2) No such file or directory
```

So we now tell the mon not to log the cluster log on the filesystem.
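
A sketch of one way to express this in the template, assuming `mon cluster log file` is the knob used:

```
[mon]
mon cluster log file = /dev/null
```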

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1591771
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-05 15:11:45 +00:00
Sébastien Han 6f9dd26caa config: remove any spaces in public_network or cluster_network
With two public networks configured as "NETWORK_ADDR_1, NETWORK_ADDR_2"
(note the space), we found that the install process consistently broke,
trying to find the docker registry on the second network and not
finding the mon container.

Without spaces, "NETWORK_ADDR_1,NETWORK_ADDR_2", the install succeeds,
so the containerized install is more particular about the formatting of this line.
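
A minimal sketch of the sanitisation, using jinja's `regex_replace` filter:

```
public network = {{ public_network | regex_replace('\s', '') }}
cluster network = {{ cluster_network | regex_replace('\s', '') }}
```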

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1534003
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-30 17:47:15 +01:00
Sébastien Han c2e04623a5 container: change the way we force no logs inside the container
Previously we were using ceph_conf_overrides; however, this doesn't play
nice with software like TripleO that uses ceph_conf_overrides inside its
own code. For now, and since this is the only occurrence of this, we can
ensure no logs through the ceph conf template.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1532619
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-10 16:21:47 +01:00
Major Hayden 5676fa23b1 Convert interface names to underscores for facts
If a deployer uses an interface name with a dash/hyphen in it, such
as 'br-storage' for the monitor_interface group_var, the ceph.conf.j2
template fails to find the right facts. It looks for
'ansible_br-storage' but only 'ansible_br_storage' exists.

This patch converts the interface name to underscores when the
template does the fact lookup.
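
The conversion as it appears in the template:

```
{% set interface = 'ansible_' + (monitor_interface | replace('-', '_')) %}
```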
2017-12-12 09:03:40 +01:00
Guillaume Abrioux 80d32decd3 config: fix config generation
The path to the fact is not correct.
In any case, we will retrieve the IP address from hostvars; the variable
is only the way we get the interface name, depending on where it has been
set (e.g. inventory host file vs. group_vars/).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1510906

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-09 08:50:57 +01:00
Sébastien Han ab7eb79212 config: fix monitor_interface when not passed in the inventory file
Setting monitor_interface in group_vars/all.yml makes
hostvars[host]['monitor_interface'] undefined.
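
A hedged sketch of the fallback (exact template logic may differ):

```
{% set _monitor_interface = hostvars[host]['monitor_interface'] | default(monitor_interface) %}
```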

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1507922
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-03 09:25:02 +01:00
Sébastien Han 8670b45ef2 rgw/nfs: fix section duplication
Once and for all, hopefully...

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-25 15:45:37 +02:00
Sébastien Han 54de2efc5d Merge pull request #2082 from ceph/restapi-cephconf
common: move restapi template to config
2017-10-20 14:07:48 +02:00
Sébastien Han 4413511b66 all: backward compatibility between stable-2.2 and 3.0
stable-3.0 brought numerous changes in ceph-ansible variables; this PR
aims to maintain backward compatibility for someone running stable-2.2
who upgrades to stable-3.0 but keeps their group_vars untouched.
We will then determine the right options to make sure the upgrade works,
but we expect that the new variables will be used.

We will drop this in the near future, maybe 3.1 or 3.2.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:54:10 +02:00
Sébastien Han ba5c6e66f0 common: move restapi template to config
Closes: github.com/ceph/ceph-ansible/issues/1981
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:14:13 +02:00
Sébastien Han aa70b07ae2 config: properly render ceph.conf when doing collocation
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-11 18:29:34 +02:00
Sébastien Han 1bd891232c config: do not duplicate sections when doing collocation
Prior to this commit, when collocating an RGW and NFS on the same box, the
ceph.conf layout was the following:

```
[client.rgw.rgw0]
host = mds0
host = rgw0
rgw frontends = civetweb port=192.168.15.50:8080
num_threads=100[client.rgw.mds0]
rgw frontends = civetweb port=192.168.15.70:8080 num_threads=100
rgw frontends = civetweb port=192.168.15.50:8080 num_threads=100
keyring = /var/lib/ceph/radosgw/test-rgw.mds0/keyring
keyring = /var/lib/ceph/radosgw/test-rgw.rgw0/keyring
rgw data = /var/lib/ceph/radosgw/test-rgw.rgw0
log file = /var/log/ceph/test-rgw-mds0.log
log file = /var/log/ceph/test-rgw-rgw0.log

[mds.mds0]
host = mds0

[global]
rgw override bucket index max shards = 16
fsid = 70e1d368-57b3-4978-b746-cbffce6e56b5
rgw bucket default quota max objects = 1638400
osd_pool_default_size = 1
public network = 192.168.15.0/24
mon host = 192.168.15.10,192.168.15.11,192.168.15.12
osd_pool_default_pg_num = 8
cluster network = 192.168.16.0/24

[mds.rgw0]
host = rgw0

[client.rgw.mds0]
host = mds0
rgw data = /var/lib/ceph/radosgw/test-rgw.mds0
keyring = /var/lib/ceph/radosgw/test-rgw.mds0/keyring
rgw frontends = civetweb port=192.168.15.70:8080 num_threads=100
log file = /var/log/ceph/test-rgw-mds0.log
```

Basically it was appending all the sections. This commit solves that.
Now the sections appear like this:

```
-bash-4.2# cat /etc/ceph/test.conf
[client.rgw.rgw0]
log file = /var/log/ceph/test-rgw-rgw0.log
host = rgw0
keyring = /var/lib/ceph/radosgw/test-rgw.rgw0/keyring
rgw frontends = civetweb port=192.168.15.50:8080 num_threads=100

[client.rgw.mds0]
log file = /var/log/ceph/test-rgw-mds0.log
host = mds0
keyring = /var/lib/ceph/radosgw/test-rgw.mds0/keyring
rgw frontends = civetweb port=192.168.15.70:8080 num_threads=100

[global]
cluster network = 192.168.16.0/24
mon host = 192.168.15.10,192.168.15.11,192.168.15.12
osd_pool_default_size = 1
public network = 192.168.15.0/24
rgw bucket default quota max objects = 1638400
osd_pool_default_pg_num = 8
rgw override bucket index max shards = 16
fsid = 77a21980-3033-4174-9264-1abc7185bcb3

[mds.rgw0]
host = rgw0

[mds.mds0]
host = mds0
```

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-09 17:25:44 +02:00
Guillaume Abrioux be757122f1 config: fix path to set `interface` in ceph.conf
We need to use `hostvars[host]['XXX']` to retrieve the monitor
interface and/or the radosgw interface.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1493920

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-09-23 14:18:28 +02:00
Sébastien Han 4a55085914 config: fix rgw interface when using different interfaces
Conf file generation was failing on rgw nodes when the nodes have
different interface names.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1493552
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-22 15:41:11 +02:00
Sébastien Han adf5017924 config: remove max open file
This is only used by the old sysvinit scripts.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-20 23:06:01 +02:00
Sébastien Han a4baed1025 config: do not generate osd section if bluestore
This section is not needed when running a bluestore osd.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-20 18:14:48 +02:00
Sébastien Han 6f0b1fe803 rgw: remove old variables
Since we only support civetweb, these variables are obsolete.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-14 09:42:50 -06:00
Ali Maredia 52efe92a87 nfs: configure RGW FSAL to start up correctly
- Add the RGW keyring to the nfs node
- Add an RGW section to ganesha.conf (see the sketch below)
- Add an RGW section to ceph.conf on the nfs node
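
A hedged sketch of the ganesha.conf export block (all values illustrative):

```
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/rgw";
    Access_Type = RW;

    FSAL {
        Name = RGW;
        User_Id = "nfs.rgw";           # illustrative rgw user
        Access_Key_Id = "ACCESS_KEY";  # illustrative
        Secret_Access_Key = "SECRET";  # illustrative
    }
}
```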

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2017-09-12 16:27:16 -04:00
Guillaume Abrioux 539197a2fc Introduce new role ceph-config.
This will give us more flexibility and the possibility to deploy a client node
for an external ceph-cluster.

related BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1469426

Fixes: #1670

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-08-24 11:33:03 +02:00