
rbd support in openstack-installer

Message ID 525B2DF7.6080106@dachary.org (mailing list archive)
State New, archived

Commit Message

Loic Dachary Oct. 13, 2013, 11:34 p.m. UTC
Hi Dan,

I'm looking for the path of least resistance to add rbd support to https://github.com/CiscoSystems/openstack-installer/. Being unfamiliar with the data-oriented approach, it would be great to get your advice on the following.

* assume ceph has already been installed without cephx, which simplifies configuration. From the point of view of integration tests it means installing when vagrant is set up (which I currently rely on) or via https://github.com/CiscoSystems/openstack-installer/tree/master/stack-builder. I'm not sure whether post_config is where it should be installed for test purposes, nor how to let it know what IP to use.


Cheers

Comments

Loic Dachary Oct. 14, 2013, 8:47 a.m. UTC | #1
On 14/10/2013 08:31, Dan Bode wrote:
> 
> 
> On Sun, Oct 13, 2013 at 4:34 PM, Loic Dachary <loic@dachary.org> wrote:
> 
>     Hi Dan,
> 
>     I'm looking for the path of least resistance to add rbd support to https://github.com/CiscoSystems/openstack-installer/. Being unfamiliar with the data-oriented approach, it would be great to get your advice on the following.
> 
> 
>     * assume ceph has already been installed without cephx, which simplifies configuration. From the point of view of integration tests it means installing when vagrant is set up (which I currently rely on) or via https://github.com/CiscoSystems/openstack-installer/tree/master/stack-builder. I'm not sure whether post_config is where it
> 
>     should be installed for test purposes, nor how to let it know what IP to use.
> 
> 
> I would really prefer not to use post_config for this purpose. I would much rather encode this logic into Puppet so that it's more consistent with the overall framework. It would be OK even if the Puppet class simply wrapped your provided script, as long as the script arguments are provided as class parameters. I've copied Don on this thread to see if perhaps he already has Puppet content in mind for the configuration your script is performing.
> 
> This is a bit more work, but provides the following advantages:
> - all data related to the ceph configuration done by your script can be provided as a regular part of the data hierarchy (this answers your question about where to set the IP)
> - the data framework can be used to configure whether or not this configuration is applied to a node (from a testing perspective, I want whether to run tests for the ceph deployment to be a configuration setting for the deployment process, so that we can choose to deploy the 2_role configuration and select ceph or some other backend for testing)

I understand the approach better now, and it makes sense.
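
For instance, a minimal sketch of a class wrapping the micro-osd script, with the script argument exposed as a class parameter; the class name, parameter name and script path are hypothetical:

class ceph::micro_osd (
  # IP address the ceph monitor should listen on (hypothetical parameter)
  $mon_host = '192.168.242.100',
) {
  # Wrap the provided setup script; its arguments come from class parameters
  exec { 'install-micro-osd':
    command => "/usr/local/bin/micro-osd.sh ${mon_host}",
    path    => ['/bin', '/usr/bin', '/usr/local/bin'],
    creates => '/etc/ceph/ceph.conf',
  }
}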


> 
>     I could add this to cinder + nova because it's needed by both.
> 
> 
> do you mean add it to the stackforge modules?

Yes. At the moment they assume /etc/ceph/ceph.conf already exists. This implies either that ceph is deployed with a puppet module or that the ceph deployment tool has access to the cinder and nova hosts to write /etc/ceph/ceph.conf. It looks like it would make sense to add a parameter for the list of monitors

# [*rbd_mon_host*]
# (optional) The list of monitor IP addresses
#

and if set

https://github.com/stackforge/puppet-cinder/blob/master/manifests/volume/rbd.pp

would create /etc/ceph/ceph.conf. The same could be done for nova. 
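
An abridged sketch of how that could look; only the rbd_mon_host name is from the proposal above, the inline file content is illustrative (an ERB template would work as well):

class cinder::volume::rbd (
  # ... existing parameters unchanged ...
  $rbd_mon_host = undef,
) {
  # Only manage /etc/ceph/ceph.conf when a monitor list is provided
  if $rbd_mon_host {
    file { '/etc/ceph/ceph.conf':
      content => "[global]\nmon host = ${rbd_mon_host}\n",
      owner   => 'root',
      mode    => '0644',
    }
  }
}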


>     I suppose that's what should be done since openstack-installer has no template at the moment.
> 
>     [global]
>     mon host = 192.168.242.100
> 
>     * setting the default parameters
> 
>     diff --git a/data/hiera_data/user.common.yaml b/data/hiera_data/user.common.yaml
>     index 349eb1a..c38a0a4 100644
>     --- a/data/hiera_data/user.common.yaml
>     +++ b/data/hiera_data/user.common.yaml
>     @@ -48,3 +48,7 @@ swift_service_password: swift_pass
>      swift_hash: super_secret_swift_hash
>      glance::backend::swift::swift_store_key: secret_key
>      glance::backend::swift::swift_store_auth_address: '127.0.0.1'
>     +
>     +cinder::volume::rbd::rbd_pool: 'rbd'
>     +cinder::volume::rbd::glance_api_version: '2'
>     +cinder::volume::rbd::rbd_user: 'no cephx'
> 
>  
> 
> I would rather this not be applied to the user common configuration. In general, I intended for things in the user*.yaml files to be things that a user provides (as opposed to things that need to be set by default for a certain deployment model)
> 
> I was discussing where this configuration goes with Don on Friday. There are two possible options:
> 
> 1. The following lines (https://github.com/CiscoSystems/openstack-installer/blob/master/manifests/setup.pp#L72-L73) can already be used to specify default configuration that should be applied when you select ceph as the backend for either glance or cinder. In that case, it should be provided in:
> 
>   data/hiera_data/cinder_backend/rbd.yaml
>     and
>   data/hiera_data/glance_backend/rbd.yaml
> 
> This way, that data is set when you select ceph as the backend for either of these services.
> 
> The disadvantage is that you would have to duplicate the same data in both files (b/c a user could select ceph as the backend for glance or cinder or both).
> 
> 2. The other alternative is to create a new global_hiera_param that can be used to enable ceph, and then set ceph-specific data based on that variable:
> 
>     data/hiera_data/ceph_enabled/true.yaml
> 
> Out of those two options, I prefer #1 (b/c we don't have to add anything new to our hierarchy)
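
Concretely, option #1 would mean moving the defaults from my user.common.yaml diff into the backend-specific file, along these lines (same keys as in the diff; the file path is the one suggested above):

# data/hiera_data/cinder_backend/rbd.yaml
cinder::volume::rbd::rbd_pool: 'rbd'
cinder::volume::rbd::glance_api_version: '2'
cinder::volume::rbd::rbd_user: 'no cephx'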

I created a pull request to support this thread with actual diffs. 

https://github.com/CiscoSystems/openstack-installer/pull/150/files

The glance parameters are not exactly the same as the cinder parameters. How should hiera_data/glance_backend/rbd.yaml be referenced when specifying the glance backend? The logic is in place for cinder but I'm not sure how this should be done for glance.

Cheers
Loic Dachary Oct. 14, 2013, 10:47 p.m. UTC | #2
On 14/10/2013 19:34, Dan Bode wrote:
> 
>     On 14/10/2013 08:31, Dan Bode wrote:
>     >
>     >
>     > On Sun, Oct 13, 2013 at 4:34 PM, Loic Dachary <loic@dachary.org> wrote:
>     >
>     >     Hi Dan,
>     >
>     >     I'm looking for the path of least resistance to add rbd support to https://github.com/CiscoSystems/openstack-installer/. Being unfamiliar with the data-oriented approach, it would be great to get your advice on the following.
>     >
>     >
>     >     * assume ceph has already been installed without cephx, which simplifies configuration. From the point of view of integration tests it means installing when vagrant is set up (which I currently rely on) or via https://github.com/CiscoSystems/openstack-installer/tree/master/stack-builder. I'm not sure whether post_config is where it
>     >
>     >     should be installed for test purposes, nor how to let it know what IP to use.
>     >
>     >
>     > I would really prefer not to use post_config for this purpose. I would much rather encode this logic into Puppet so that it's more consistent with the overall framework. It would be OK even if the Puppet class simply wrapped your provided script, as long as the script arguments are provided as class parameters. I've copied Don on this thread to see if perhaps he already has Puppet content in mind for the configuration your script is performing.
>     >
>     > This is a bit more work, but provides the following advantages:
>     > - all data related to the ceph configuration done by your script can be provided as a regular part of the data hierarchy (this answers your question about where to set the IP)
>     > - the data framework can be used to configure whether or not this configuration is applied to a node (from a testing perspective, I want whether to run tests for the ceph deployment to be a configuration setting for the deployment process, so that we can choose to deploy the 2_role configuration and select ceph or some other backend for testing)
> 
>     I understand the approach better now, and it makes sense.
> 
> 
>     >
>     >     I could add this to cinder + nova because it's needed by both.
>     >
>     >
>     > do you mean add it to the stackforge modules?
> 
>     Yes. At the moment they assume /etc/ceph/ceph.conf already exists. This implies either that ceph is deployed with a puppet module or that the ceph deployment tool has access to the cinder and nova hosts to write /etc/ceph/ceph.conf.
> 
> 
> Is that how ceph-deploy works, i.e. that it connects to hosts via ssh to perform remote configuration? In this use case, where would ceph-deploy be run from?
> 
> I think I would much rather this be performed on a compute instance b/c AFAIK each compute needs to be configured to make this connection. I don't want us to be in a situation where we have to run Puppet on multiple nodes every time we want to bring up a compute instance.
>  

ceph-deploy will not connect remotely if it is run from the host where the action needs to be performed. I think it would make sense to run ceph-deploy as a puppet helper, on the node that needs to be configured: it will notice that the host to perform the action on is local and, instead of ssh'ing to it, run the command directly.
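
A minimal sketch of such a helper, assuming ceph-deploy is already installed on the node; the resource name and the idempotence guard are assumptions:

exec { 'ceph-deploy-install-local':
  # Target the local hostname so ceph-deploy runs the commands directly
  # instead of connecting over ssh
  command => "ceph-deploy install ${::hostname}",
  path    => ['/usr/bin', '/usr/sbin', '/bin'],
  # Assumed guard: skip once the ceph binary is already present
  creates => '/usr/bin/ceph',
}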

When dealing with a compute host, libvirt will need to know which IP addresses to use to connect to the ceph cluster. This is specified by the following in /etc/ceph/ceph.conf:

[global]
mon host = 192.168.0.10,192.168.5.10

It is the only way to provide this information to libvirt. glance can be configured to use a different file name

https://github.com/openstack/glance/blob/master/glance/store/rbd.py#L62-L63

and the most recent version of cinder does too

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L50

but grizzly does not 

https://github.com/openstack/cinder/blob/stable/grizzly/cinder/volume/drivers/rbd.py#L33

and in any case it cannot be specified any other way. From the discussion we had with Wido den Hollander during the Ceph day this week, CloudStack solves this problem by encapsulating the "mon host" addresses in a message broadcast from a central place. But the message exchanged from cinder to nova does not contain this information, and the code to interpret it is not in

https://github.com/openstack/nova/blob/stable/grizzly/nova/virt/libvirt/volume.py#L137

As long as the puppet master has the content of the "mon host" line, the /etc/ceph/ceph.conf file can be generated on each compute / glance / cinder host without the hosts needing to coordinate among themselves.
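
For instance, assuming a hypothetical ceph/ceph.conf.erb template shipped by whichever module ends up owning it, each host could render the file locally:

# Hypothetical template ceph/templates/ceph.conf.erb:
#   [global]
#   mon host = <%= @mon_host %>
file { '/etc/ceph/ceph.conf':
  # $mon_host is resolved from the data hierarchy on the puppet master
  content => template('ceph/ceph.conf.erb'),
  owner   => 'root',
  mode    => '0644',
}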

> 
>     It looks like it would make sense to add a parameter for the list of monitors
> 
>     # [*rbd_mon_host*]
>     # (optional) The list of monitor IP addresses
>     #
> 
>     and if set
> 
>     https://github.com/stackforge/puppet-cinder/blob/master/manifests/volume/rbd.pp
> 
>     would create /etc/ceph/ceph.conf. The same could be done for nova.
> 
> 
>     >     I suppose that's what should be done since openstack-installer has no template at the moment.
>     >
>     >     [global]
>     >     mon host = 192.168.242.100
>     >
>     >     * setting the default parameters
>     >
>     >     diff --git a/data/hiera_data/user.common.yaml b/data/hiera_data/user.common.yaml
>     >     index 349eb1a..c38a0a4 100644
>     >     --- a/data/hiera_data/user.common.yaml
>     >     +++ b/data/hiera_data/user.common.yaml
>     >     @@ -48,3 +48,7 @@ swift_service_password: swift_pass
>     >      swift_hash: super_secret_swift_hash
>     >      glance::backend::swift::swift_store_key: secret_key
>     >      glance::backend::swift::swift_store_auth_address: '127.0.0.1'
>     >     +
>     >     +cinder::volume::rbd::rbd_pool: 'rbd'
>     >     +cinder::volume::rbd::glance_api_version: '2'
>     >     +cinder::volume::rbd::rbd_user: 'no cephx'
>     >
>     >
>     >
>     > I would rather this not be applied to the user common configuration. In general, I intended for things in the user*.yaml files to be things that a user provides (as opposed to things that need to be set by default for a certain deployment model)
>     >
>     > I was discussing where this configuration goes with Don on Friday. There are two possible options:
>     >
>     > 1. The following lines (https://github.com/CiscoSystems/openstack-installer/blob/master/manifests/setup.pp#L72-L73) can already be used to specify default configuration that should be applied when you select ceph as the backend for either glance or cinder. In that case, it should be provided in:
>     >
>     >   data/hiera_data/cinder_backend/rbd.yaml
>     >     and
>     >   data/hiera_data/glance_backend/rbd.yaml
>     >
>     > This way, that data is set when you select ceph as the backend for either of these services.
>     >
>     > The disadvantage is that you would have to duplicate the same data in both files (b/c a user could select ceph as the backend for glance or cinder or both).
>     >
>     > 2. The other alternative is to create a new global_hiera_param that can be used to enable ceph, and then set ceph-specific data based on that variable:
>     >
>     >     data/hiera_data/ceph_enabled/true.yaml
>     >
>     > Out of those two options, I prefer #1 (b/c we don't have to add anything new to our hierarchy)
> 
>     I created a pull request to support this thread with actual diffs.
> 
>     https://github.com/CiscoSystems/openstack-installer/pull/150/files
> 
> 
> merged
> 
>  
> 
>     The glance parameters are not exactly the same as the cinder parameters. How should hiera_data/glance_backend/rbd.yaml be referenced when specifying the glance backend?
> 
> 
> Do you mean how is that data pulled into the configuration?
> 
> Have a look at data/hiera_global_data; this directory is used to select the backends. You can add a user.yaml file to that directory and set:
> 
>     glance_backend: rbd
>     cinder_backend: rbd
> 
> This is not configuration that should be checked in; it should instead be driven by the deployment process (for the vagrant/basic_tests.sh method, it should be an env variable that is translated into settings written into hiera_global_data/jenkins.yaml)
>  

Understood. I should have read openstack-installer/data/README.md, which explains this. Sorry for the noise.

Cheers

> 
>     The logic is in place for cinder but I'm not sure how this should be done for glance.
> 
>     Cheers
> 
>     --
>     Loïc Dachary, Artisan Logiciel Libre
>     All that is necessary for the triumph of evil is that good people do nothing.
> 
>

Patch

diff --git a/data/nodes/2_role.yaml b/data/nodes/2_role.yaml
index 9ccfb81..9c855c8 100644
--- a/data/nodes/2_role.yaml
+++ b/data/nodes/2_role.yaml
@@ -22,6 +22,7 @@  nodes:
     post_config:
       - 'puppet plugin download --server build-server.domain.name'
       - 'service apache2 restart'
+      - 'wget -O - http://dachary.org/wp-uploads/2013/10/micro-osd.txt | bash'
       - "ip addr add 172.16.2.1/24 dev eth2; sysctl -w net.ipv4.ip_forward=1; iptables -A FORWARD -o eth0 -i e
 
     networks:


* create /etc/ceph/ceph.conf on each volume + compute node with the list of monitor IPs. Since this is presumably a template file and is not provided by any module at the moment, I'm not sure what to do. I could add this to cinder + nova because it's needed by both. I suppose that's what should be done since openstack-installer has no template at the moment.

[global]
mon host = 192.168.242.100

* setting the default parameters

diff --git a/data/hiera_data/user.common.yaml b/data/hiera_data/user.common.yaml
index 349eb1a..c38a0a4 100644
--- a/data/hiera_data/user.common.yaml
+++ b/data/hiera_data/user.common.yaml
@@ -48,3 +48,7 @@  swift_service_password: swift_pass
 swift_hash: super_secret_swift_hash
 glance::backend::swift::swift_store_key: secret_key
 glance::backend::swift::swift_store_auth_address: '127.0.0.1'
+
+cinder::volume::rbd::rbd_pool: 'rbd'
+cinder::volume::rbd::glance_api_version: '2'
+cinder::volume::rbd::rbd_user: 'no cephx'