
init-ceph: Adding do_cmd_okfail when calling ceph-crush-location

Message ID C8CF870384EBCF4CAB89D2041A5C80B301135242@SHSMSX101.ccr.corp.intel.com (mailing list archive)
State New, archived

Commit Message

Xue, Chendi July 30, 2014, 12:55 a.m. UTC
From 797dbf4f027200bb22ba89d2f616e254321ec023 Mon Sep 17 00:00:00 2001
From: Chendi Xue <chendi.xue@intel.com>
Date: Tue, 29 Jul 2014 16:45:45 +0800
Subject: [PATCH 8/8] Adding do_cmd_okfail when calling ceph-crush-location

The original init-ceph doesn't use do_cmd_okfail when getting osd_location,
which results in OSDs being mapped to the wrong location (verified with "ceph osd tree")

Signed-off-by: Chendi Xue <chendi.xue@intel.com>
---
src/init-ceph.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Sage Weil July 30, 2014, 1:02 a.m. UTC | #1
Hi,

On Wed, 30 Jul 2014, Xue, Chendi wrote:
> From 797dbf4f027200bb22ba89d2f616e254321ec023 Mon Sep 17 00:00:00 2001
> From: Chendi Xue <chendi.xue@intel.com>
> Date: Tue, 29 Jul 2014 16:45:45 +0800
> Subject: [PATCH 8/8] Adding do_cmd_okfail when calling ceph-crush-location
> 
> The original init-ceph doesn't use do_cmd_okfail when getting osd_location,
> which results in OSDs being mapped to the wrong location (verified with "ceph osd tree")

Is this when you are using the -a switch, or when you are starting daemons 
on the local host?  The -a stuff isn't well supported any more (and I've 
half a mind to kill support for it entirely if the world weren't going 
toward systemd anyway).

Anyway, I'm worried about how the _okfail variant will change the 
behavior here if the location hook script exits with an error for some 
reason...

sage


> 
> Signed-off-by: Chendi Xue <chendi.xue@intel.com>
> ---
> src/init-ceph.in | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/init-ceph.in b/src/init-ceph.in
> index 86eb87f..09c7cbc 100644
> --- a/src/init-ceph.in
> +++ b/src/init-ceph.in
> @@ -327,7 +327,7 @@ for name in $what; do
>                                if [ "${update_crush:-1}" = "1" -o "${update_crush:-1}" = "true" ]; then
>                                    # update location in crush
>                                    get_conf osd_location_hook "$BINDIR/ceph-crush-location" "osd crush location hook"
> -                                  osd_location=`$osd_location_hook --cluster ceph --id $id --type osd`
> +                                 osd_location=$(do_cmd_okfail "$osd_location_hook --cluster ceph --id $id --type osd")
>                                    get_conf osd_weight "" "osd crush initial weight"
>                                    defaultweight="$(df -P -k $osd_data/. | tail -1 | awk '{ print sprintf("%.2f",$2/1073741824) }')"
>                                    get_conf osd_keyring "$osd_data/keyring" "keyring"
> -- 
> 1.9.1
> 
> 
> Best Regards,
> -Chendi
> 
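To make the distinction Sage raises concrete, here is a rough sketch of the difference between the two helpers. These are NOT the real init-ceph.in definitions (the real do_cmd/do_cmd_okfail also run the command over ssh when -a is in effect); this simplified stand-in only models the error-handling difference being discussed:

```shell
#!/bin/sh
# Simplified stand-ins for the init-ceph.in helpers (illustration only).

do_cmd() {
    # run the command; abort the whole init script if it fails
    sh -c "$1" || { echo "failed: '$1'" >&2; exit 1; }
}

do_cmd_okfail() {
    # run the command; record failure in ERR but let the script continue
    ERR=0
    sh -c "$1" || { echo "failed: '$1'" >&2; ERR=1; return 1; }
}

do_cmd_okfail "false"            # fails, but execution continues
echo "after okfail, ERR=$ERR"
do_cmd "echo hook ran"           # succeeds; a failure here would exit the script
echo "still running"
```

This is why the choice matters: with _okfail, a location hook that exits non-zero would leave osd_location empty (or partial) and the script would silently press on, whereas do_cmd stops the script outright.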
Sage Weil July 30, 2014, 3:44 a.m. UTC | #2
On Wed, 30 Jul 2014, Xue, Chendi wrote:
> Hi, Sage
> 
> Yes, this happened when I use the -a switch.
> 
> If I use do_cmd instead of do_cmd_okfail to call the location hook here, 
> would it be more suitable?

I think that'll work, yeah!

> I think there are still a bunch of people using "/etc/init.d/ceph -a 
> start" to start the ceph daemons, so why leave this small problem in 
> init-ceph while "-a" is still supported?

I think it's fine.  The main issue is that the -a mode only works for 
clusters deployed the 'old' way (not with ceph-deploy or chef or puppet).

Anyway, I'll take a patch with do_cmd (provided you tested it and it works 
for you).  :)

sage


> 
> 
> Best Regards,
> -Chendi
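The point of routing the hook invocation through a helper at all can be illustrated with a small self-contained example (the hook path and its output below are made up for illustration; the real hook is ceph-crush-location). Command substitution captures whatever the helper's command prints, so under -a the hook can run on the remote host via ssh and its output still lands in osd_location on the node driving the script:

```shell
#!/bin/sh
# Hypothetical sketch: capturing a location hook's output through a wrapper.

do_cmd() {
    # stand-in for init-ceph.in's do_cmd; the real one may ssh to the
    # daemon's host when -a is in effect
    sh -c "$1"
}

# fake crush-location hook for illustration only
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/sh
echo "host=node1 root=default"
EOF
chmod +x "$hook"

id=0
osd_location=$(do_cmd "$hook --cluster ceph --id $id --type osd")
echo "osd_location=$osd_location"
rm -f "$hook"
```

With the original backtick form, the hook always ran locally, so under -a a remote OSD could be registered with the local machine's CRUSH location, which matches the wrong mapping the patch reports.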

Patch

diff --git a/src/init-ceph.in b/src/init-ceph.in
index 86eb87f..09c7cbc 100644
--- a/src/init-ceph.in
+++ b/src/init-ceph.in
@@ -327,7 +327,7 @@  for name in $what; do
                               if [ "${update_crush:-1}" = "1" -o "${update_crush:-1}" = "true" ]; then
                                   # update location in crush
                                   get_conf osd_location_hook "$BINDIR/ceph-crush-location" "osd crush location hook"
-                                  osd_location=`$osd_location_hook --cluster ceph --id $id --type osd`
+                                 osd_location=$(do_cmd_okfail "$osd_location_hook --cluster ceph --id $id --type osd")
                                   get_conf osd_weight "" "osd crush initial weight"
                                   defaultweight="$(df -P -k $osd_data/. | tail -1 | awk '{ print sprintf("%.2f",$2/1073741824) }')"
                                   get_conf osd_keyring "$osd_data/keyring" "keyring"