From patchwork Sat Jun 1 01:20:43 2013
X-Patchwork-Submitter: Alex Elder
X-Patchwork-Id: 2646391
Message-ID: <51A94C6B.4000102@inktank.com>
Date: Fri, 31 May 2013 20:20:43 -0500
From: Alex Elder
To: ceph-devel
Subject: [PATCH 4/5] rbd: use rwsem to protect header updates
References: <51A94BC0.4080703@inktank.com>
In-Reply-To: <51A94BC0.4080703@inktank.com>
X-Mailing-List: ceph-devel@vger.kernel.org

Updating an image header needs to be protected to ensure it's done
consistently.  However, distinct headers can be updated concurrently
without a problem.  Instead of using the global control lock to
serialize header updates, just rely on the header semaphore.  (It's
already used; this just moves it out to cover a broader section of
the code.)

That leaves the control mutex protecting only the creation of rbd
clients, so rename it.
This resolves:
    http://tracker.ceph.com/issues/5222

Signed-off-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 9c81a5c..107e1e5 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -372,7 +372,7 @@ enum rbd_dev_flags {
 	RBD_DEV_FLAG_REMOVING,	/* this mapping is being removed */
 };
 
-static DEFINE_MUTEX(ctl_mutex);	  /* Serialize open/close/setup/teardown */
+static DEFINE_MUTEX(client_mutex);	/* Serialize client creation */
 
 static LIST_HEAD(rbd_dev_list);    /* devices */
 static DEFINE_SPINLOCK(rbd_dev_list_lock);
@@ -518,7 +518,7 @@ static const struct block_device_operations rbd_bd_ops = {
 
 /*
  * Initialize an rbd client instance.  Success or not, this function
- * consumes ceph_opts.  Caller holds ctl_mutex.
+ * consumes ceph_opts.  Caller holds client_mutex.
  */
 static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts)
 {
@@ -675,13 +675,13 @@ static struct rbd_client *rbd_get_client(struct ceph_options *ceph_opts)
 {
 	struct rbd_client *rbdc;
 
-	mutex_lock_nested(&ctl_mutex, SINGLE_DEPTH_NESTING);
+	mutex_lock_nested(&client_mutex, SINGLE_DEPTH_NESTING);
 	rbdc = rbd_client_find(ceph_opts);
 	if (rbdc)	/* using an existing client */
 		ceph_destroy_options(ceph_opts);
 	else
 		rbdc = rbd_client_create(ceph_opts);
-	mutex_unlock(&ctl_mutex);
+	mutex_unlock(&client_mutex);
 
 	return rbdc;
 }
@@ -835,7 +835,6 @@ static int rbd_header_from_disk(struct rbd_device *rbd_dev,
 
 	/* We won't fail any more, fill in the header */
 
-	down_write(&rbd_dev->header_rwsem);
 	if (first_time) {
 		header->object_prefix = object_prefix;
 		header->obj_order = ondisk->options.order;
@@ -864,8 +863,6 @@ static int rbd_header_from_disk(struct rbd_device *rbd_dev,
 	if (rbd_dev->mapping.size != header->image_size)
 		rbd_dev->mapping.size = header->image_size;
 
-	up_write(&rbd_dev->header_rwsem);
-
 	return 0;
 out_2big:
 	ret = -EIO;
@@ -3349,17 +3346,17 @@ static int rbd_dev_refresh(struct rbd_device *rbd_dev)
 	int ret;
 
 	rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
-	mutex_lock_nested(&ctl_mutex, SINGLE_DEPTH_NESTING);
+	down_write(&rbd_dev->header_rwsem);
 	mapping_size = rbd_dev->mapping.size;
 	if (rbd_dev->image_format == 1)
 		ret = rbd_dev_v1_header_info(rbd_dev);
 	else
 		ret = rbd_dev_v2_header_info(rbd_dev);
+	up_write(&rbd_dev->header_rwsem);
 
 	/* If it's a mapped snapshot, validate its EXISTS flag */
 
 	rbd_exists_validate(rbd_dev);
-	mutex_unlock(&ctl_mutex);
 	if (mapping_size != rbd_dev->mapping.size) {
 		sector_t size;
@@ -4288,12 +4285,10 @@ static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)
 	bool first_time = rbd_dev->header.object_prefix == NULL;
 	int ret;
 
-	down_write(&rbd_dev->header_rwsem);
-
 	if (first_time) {
 		ret = rbd_dev_v2_header_onetime(rbd_dev);
 		if (ret)
-			goto out;
+			return ret;
 	}
 
 	/*
@@ -4308,7 +4303,7 @@ static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)
 
 	ret = rbd_dev_v2_parent_info(rbd_dev);
 	if (ret)
-		goto out;
+		return ret;
 
 	/*
 	 * Print a warning if this is the initial probe and
@@ -4325,7 +4320,7 @@ static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)
 
 	ret = rbd_dev_v2_image_size(rbd_dev);
 	if (ret)
-		goto out;
+		return ret;
 
 	if (rbd_dev->spec->snap_id == CEPH_NOSNAP)
 		if (rbd_dev->mapping.size != rbd_dev->header.image_size)
@@ -4333,8 +4328,6 @@ static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)
 
 	ret = rbd_dev_v2_snap_context(rbd_dev);
 	dout("rbd_dev_v2_snap_context returned %d\n", ret);
-out:
-	up_write(&rbd_dev->header_rwsem);
 
 	return ret;
 }