From patchwork Mon Mar 26 09:50:23 2018
X-Patchwork-Submitter: Dongsheng Yang
X-Patchwork-Id: 10307515
From: Dongsheng Yang <dongsheng.yang@easystack.cn>
To: idryomov@gmail.com, sage@redhat.com, elder@kernel.org, jdillama@redhat.com
Cc: ceph-devel@vger.kernel.org, Dongsheng Yang
Subject: [PATCH v2] rbd: support timeout in rbd_wait_state_locked
Date: Mon, 26 Mar 2018 05:50:23 -0400
Message-Id: <1522057823-32315-1-git-send-email-dongsheng.yang@easystack.cn>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: ceph-devel@vger.kernel.org

Currently, rbd_wait_state_locked() waits forever if the exclusive lock
cannot be acquired. Example:

rbd map --exclusive test1                   --> /dev/rbd0
rbd map test1                               --> /dev/rbd1
dd if=/dev/zero of=/dev/rbd1 bs=1M count=1  --> IO blocked

To avoid this problem, this patch introduces a timeout in
rbd_wait_state_locked(), which now returns an error when the timeout is
reached. It also allows the user to set lock_timeout when mapping an
rbd image.
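
With the timeout set, the blocked case above becomes bounded. For
illustration (the lock_timeout token and its seconds unit come from the
patch; the "-o" invocation and the 30-second value are only an example):

rbd map --exclusive test1                   --> /dev/rbd0
rbd map -o lock_timeout=30 test1            --> /dev/rbd1
dd if=/dev/zero of=/dev/rbd1 bs=1M count=1  --> fails with ETIMEDOUT
                                                after ~30s instead of
                                                blocking forever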
Signed-off-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
---
 drivers/block/rbd.c | 51 +++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 39 insertions(+), 12 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index f1d9e60..8824053 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -726,6 +726,7 @@ static struct rbd_client *rbd_client_find(struct ceph_options *ceph_opts)
  */
 enum {
 	Opt_queue_depth,
+	Opt_lock_timeout,
 	Opt_last_int,
 	/* int args above */
 	Opt_last_string,
@@ -739,6 +740,7 @@ enum {
 
 static match_table_t rbd_opts_tokens = {
 	{Opt_queue_depth, "queue_depth=%d"},
+	{Opt_lock_timeout, "lock_timeout=%d"},
 	/* int args above */
 	/* string args above */
 	{Opt_read_only, "read_only"},
@@ -751,16 +753,18 @@ enum {
 };
 
 struct rbd_options {
-	int	queue_depth;
-	bool	read_only;
-	bool	lock_on_read;
-	bool	exclusive;
+	int		queue_depth;
+	unsigned long	lock_timeout;
+	bool		read_only;
+	bool		lock_on_read;
+	bool		exclusive;
 };
 
-#define RBD_QUEUE_DEPTH_DEFAULT	BLKDEV_MAX_RQ
-#define RBD_READ_ONLY_DEFAULT	false
-#define RBD_LOCK_ON_READ_DEFAULT	false
-#define RBD_EXCLUSIVE_DEFAULT	false
+#define RBD_QUEUE_DEPTH_DEFAULT		BLKDEV_MAX_RQ
+#define RBD_READ_ONLY_DEFAULT		false
+#define RBD_LOCK_ON_READ_DEFAULT	false
+#define RBD_EXCLUSIVE_DEFAULT		false
+#define RBD_LOCK_TIMEOUT_DEFAULT	0  /* no timeout */
 
 static int parse_rbd_opts_token(char *c, void *private)
 {
@@ -790,6 +794,14 @@ static int parse_rbd_opts_token(char *c, void *private)
 		}
 		rbd_opts->queue_depth = intval;
 		break;
+	case Opt_lock_timeout:
+		/* 0 is "wait forever" (i.e. infinite timeout) */
+		if (intval < 0 || intval > INT_MAX / 1000) {
+			pr_err("lock_timeout out of range\n");
+			return -EINVAL;
+		}
+		rbd_opts->lock_timeout = msecs_to_jiffies(intval * 1000);
+		break;
 	case Opt_read_only:
 		rbd_opts->read_only = true;
 		break;
@@ -3494,8 +3506,9 @@ static int rbd_obj_method_sync(struct rbd_device *rbd_dev,
 /*
  * lock_rwsem must be held for read
  */
-static void rbd_wait_state_locked(struct rbd_device *rbd_dev)
+static int rbd_wait_state_locked(struct rbd_device *rbd_dev)
 {
+	unsigned long timeout = 0;
 	DEFINE_WAIT(wait);
 
 	do {
@@ -3508,12 +3521,19 @@ static void rbd_wait_state_locked(struct rbd_device *rbd_dev)
 		prepare_to_wait_exclusive(&rbd_dev->lock_waitq, &wait,
 					  TASK_UNINTERRUPTIBLE);
 		up_read(&rbd_dev->lock_rwsem);
-		schedule();
+		timeout = schedule_timeout(ceph_timeout_jiffies(
+						rbd_dev->opts->lock_timeout));
 		down_read(&rbd_dev->lock_rwsem);
+		if (!timeout) {
+			finish_wait(&rbd_dev->lock_waitq, &wait);
+			rbd_warn(rbd_dev, "timed out in waiting for lock");
+			return -ETIMEDOUT;
+		}
 	} while (rbd_dev->lock_state != RBD_LOCK_STATE_LOCKED &&
 		 !test_bit(RBD_DEV_FLAG_BLACKLISTED, &rbd_dev->flags));
 
 	finish_wait(&rbd_dev->lock_waitq, &wait);
+	return 0;
 }
 
 static void rbd_queue_workfn(struct work_struct *work)
@@ -3606,7 +3626,9 @@ static void rbd_queue_workfn(struct work_struct *work)
 			result = -EROFS;
 			goto err_unlock;
 		}
-		rbd_wait_state_locked(rbd_dev);
+		result = rbd_wait_state_locked(rbd_dev);
+		if (result)
+			goto err_unlock;
 	}
 	if (test_bit(RBD_DEV_FLAG_BLACKLISTED, &rbd_dev->flags)) {
 		result = -EBLACKLISTED;
@@ -5139,6 +5161,7 @@ static int rbd_add_parse_args(const char *buf,
 
 	rbd_opts->read_only = RBD_READ_ONLY_DEFAULT;
 	rbd_opts->queue_depth = RBD_QUEUE_DEPTH_DEFAULT;
+	rbd_opts->lock_timeout = RBD_LOCK_TIMEOUT_DEFAULT;
 	rbd_opts->lock_on_read = RBD_LOCK_ON_READ_DEFAULT;
 	rbd_opts->exclusive = RBD_EXCLUSIVE_DEFAULT;
 
@@ -5209,6 +5232,8 @@ static void rbd_dev_image_unlock(struct rbd_device *rbd_dev)
 
 static int rbd_add_acquire_lock(struct rbd_device *rbd_dev)
 {
+	int ret = 0;
+
 	if (!(rbd_dev->header.features & RBD_FEATURE_EXCLUSIVE_LOCK)) {
 		rbd_warn(rbd_dev, "exclusive-lock feature is not enabled");
 		return -EINVAL;
@@ -5216,8 +5241,10 @@ static int rbd_add_acquire_lock(struct rbd_device *rbd_dev)
 
 	/* FIXME: "rbd map --exclusive" should be in interruptible */
 	down_read(&rbd_dev->lock_rwsem);
-	rbd_wait_state_locked(rbd_dev);
+	ret = rbd_wait_state_locked(rbd_dev);
 	up_read(&rbd_dev->lock_rwsem);
+	if (ret)
+		return ret;
 	if (test_bit(RBD_DEV_FLAG_BLACKLISTED, &rbd_dev->flags)) {
 		rbd_warn(rbd_dev, "failed to acquire exclusive lock");
 		return -EROFS;
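
Note on the range check in parse_rbd_opts_token(): rejecting
intval > INT_MAX / 1000 before multiplying keeps intval * 1000 from
overflowing a signed int on its way into msecs_to_jiffies(). A minimal
userspace sketch of the same guard (illustrative only, not kernel code;
the helper name is made up):

#include <limits.h>
#include <stdio.h>

/*
 * Illustrative userspace version of the patch's range test: reject
 * negative values and values whose conversion to milliseconds would
 * overflow a signed int.
 */
static int lock_timeout_to_ms(int seconds, long *ms)
{
	if (seconds < 0 || seconds > INT_MAX / 1000)
		return -1;			/* like -EINVAL in the patch */
	*ms = (long)seconds * 1000;		/* cannot exceed INT_MAX here */
	return 0;
}

int main(void)
{
	long ms;

	if (lock_timeout_to_ms(30, &ms) == 0)
		printf("30 s -> %ld ms\n", ms);	/* prints 30000 */
	if (lock_timeout_to_ms(INT_MAX / 1000 + 1, &ms) != 0)
		printf("rejected: would overflow int\n");
	return 0;
}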