From patchwork Tue Apr 1 14:22:15 2014
X-Patchwork-Submitter: Guangliang Zhao
X-Patchwork-Id: 3923631
From: Guangliang Zhao
To: ceph-devel@vger.kernel.org
Cc: josh.durgin@inktank.com, elder@ieee.org
Subject: [PATCH 1/3 v3] rbd: skip the copyup when writing an entire object
Date: Tue, 1 Apr 2014 22:22:15 +0800
Message-Id: <1396362136-8722-1-git-send-email-lucienchao@gmail.com>
X-Mailer: git-send-email 1.7.9.5
List-ID: X-Mailing-List: ceph-devel@vger.kernel.org

A layered write needs to copy up the parent's content first, but a write
that covers an entire object overwrites that data anyway, so the copyup
can be skipped in that case.

Signed-off-by: Guangliang Zhao
---
 drivers/block/rbd.c | 49 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 34 insertions(+), 15 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index b365e0d..e425be7 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -2624,34 +2624,53 @@ out:
 	return ret;
 }
 
-static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request)
+static bool rbd_img_obj_request_simple(struct rbd_obj_request *obj_request)
 {
 	struct rbd_img_request *img_request;
 	struct rbd_device *rbd_dev;
-	bool known;
+	u64 obj_size;
 
 	rbd_assert(obj_request_img_data_test(obj_request));
 	img_request = obj_request->img_request;
 	rbd_assert(img_request);
 	rbd_dev = img_request->rbd_dev;
+	obj_size = (u64)1 << rbd_dev->header.obj_order;
+
+	/* Read requests don't need special handling */
+	if (!img_request_write_test(img_request))
+		return true;
+
+	/* Non-layered writes are simple requests */
+	if (!img_request_layered_test(img_request))
+		return true;
 
 	/*
-	 * Only writes to layered images need special handling.
-	 * Reads and non-layered writes are simple object requests.
 	 * Layered writes that start beyond the end of the overlap
-	 * with the parent have no parent data, so they too are
-	 * simple object requests.  Finally, if the target object is
-	 * known to already exist, its parent data has already been
-	 * copied, so a write to the object can also be handled as a
-	 * simple object request.
+	 * with the parent have no parent data, so they are simple
+	 * object requests.
 	 */
-	if (!img_request_write_test(img_request) ||
-	    !img_request_layered_test(img_request) ||
-	    rbd_dev->parent_overlap <= obj_request->img_offset ||
-	    ((known = obj_request_known_test(obj_request)) &&
-	     obj_request_exists_test(obj_request))) {
+	if (rbd_dev->parent_overlap <= obj_request->img_offset)
+		return true;
+
+	/*
+	 * If the obj_request starts on an object boundary and its
+	 * length equals the object size, it doesn't need a copyup:
+	 * the request will overwrite the entire object anyway.
+	 */
+	if (!obj_request->offset && obj_request->length == obj_size)
+		return true;
+
+	/*
+	 * If the target object is known to already exist, its parent
+	 * data has already been copied, so a write to the object can
+	 * also be handled as a simple object request.
+	 */
+	if (obj_request_known_test(obj_request) &&
+	    obj_request_exists_test(obj_request))
+		return true;
+
+	return false;
+}
+
+static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request)
+{
+	if (rbd_img_obj_request_simple(obj_request)) {
 		struct rbd_device *rbd_dev;
 		struct ceph_osd_client *osdc;
 
@@ -2667,7 +2686,7 @@ static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request)
 	 * start by reading the data for the full target object from
 	 * the parent so we can use it for a copyup to the target.
 	 */
-	if (known)
+	if (obj_request_known_test(obj_request))
 		return rbd_img_obj_parent_read_full(obj_request);
 
 	/* We don't know whether the target exists.  Go find out. */