From patchwork Thu Mar 13 03:21:34 2014
X-Patchwork-Submitter: Guangliang Zhao
X-Patchwork-Id: 3821811
From: Guangliang Zhao
To: ceph-devel@vger.kernel.org
Cc: elder@ieee.org
Subject: [PATCH 1/3 v2] rbd: skip the copyup when writing an entire object
Date: Thu, 13 Mar 2014 11:21:34 +0800
Message-Id: <1394680896-8554-1-git-send-email-lucienchao@gmail.com>
X-Mailer: git-send-email 1.7.9.5

A layered write needs to copy up the parent's content first, but a
write that covers an entire object would overwrite that data anyway,
so the copyup can be skipped in that case.

Signed-off-by: Guangliang Zhao
Reviewed-by: Alex Elder
Reviewed-by: Josh Durgin
---
 drivers/block/rbd.c | 49 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 34 insertions(+), 15 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index b365e0d..2d48858 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -2624,34 +2624,53 @@ out:
 	return ret;
 }
 
-static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request)
+static bool rbd_obj_request_simple(struct rbd_obj_request *obj_request)
 {
 	struct rbd_img_request *img_request;
 	struct rbd_device *rbd_dev;
-	bool known;
+	u64 obj_size;
 
 	rbd_assert(obj_request_img_data_test(obj_request));
 	img_request = obj_request->img_request;
 	rbd_assert(img_request);
 	rbd_dev = img_request->rbd_dev;
+	obj_size = (u64) 1 << rbd_dev->header.obj_order;
+	/* Read requests don't need special handling */
+	if (!img_request_write_test(img_request))
+		return true;
+	/* Non-layered writes are simple requests */
+	if (!img_request_layered_test(img_request))
+		return true;
 
 	/*
-	 * Only writes to layered images need special handling.
-	 * Reads and non-layered writes are simple object requests.
 	 * Layered writes that start beyond the end of the overlap
-	 * with the parent have no parent data, so they too are
-	 * simple object requests.  Finally, if the target object is
-	 * known to already exist, its parent data has already been
-	 * copied, so a write to the object can also be handled as a
-	 * simple object request.
+	 * with the parent have no parent data, so they are simple
+	 * object requests.
 	 */
-	if (!img_request_write_test(img_request) ||
-	    !img_request_layered_test(img_request) ||
-	    rbd_dev->parent_overlap <= obj_request->img_offset ||
-	    ((known = obj_request_known_test(obj_request)) &&
-	     obj_request_exists_test(obj_request))) {
+	if (rbd_dev->parent_overlap <= obj_request->img_offset)
+		return true;
+	/*
+	 * If the obj_request starts on an object boundary and its
+	 * length equals the size of an object, it doesn't need copyup,
+	 * because the obj_request will overwrite the whole object
+	 * anyway.
+	 */
+	if ((!obj_request->offset) && (obj_request->length == obj_size))
+		return true;
+	/*
+	 * If the target object is known to already exist, its parent
+	 * data has already been copied, so a write to the object can
+	 * also be handled as a simple object request.
+	 */
+	if (obj_request_known_test(obj_request) &&
+	    obj_request_exists_test(obj_request))
+		return true;
+	return false;
+}
+static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request)
+{
+	if (rbd_obj_request_simple(obj_request)) {
 		struct rbd_device *rbd_dev;
 		struct ceph_osd_client *osdc;
 
@@ -2667,7 +2686,7 @@ static int rbd_img_obj_request_submit(struct rbd_obj_request *obj_request)
 	 * start by reading the data for the full target object from
 	 * the parent so we can use it for a copyup to the target.
 	 */
-	if (known)
+	if (obj_request_known_test(obj_request))
 		return rbd_img_obj_parent_read_full(obj_request);
 
 	/* We don't know whether the target exists.  Go find out. */