From patchwork Thu Apr 9 15:02:16 2015
X-Patchwork-Submitter: Peng Tao
X-Patchwork-Id: 6188481
From: Peng Tao
To: linux-nfs@vger.kernel.org
Cc: Trond Myklebust, Peng Tao,
Subject: [PATCH v2 1/2] nfs: fix DIO good bytes calculation
Date: Thu, 9 Apr 2015 23:02:16 +0800
Message-Id: <1428591737-19071-1-git-send-email-tao.peng@primarydata.com>
X-Mailer: git-send-email 1.9.1

For a direct read whose IO size is larger than rsize, we split it into
several READ requests, and nfs_direct_good_bytes() would count the
completed bytes incorrectly by eating the last zero-count reply. Fix it
by handling the mirrored and non-mirrored cases separately, so that the
minimum-agreed-count logic is only applied to mirrored writes.

This fixes commit 5fadeb47 ("nfs: count DIO good bytes correctly with
mirroring").
Reported-by: Jean Spector
Cc: # v3.19+
Signed-off-by: Peng Tao
---
 fs/nfs/direct.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index e907c8c..5e451a7 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -131,20 +131,25 @@ nfs_direct_good_bytes(struct nfs_direct_req *dreq, struct nfs_pgio_header *hdr)
 
 	WARN_ON_ONCE(hdr->pgio_mirror_idx >= dreq->mirror_count);
 
-	count = dreq->mirrors[hdr->pgio_mirror_idx].count;
-	if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
-		count = hdr->io_start + hdr->good_bytes - dreq->io_start;
-		dreq->mirrors[hdr->pgio_mirror_idx].count = count;
-	}
-
-	/* update the dreq->count by finding the minimum agreed count from all
-	 * mirrors */
-	count = dreq->mirrors[0].count;
+	if (dreq->mirror_count == 1) {
+		dreq->mirrors[hdr->pgio_mirror_idx].count += hdr->good_bytes;
+		dreq->count += hdr->good_bytes;
+	} else {
+		/* mirrored writes */
+		count = dreq->mirrors[hdr->pgio_mirror_idx].count;
+		if (count + dreq->io_start < hdr->io_start + hdr->good_bytes) {
+			count = hdr->io_start + hdr->good_bytes - dreq->io_start;
+			dreq->mirrors[hdr->pgio_mirror_idx].count = count;
+		}
+		/* update the dreq->count by finding the minimum agreed count
+		 * from all mirrors */
+		count = dreq->mirrors[0].count;
 
-	for (i = 1; i < dreq->mirror_count; i++)
-		count = min(count, dreq->mirrors[i].count);
+		for (i = 1; i < dreq->mirror_count; i++)
+			count = min(count, dreq->mirrors[i].count);
 
-	dreq->count = count;
+		dreq->count = count;
+	}
 }
 
 /*