From patchwork Fri Feb 19 12:45:14 2021
X-Patchwork-Submitter: SelvaKumar S
X-Patchwork-Id: 12096639
X-Patchwork-Delegate: snitzer@redhat.com
From: SelvaKumar S
To: linux-nvme@lists.infradead.org
Date: Fri, 19 Feb 2021 18:15:14 +0530
Message-Id: <20210219124517.79359-2-selvakuma.s1@samsung.com>
In-Reply-To: <20210219124517.79359-1-selvakuma.s1@samsung.com>
References: <20210219124517.79359-1-selvakuma.s1@samsung.com>
Cc: axboe@kernel.dk, damien.lemoal@wdc.com, kch@kernel.org, SelvaKumar S,
 sagi@grimberg.me, snitzer@redhat.com, selvajove@gmail.com,
 linux-kernel@vger.kernel.org, nj.shetty@samsung.com,
 linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 dm-devel@redhat.com, joshi.k@samsung.com, javier.gonz@samsung.com,
 kbusch@kernel.org, joshiiitr@gmail.com, hch@lst.de
Subject: [dm-devel] [RFC PATCH v5 1/4] block: make bio_map_kern() non static
List-Id: device-mapper development

Make bio_map_kern() non-static, so that copy offload emulation can use it
to add vmalloc'ed memory to a bio.

Signed-off-by: SelvaKumar S
Signed-off-by: Chaitanya Kulkarni
---
 block/blk-map.c        | 2 +-
 include/linux/blkdev.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 21630dccac62..17381b1643b8 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -378,7 +378,7 @@ static void bio_map_kern_endio(struct bio *bio)
  * Map the kernel address into a bio suitable for io to a block
  * device. Returns an error pointer in case of error.
  */
-static struct bio *bio_map_kern(struct request_queue *q, void *data,
+struct bio *bio_map_kern(struct request_queue *q, void *data,
 		unsigned int len, gfp_t gfp_mask)
 {
 	unsigned long kaddr = (unsigned long)data;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f94ee3089e01..699ace6b25ff 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -944,6 +944,8 @@ extern int blk_rq_map_user(struct request_queue *, struct request *,
 			   struct rq_map_data *, void __user *, unsigned long, gfp_t);
 extern int blk_rq_unmap_user(struct bio *);
+struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
+		gfp_t gfp_mask);
 extern int blk_rq_map_kern(struct request_queue *, struct request *, void *,
 			   unsigned int, gfp_t);
 extern int blk_rq_map_user_iov(struct request_queue *, struct request *,
 			       struct rq_map_data *, const struct iov_iter *,
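For reference, here is a minimal sketch of how a caller can use the
now-exported bio_map_kern() to do I/O on a kernel buffer. It mirrors what the
copy-emulation path in patch 2/4 does; the helper name below is hypothetical
and not part of this series.

/*
 * Hypothetical helper (not part of this patch): read 'len' bytes from
 * 'bdev' starting at 'sector' into a kvmalloc'ed buffer by mapping the
 * buffer with bio_map_kern().
 */
static int example_read_kbuf(struct block_device *bdev, sector_t sector,
			     void *buf, unsigned int len, gfp_t gfp_mask)
{
	struct request_queue *q = bdev_get_queue(bdev);
	struct bio *bio;
	int ret;

	bio = bio_map_kern(q, buf, len, gfp_mask);
	if (IS_ERR(bio))
		return PTR_ERR(bio);

	bio_set_dev(bio, bdev);
	bio->bi_iter.bi_sector = sector;
	bio->bi_opf = REQ_OP_READ;

	ret = submit_bio_wait(bio);
	bio_put(bio);

	/*
	 * bio_map_kern() reuses bi_private/bi_end_io internally, so a
	 * vmalloc'ed buffer must be invalidated by the caller after a read.
	 */
	if (is_vmalloc_addr(buf))
		invalidate_kernel_vmap_range(buf, len);

	return ret;
}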
From patchwork Fri Feb 19 12:45:15 2021
X-Patchwork-Submitter: SelvaKumar S
X-Patchwork-Id: 12096641
X-Patchwork-Delegate: snitzer@redhat.com
From: SelvaKumar S
To: linux-nvme@lists.infradead.org
Date: Fri, 19 Feb 2021 18:15:15 +0530
Message-Id: <20210219124517.79359-3-selvakuma.s1@samsung.com>
In-Reply-To: <20210219124517.79359-1-selvakuma.s1@samsung.com>
References: <20210219124517.79359-1-selvakuma.s1@samsung.com>
Cc: axboe@kernel.dk, damien.lemoal@wdc.com, kch@kernel.org, SelvaKumar S,
 sagi@grimberg.me, snitzer@redhat.com, selvajove@gmail.com,
 linux-kernel@vger.kernel.org, nj.shetty@samsung.com,
 linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 dm-devel@redhat.com, joshi.k@samsung.com, javier.gonz@samsung.com,
 kbusch@kernel.org, joshiiitr@gmail.com, hch@lst.de
Subject: [dm-devel] [RFC PATCH v5 2/4] block: add simple copy support
List-Id: device-mapper development

Add a new BLKCOPY ioctl that offloads copying of one or more source ranges
to a destination in the device.

It accepts a 'copy_range' structure that contains the destination (in
sectors), the number of source ranges, and a pointer to an array of source
ranges. Each source range is described by a 'range_entry' that holds the
start and length of the range (in sectors).

Introduce REQ_OP_COPY, a no-merge copy offload operation. Create a bio with
the control information as payload and submit it to the device.
REQ_OP_COPY (19) is a write op and takes zone_write_lock when submitted to a
zoned device.

If the device doesn't support copy, or copy offload is disabled, the copy
operation is emulated by default. Callers can opt out of the emulation by
passing the 'BLKDEV_COPY_NOEMULATION' flag.

Copy-emulation is implemented by allocating memory of the total copy size.
The source ranges are read into this memory by chaining a bio per source
range, submitting them asynchronously, and waiting for completion on the
last bio. After the data has been read, it is written to the destination.

bio_map_kern() is used to allocate the bio and add the pages of the copy
buffer to it. As bio->bi_private and bio->bi_end_io are needed for chaining
and get overwritten, invalidate_kernel_vmap_range() for the read is called
in the caller.

Introduce queue limits for simple copy and related helper functions. Expose
the device limits as sysfs entries:
 - copy_offload
 - max_copy_sectors
 - max_copy_range_sectors
 - max_copy_nr_ranges

copy_offload (= 0) is disabled by default and needs to be enabled explicitly
to use copy offload. max_copy_sectors = 0 indicates that the device doesn't
support native copy.

Native copy offload is not supported for stacked devices; copy there is done
via emulation.

Signed-off-by: SelvaKumar S
Signed-off-by: Kanchan Joshi
Signed-off-by: Nitesh Shetty
Signed-off-by: Javier González
Signed-off-by: Chaitanya Kulkarni
---
 block/blk-core.c          | 102 ++++++++++++++++--
 block/blk-lib.c           | 222 ++++++++++++++++++++++++++++++++++++++
 block/blk-merge.c         |   2 +
 block/blk-settings.c      |  10 ++
 block/blk-sysfs.c         |  47 ++++++++
 block/blk-zoned.c         |   1 +
 block/bounce.c            |   1 +
 block/ioctl.c             |  33 ++++++
 include/linux/bio.h       |   1 +
 include/linux/blk_types.h |  14 +++
 include/linux/blkdev.h    |  15 +++
 include/uapi/linux/fs.h   |  13 +++
 12 files changed, 453 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7663a9b94b80..23e646e5ae43 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -720,6 +720,17 @@ static noinline int should_fail_bio(struct bio *bio)
 }
 ALLOW_ERROR_INJECTION(should_fail_bio, ERRNO);
 
+static inline int bio_check_copy_eod(struct bio *bio, sector_t start,
+		sector_t nr_sectors, sector_t max_sect)
+{
+	if (nr_sectors && max_sect &&
+	    (nr_sectors > max_sect || start > max_sect - nr_sectors)) {
+		handle_bad_sector(bio, max_sect);
+		return -EIO;
+	}
+	return 0;
+}
+
 /*
  * Check whether this bio extends beyond the end of the device or partition.
  * This may well happen - the kernel calls bread() without checking the size of
@@ -738,6 +749,75 @@ static inline int bio_check_eod(struct bio *bio, sector_t maxsector)
 	return 0;
 }
 
+/*
+ * Check for copy limits and remap source ranges if needed.
+ */ +static int blk_check_copy(struct bio *bio) +{ + struct blk_copy_payload *payload = bio_data(bio); + struct request_queue *q = bio->bi_disk->queue; + sector_t max_sect, start_sect, copy_size = 0; + sector_t src_max_sect, src_start_sect; + struct block_device *bd_part; + int i, ret = -EIO; + + rcu_read_lock(); + + bd_part = __disk_get_part(bio->bi_disk, bio->bi_partno); + if (unlikely(!bd_part)) { + rcu_read_unlock(); + goto out; + } + + max_sect = bdev_nr_sectors(bd_part); + start_sect = bd_part->bd_start_sect; + + src_max_sect = bdev_nr_sectors(payload->src_bdev); + src_start_sect = payload->src_bdev->bd_start_sect; + + if (unlikely(should_fail_request(bd_part, bio->bi_iter.bi_size))) + goto out; + + if (unlikely(bio_check_ro(bio, bd_part))) + goto out; + + rcu_read_unlock(); + + /* cannot handle copy crossing nr_ranges limit */ + if (payload->copy_nr_ranges > q->limits.max_copy_nr_ranges) + goto out; + + for (i = 0; i < payload->copy_nr_ranges; i++) { + ret = bio_check_copy_eod(bio, payload->range[i].src, + payload->range[i].len, src_max_sect); + if (unlikely(ret)) + goto out; + + /* single source range length limit */ + if (payload->range[i].len > q->limits.max_copy_range_sectors) + goto out; + + payload->range[i].src += src_start_sect; + copy_size += payload->range[i].len; + } + + /* check if copy length crosses eod */ + ret = bio_check_copy_eod(bio, bio->bi_iter.bi_sector, + copy_size, max_sect); + if (unlikely(ret)) + goto out; + + /* cannot handle copy more than copy limits */ + if (copy_size > q->limits.max_copy_sectors) + goto out; + + bio->bi_iter.bi_sector += start_sect; + bio->bi_partno = 0; + ret = 0; +out: + return ret; +} + /* * Remap block n of partition p to block n+start(p) of the disk. */ @@ -827,14 +907,16 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio) if (should_fail_bio(bio)) goto end_io; - if (bio->bi_partno) { - if (unlikely(blk_partition_remap(bio))) - goto end_io; - } else { - if (unlikely(bio_check_ro(bio, bio->bi_disk->part0))) - goto end_io; - if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk)))) - goto end_io; + if (likely(!op_is_copy(bio->bi_opf))) { + if (bio->bi_partno) { + if (unlikely(blk_partition_remap(bio))) + goto end_io; + } else { + if (unlikely(bio_check_ro(bio, bio->bi_disk->part0))) + goto end_io; + if (unlikely(bio_check_eod(bio, get_capacity(bio->bi_disk)))) + goto end_io; + } } /* @@ -858,6 +940,10 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio) if (!blk_queue_discard(q)) goto not_supported; break; + case REQ_OP_COPY: + if (unlikely(blk_check_copy(bio))) + goto end_io; + break; case REQ_OP_SECURE_ERASE: if (!blk_queue_secure_erase(q)) goto not_supported; diff --git a/block/blk-lib.c b/block/blk-lib.c index 752f9c722062..97ba58d8d9a1 100644 --- a/block/blk-lib.c +++ b/block/blk-lib.c @@ -150,6 +150,228 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector, } EXPORT_SYMBOL(blkdev_issue_discard); +int blk_copy_offload(struct block_device *dest_bdev, struct blk_copy_payload *payload, + sector_t dest, gfp_t gfp_mask) +{ + struct request_queue *q = bdev_get_queue(dest_bdev); + struct bio *bio; + int ret, payload_size; + + payload_size = struct_size(payload, range, payload->copy_nr_ranges); + bio = bio_map_kern(q, payload, payload_size, gfp_mask); + if (IS_ERR(bio)) { + ret = PTR_ERR(bio); + goto err; + } + + bio->bi_iter.bi_sector = dest; + bio->bi_opf = REQ_OP_COPY | REQ_NOMERGE; + bio_set_dev(bio, dest_bdev); + + ret = submit_bio_wait(bio); +err: + bio_put(bio); + return ret; 
+} + +int blk_read_to_buf(struct block_device *src_bdev, struct blk_copy_payload *payload, + gfp_t gfp_mask, sector_t copy_size, void **buf_p) +{ + struct request_queue *q = bdev_get_queue(src_bdev); + struct bio *bio, *parent = NULL; + void *buf = NULL; + int copy_len = copy_size << SECTOR_SHIFT; + int i, nr_srcs, ret, cur_size, t_len = 0; + bool is_vmalloc; + + nr_srcs = payload->copy_nr_ranges; + + buf = kvmalloc(copy_len, gfp_mask); + if (!buf) + return -ENOMEM; + is_vmalloc = is_vmalloc_addr(buf); + + for (i = 0; i < nr_srcs; i++) { + cur_size = payload->range[i].len << SECTOR_SHIFT; + + bio = bio_map_kern(q, buf + t_len, cur_size, gfp_mask); + if (IS_ERR(bio)) { + ret = PTR_ERR(bio); + goto out; + } + + bio->bi_iter.bi_sector = payload->range[i].src; + bio->bi_opf = REQ_OP_READ; + bio_set_dev(bio, src_bdev); + bio->bi_end_io = NULL; + bio->bi_private = NULL; + + if (parent) { + bio_chain(parent, bio); + submit_bio(parent); + } + + parent = bio; + t_len += cur_size; + } + + ret = submit_bio_wait(bio); + bio_put(bio); + if (is_vmalloc) + invalidate_kernel_vmap_range(buf, copy_len); + if (ret) + goto out; + + *buf_p = buf; + return 0; +out: + kvfree(buf); + return ret; +} + +int blk_write_from_buf(struct block_device *dest_bdev, void *buf, sector_t dest, + sector_t copy_size, gfp_t gfp_mask) +{ + struct request_queue *q = bdev_get_queue(dest_bdev); + struct bio *bio; + int ret, copy_len = copy_size << SECTOR_SHIFT; + + bio = bio_map_kern(q, buf, copy_len, gfp_mask); + if (IS_ERR(bio)) { + ret = PTR_ERR(bio); + goto out; + } + bio_set_dev(bio, dest_bdev); + bio->bi_opf = REQ_OP_WRITE; + bio->bi_iter.bi_sector = dest; + + bio->bi_end_io = NULL; + ret = submit_bio_wait(bio); + bio_put(bio); +out: + return ret; +} + +int blk_prepare_payload(struct block_device *src_bdev, int nr_srcs, struct range_entry *rlist, + gfp_t gfp_mask, struct blk_copy_payload **payload_p, sector_t *copy_size) +{ + + struct request_queue *q = bdev_get_queue(src_bdev); + struct blk_copy_payload *payload; + sector_t bs_mask, total_len = 0; + int i, ret, payload_size; + + if (!q) + return -ENXIO; + + if (!nr_srcs) + return -EINVAL; + + if (bdev_read_only(src_bdev)) + return -EPERM; + + bs_mask = (bdev_logical_block_size(src_bdev) >> 9) - 1; + + payload_size = struct_size(payload, range, nr_srcs); + payload = kmalloc(payload_size, gfp_mask); + if (!payload) + return -ENOMEM; + + for (i = 0; i < nr_srcs; i++) { + if (rlist[i].src & bs_mask || rlist[i].len & bs_mask) { + ret = -EINVAL; + goto err; + } + + payload->range[i].src = rlist[i].src; + payload->range[i].len = rlist[i].len; + + total_len += rlist[i].len; + } + + payload->copy_nr_ranges = i; + payload->src_bdev = src_bdev; + *copy_size = total_len; + + *payload_p = payload; + return 0; +err: + kfree(payload); + return ret; +} + +int blk_copy_emulate(struct block_device *src_bdev, struct blk_copy_payload *payload, + struct block_device *dest_bdev, sector_t dest, + sector_t copy_size, gfp_t gfp_mask) +{ + void *buf = NULL; + int ret; + + ret = blk_read_to_buf(src_bdev, payload, gfp_mask, copy_size, &buf); + if (ret) + goto out; + + ret = blk_write_from_buf(dest_bdev, buf, dest, copy_size, gfp_mask); + if (buf) + kvfree(buf); +out: + return ret; +} + +/** + * blkdev_issue_copy - queue a copy + * @src_bdev: source block device + * @nr_srcs: number of source ranges to copy + * @rlist: array of source ranges in sector + * @dest_bdev: destination block device + * @dest: destination in sector + * @gfp_mask: memory allocation flags (for bio_alloc) + * @flags: BLKDEV_COPY_* 
+ *		flags to control behaviour
+ *
+ * Description:
+ *	Copy array of source ranges from source block device to
+ *	destination block device. All source ranges must belong to the
+ *	same bdev and the length of a source range cannot be zero.
+ */
+
+int blkdev_issue_copy(struct block_device *src_bdev, int nr_srcs,
+		struct range_entry *src_rlist, struct block_device *dest_bdev,
+		sector_t dest, gfp_t gfp_mask, int flags)
+{
+	struct request_queue *q = bdev_get_queue(src_bdev);
+	struct request_queue *dest_q = bdev_get_queue(dest_bdev);
+	struct blk_copy_payload *payload;
+	sector_t bs_mask, copy_size;
+	int ret;
+
+	ret = blk_prepare_payload(src_bdev, nr_srcs, src_rlist, gfp_mask,
+			&payload, &copy_size);
+	if (ret)
+		return ret;
+
+	bs_mask = (bdev_logical_block_size(dest_bdev) >> 9) - 1;
+	if (dest & bs_mask) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (q == dest_q && q->limits.copy_offload) {
+		ret = blk_copy_offload(src_bdev, payload, dest, gfp_mask);
+		if (ret)
+			goto out;
+	} else if (flags & BLKDEV_COPY_NOEMULATION) {
+		ret = -EIO;
+		goto out;
+	} else
+		ret = blk_copy_emulate(src_bdev, payload, dest_bdev, dest,
+				copy_size, gfp_mask);
+
+out:
+	kvfree(payload);
+	return ret;
+}
+EXPORT_SYMBOL(blkdev_issue_copy);
+
 /**
  * __blkdev_issue_write_same - generate number of bios with same page
  * @bdev:	target blockdev
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 808768f6b174..4e04f24e13c1 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -309,6 +309,8 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
 	struct bio *split = NULL;
 
 	switch (bio_op(*bio)) {
+	case REQ_OP_COPY:
+		break;
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 		split = blk_bio_discard_split(q, *bio, &q->bio_split, nr_segs);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 43990b1d148b..93c15ba45a69 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -60,6 +60,10 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->io_opt = 0;
 	lim->misaligned = 0;
 	lim->zoned = BLK_ZONED_NONE;
+	lim->copy_offload = 0;
+	lim->max_copy_sectors = 0;
+	lim->max_copy_nr_ranges = 0;
+	lim->max_copy_range_sectors = 0;
 }
 EXPORT_SYMBOL(blk_set_default_limits);
 
@@ -565,6 +569,12 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	if (b->chunk_sectors)
 		t->chunk_sectors = gcd(t->chunk_sectors, b->chunk_sectors);
 
+	/* simple copy not supported in stacked devices */
+	t->copy_offload = 0;
+	t->max_copy_sectors = 0;
+	t->max_copy_range_sectors = 0;
+	t->max_copy_nr_ranges = 0;
+
 	/* Physical block size a multiple of the logical block size? */
 	if (t->physical_block_size & (t->logical_block_size - 1)) {
 		t->physical_block_size = t->logical_block_size;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index b513f1683af0..625a72541263 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -166,6 +166,44 @@ static ssize_t queue_discard_granularity_show(struct request_queue *q, char *pag
 	return queue_var_show(q->limits.discard_granularity, page);
 }
 
+static ssize_t queue_copy_offload_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->limits.copy_offload, page);
+}
+
+static ssize_t queue_copy_offload_store(struct request_queue *q,
+				       const char *page, size_t count)
+{
+	unsigned long copy_offload;
+	ssize_t ret = queue_var_store(&copy_offload, page, count);
+
+	if (ret < 0)
+		return ret;
+
+	if (copy_offload && q->limits.max_copy_sectors == 0)
+		return -EINVAL;
+
+	q->limits.copy_offload = copy_offload;
+	return ret;
+}
+
+static ssize_t queue_max_copy_sectors_show(struct request_queue *q, char *page)
+{
+	return queue_var_show(q->limits.max_copy_sectors, page);
+}
+
+static ssize_t queue_max_copy_range_sectors_show(struct request_queue *q,
+		char *page)
+{
+	return queue_var_show(q->limits.max_copy_range_sectors, page);
+}
+
+static ssize_t queue_max_copy_nr_ranges_show(struct request_queue *q,
+		char *page)
+{
+	return queue_var_show(q->limits.max_copy_nr_ranges, page);
+}
+
 static ssize_t queue_discard_max_hw_show(struct request_queue *q, char *page)
 {
 
@@ -591,6 +629,11 @@ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
 QUEUE_RO_ENTRY(queue_max_open_zones, "max_open_zones");
 QUEUE_RO_ENTRY(queue_max_active_zones, "max_active_zones");
 
+QUEUE_RW_ENTRY(queue_copy_offload, "copy_offload");
+QUEUE_RO_ENTRY(queue_max_copy_sectors, "max_copy_sectors");
+QUEUE_RO_ENTRY(queue_max_copy_range_sectors, "max_copy_range_sectors");
+QUEUE_RO_ENTRY(queue_max_copy_nr_ranges, "max_copy_nr_ranges");
+
 QUEUE_RW_ENTRY(queue_nomerges, "nomerges");
 QUEUE_RW_ENTRY(queue_rq_affinity, "rq_affinity");
 QUEUE_RW_ENTRY(queue_poll, "io_poll");
@@ -636,6 +679,10 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_copy_offload_entry.attr,
+	&queue_max_copy_sectors_entry.attr,
+	&queue_max_copy_range_sectors_entry.attr,
+	&queue_max_copy_nr_ranges_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 7a68b6e4300c..02069178d51e 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -75,6 +75,7 @@ bool blk_req_needs_zone_write_lock(struct request *rq)
 	case REQ_OP_WRITE_ZEROES:
 	case REQ_OP_WRITE_SAME:
 	case REQ_OP_WRITE:
+	case REQ_OP_COPY:
 		return blk_rq_zone_is_seq(rq);
 	default:
 		return false;
diff --git a/block/bounce.c b/block/bounce.c
index d3f51acd6e3b..5e052afe8691 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -254,6 +254,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
 
 	switch (bio_op(bio)) {
+	case REQ_OP_COPY:
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
diff --git a/block/ioctl.c b/block/ioctl.c
index d61d652078f4..0e52181657a4 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -133,6 +133,37 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
 			GFP_KERNEL, flags);
 }
 
+static int blk_ioctl_copy(struct block_device *bdev, fmode_t mode,
+		unsigned long arg, unsigned long flags)
+{
+	struct
copy_range crange; + struct range_entry *rlist; + int ret; + + if (!(mode & FMODE_WRITE)) + return -EBADF; + + if (copy_from_user(&crange, (void __user *)arg, sizeof(crange))) + return -EFAULT; + + rlist = kmalloc_array(crange.nr_range, sizeof(*rlist), + GFP_KERNEL); + if (!rlist) + return -ENOMEM; + + if (copy_from_user(rlist, (void __user *)crange.range_list, + sizeof(*rlist) * crange.nr_range)) { + ret = -EFAULT; + goto out; + } + + ret = blkdev_issue_copy(bdev, crange.nr_range, rlist, bdev, crange.dest, + GFP_KERNEL, flags); +out: + kfree(rlist); + return ret; +} + static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode, unsigned long arg) { @@ -458,6 +489,8 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode, case BLKSECDISCARD: return blk_ioctl_discard(bdev, mode, arg, BLKDEV_DISCARD_SECURE); + case BLKCOPY: + return blk_ioctl_copy(bdev, mode, arg, 0); case BLKZEROOUT: return blk_ioctl_zeroout(bdev, mode, arg); case BLKREPORTZONE: diff --git a/include/linux/bio.h b/include/linux/bio.h index 1edda614f7ce..164313bdfb35 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -71,6 +71,7 @@ static inline bool bio_has_data(struct bio *bio) static inline bool bio_no_advance_iter(const struct bio *bio) { return bio_op(bio) == REQ_OP_DISCARD || + bio_op(bio) == REQ_OP_COPY || bio_op(bio) == REQ_OP_SECURE_ERASE || bio_op(bio) == REQ_OP_WRITE_SAME || bio_op(bio) == REQ_OP_WRITE_ZEROES; diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index 866f74261b3b..5a35c02ac0a8 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -380,6 +380,8 @@ enum req_opf { REQ_OP_ZONE_RESET = 15, /* reset all the zone present on the device */ REQ_OP_ZONE_RESET_ALL = 17, + /* copy ranges within device */ + REQ_OP_COPY = 19, /* SCSI passthrough using struct scsi_request */ REQ_OP_SCSI_IN = 32, @@ -506,6 +508,11 @@ static inline bool op_is_discard(unsigned int op) return (op & REQ_OP_MASK) == REQ_OP_DISCARD; } +static inline bool op_is_copy(unsigned int op) +{ + return (op & REQ_OP_MASK) == REQ_OP_COPY; +} + /* * Check if a bio or request operation is a zone management operation, with * the exception of REQ_OP_ZONE_RESET_ALL which is treated as a special case @@ -565,4 +572,11 @@ struct blk_rq_stat { u64 batch; }; +struct blk_copy_payload { + sector_t dest; + int copy_nr_ranges; + struct block_device *src_bdev; + struct range_entry range[]; +}; + #endif /* __LINUX_BLK_TYPES_H */ diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 699ace6b25ff..2bb4513d4bb8 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -337,10 +337,14 @@ struct queue_limits { unsigned int max_zone_append_sectors; unsigned int discard_granularity; unsigned int discard_alignment; + unsigned int copy_offload; + unsigned int max_copy_sectors; unsigned short max_segments; unsigned short max_integrity_segments; unsigned short max_discard_segments; + unsigned short max_copy_range_sectors; + unsigned short max_copy_nr_ranges; unsigned char misaligned; unsigned char discard_misaligned; @@ -621,6 +625,7 @@ struct request_queue { #define QUEUE_FLAG_RQ_ALLOC_TIME 27 /* record rq->alloc_time_ns */ #define QUEUE_FLAG_HCTX_ACTIVE 28 /* at least one blk-mq hctx is active */ #define QUEUE_FLAG_NOWAIT 29 /* device supports NOWAIT */ +#define QUEUE_FLAG_SIMPLE_COPY 30 /* supports simple copy */ #define QUEUE_FLAG_MQ_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ (1 << QUEUE_FLAG_SAME_COMP) | \ @@ -643,6 +648,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, 
 			struct request_queue *q);
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_discard(q)	test_bit(QUEUE_FLAG_DISCARD, &(q)->queue_flags)
+#define blk_queue_copy(q)	test_bit(QUEUE_FLAG_SIMPLE_COPY, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_secure_erase(q) \
@@ -1069,6 +1075,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 		return min(q->limits.max_discard_sectors, UINT_MAX >> SECTOR_SHIFT);
 
+	if (unlikely(op == REQ_OP_COPY))
+		return q->limits.max_copy_sectors;
+
 	if (unlikely(op == REQ_OP_WRITE_SAME))
 		return q->limits.max_write_same_sectors;
 
@@ -1343,6 +1352,12 @@ extern int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp_mask, int flags,
 		struct bio **biop);
 
+#define BLKDEV_COPY_NOEMULATION	(1 << 0)	/* do not emulate if copy offload not supported */
+
+extern int blkdev_issue_copy(struct block_device *src_bdev, int nr_srcs,
+		struct range_entry *src_rlist, struct block_device *dest_bdev,
+		sector_t dest, gfp_t gfp_mask, int flags);
+
 #define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
 #define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */
 
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index f44eb0a04afd..5cadb176317a 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -64,6 +64,18 @@ struct fstrim_range {
 	__u64 minlen;
 };
 
+struct range_entry {
+	__u64 src;
+	__u64 len;
+};
+
+struct copy_range {
+	__u64 dest;
+	__u64 nr_range;
+	__u64 range_list;
+	__u64 rsvd;
+};
+
 /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */
 #define FILE_DEDUPE_RANGE_SAME		0
 #define FILE_DEDUPE_RANGE_DIFFERS	1
@@ -184,6 +196,7 @@ struct fsxattr {
 #define BLKSECDISCARD _IO(0x12,125)
 #define BLKROTATIONAL _IO(0x12,126)
 #define BLKZEROOUT _IO(0x12,127)
+#define BLKCOPY _IOWR(0x12, 128, struct copy_range)
 /*
  * A jump here: 130-131 are reserved for zoned block devices
  * (see uapi/linux/blkzoned.h)
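To illustrate the new interface from user space, here is a minimal sketch of
a BLKCOPY call using the 'copy_range' and 'range_entry' structures added to
include/uapi/linux/fs.h above. It assumes the uapi additions from this series
are installed; the device path and sector numbers are placeholders, all
values are in 512-byte sectors, and the call overwrites the destination range.

/* Hypothetical userspace example, not part of this patch. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* struct copy_range, struct range_entry, BLKCOPY */

int main(void)
{
	struct range_entry ranges[2] = {
		{ .src = 0,    .len = 8  },	/* sectors 0..7       */
		{ .src = 1024, .len = 16 },	/* sectors 1024..1039 */
	};
	struct copy_range cr = {
		.dest = 4096,					/* destination start sector */
		.nr_range = 2,
		.range_list = (__u64)(unsigned long)ranges,	/* user pointer passed as u64 */
	};
	int fd = open("/dev/nvme0n1", O_RDWR);		/* placeholder device */

	if (fd < 0 || ioctl(fd, BLKCOPY, &cr) < 0) {
		perror("BLKCOPY");
		return 1;
	}
	return 0;
}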
From patchwork Fri Feb 19 12:45:16 2021
X-Patchwork-Submitter: SelvaKumar S
X-Patchwork-Id: 12096669
X-Patchwork-Delegate: snitzer@redhat.com
From: SelvaKumar S
To: linux-nvme@lists.infradead.org
Date: Fri, 19 Feb 2021 18:15:16 +0530
Message-Id: <20210219124517.79359-4-selvakuma.s1@samsung.com>
In-Reply-To: <20210219124517.79359-1-selvakuma.s1@samsung.com>
References: <20210219124517.79359-1-selvakuma.s1@samsung.com>
Cc: axboe@kernel.dk, damien.lemoal@wdc.com, kch@kernel.org, SelvaKumar S,
 sagi@grimberg.me, snitzer@redhat.com, selvajove@gmail.com,
 linux-kernel@vger.kernel.org, nj.shetty@samsung.com,
 linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 dm-devel@redhat.com, joshi.k@samsung.com, javier.gonz@samsung.com,
 kbusch@kernel.org, joshiiitr@gmail.com, hch@lst.de
Subject: [dm-devel] [RFC PATCH v5 3/4] nvme: add simple copy support
List-Id: device-mapper development

Add support for TP 4065a ("Simple Copy Command"), v2020.05.04 ("Ratified").

For devices supporting native simple copy, this implementation accepts the
payload passed from the block layer, converts it into a simple copy command,
and submits it to the device. The device copy limits are set as queue
limits. By default copy_offload is disabled.

End-to-end protection is done by setting both PRINFOR and PRINFOW to 0.

Signed-off-by: SelvaKumar S
Signed-off-by: Kanchan Joshi
Signed-off-by: Nitesh Shetty
Signed-off-by: Javier González
---
 drivers/nvme/host/core.c | 87 ++++++++++++++++++++++++++++++++++++++++
 include/linux/nvme.h     | 43 ++++++++++++++++++--
 2 files changed, 127 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f13eb4ded95f..ba4de2f36cd5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -706,6 +706,63 @@ static inline void nvme_setup_flush(struct nvme_ns *ns,
 	cmnd->common.nsid = cpu_to_le32(ns->head->ns_id);
 }
 
+static inline blk_status_t nvme_setup_copy(struct nvme_ns *ns,
+		struct request *req, struct nvme_command *cmnd)
+{
+	struct nvme_ctrl *ctrl = ns->ctrl;
+	struct nvme_copy_range *range = NULL;
+	struct blk_copy_payload *payload;
+	unsigned short nr_range = 0;
+	u16 control = 0, ssrl;
+	u32 dsmgmt = 0;
+	u64 slba;
+	int i;
+
+	payload = bio_data(req->bio);
+	nr_range = payload->copy_nr_ranges;
+
+	if (req->cmd_flags & REQ_FUA)
+		control |= NVME_RW_FUA;
+
+	if (req->cmd_flags & REQ_FAILFAST_DEV)
+		control |= NVME_RW_LR;
+
+	cmnd->copy.opcode = nvme_cmd_copy;
+	cmnd->copy.nsid = cpu_to_le32(ns->head->ns_id);
+	cmnd->copy.sdlba = cpu_to_le64(blk_rq_pos(req) >> (ns->lba_shift - 9));
+
+	range = kmalloc_array(nr_range, sizeof(*range),
+			GFP_ATOMIC | __GFP_NOWARN);
+	if (!range)
+		return BLK_STS_RESOURCE;
+
+	for (i = 0; i < nr_range; i++) {
+		slba = payload->range[i].src;
+		slba = slba >> (ns->lba_shift - 9);
+
+		ssrl = payload->range[i].len;
+		ssrl = ssrl >> (ns->lba_shift - 9);
+
+		range[i].slba = cpu_to_le64(slba);
+		range[i].nlb = cpu_to_le16(ssrl - 1);
+	}
+
+	cmnd->copy.nr_range = nr_range - 1;
+
+	req->special_vec.bv_page = virt_to_page(range);
+	req->special_vec.bv_offset = offset_in_page(range);
+	req->special_vec.bv_len = sizeof(*range) * nr_range;
+	req->rq_flags |= RQF_SPECIAL_PAYLOAD;
+
+	if (ctrl->nr_streams)
+		nvme_assign_write_stream(ctrl, req, &control, &dsmgmt);
+
+	cmnd->rw.control = cpu_to_le16(control);
+	cmnd->rw.dsmgmt = cpu_to_le32(dsmgmt);
+
+	return BLK_STS_OK;
+}
+
 static
blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req, struct nvme_command *cmnd) { @@ -888,6 +945,9 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, case REQ_OP_DISCARD: ret = nvme_setup_discard(ns, req, cmd); break; + case REQ_OP_COPY: + ret = nvme_setup_copy(ns, req, cmd); + break; case REQ_OP_READ: ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read); break; @@ -1928,6 +1988,31 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns) blk_queue_max_write_zeroes_sectors(queue, UINT_MAX); } +static void nvme_config_copy(struct gendisk *disk, struct nvme_ns *ns, + struct nvme_id_ns *id) +{ + struct nvme_ctrl *ctrl = ns->ctrl; + struct request_queue *queue = disk->queue; + + if (!(ctrl->oncs & NVME_CTRL_ONCS_COPY)) { + queue->limits.copy_offload = 0; + queue->limits.max_copy_sectors = 0; + queue->limits.max_copy_range_sectors = 0; + queue->limits.max_copy_nr_ranges = 0; + blk_queue_flag_clear(QUEUE_FLAG_SIMPLE_COPY, queue); + return; + } + + /* setting copy limits */ + blk_queue_flag_test_and_set(QUEUE_FLAG_SIMPLE_COPY, queue); + queue->limits.copy_offload = 0; + queue->limits.max_copy_sectors = le64_to_cpu(id->mcl) * + (1 << (ns->lba_shift - 9)); + queue->limits.max_copy_range_sectors = le32_to_cpu(id->mssrl) * + (1 << (ns->lba_shift - 9)); + queue->limits.max_copy_nr_ranges = id->msrc + 1; +} + static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns) { u64 max_blocks; @@ -2123,6 +2208,7 @@ static void nvme_update_disk_info(struct gendisk *disk, set_capacity_and_notify(disk, capacity); nvme_config_discard(disk, ns); + nvme_config_copy(disk, ns, id); nvme_config_write_zeroes(disk, ns); if ((id->nsattr & NVME_NS_ATTR_RO) || @@ -4705,6 +4791,7 @@ static inline void _nvme_check_size(void) BUILD_BUG_ON(sizeof(struct nvme_download_firmware) != 64); BUILD_BUG_ON(sizeof(struct nvme_format_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_dsm_cmd) != 64); + BUILD_BUG_ON(sizeof(struct nvme_copy_command) != 64); BUILD_BUG_ON(sizeof(struct nvme_write_zeroes_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_abort_cmd) != 64); BUILD_BUG_ON(sizeof(struct nvme_get_log_page_command) != 64); diff --git a/include/linux/nvme.h b/include/linux/nvme.h index bfed36e342cc..c36e486cbe18 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -295,7 +295,7 @@ struct nvme_id_ctrl { __u8 nvscc; __u8 nwpc; __le16 acwu; - __u8 rsvd534[2]; + __le16 ocfs; __le32 sgls; __le32 mnan; __u8 rsvd544[224]; @@ -320,6 +320,7 @@ enum { NVME_CTRL_ONCS_WRITE_ZEROES = 1 << 3, NVME_CTRL_ONCS_RESERVATIONS = 1 << 5, NVME_CTRL_ONCS_TIMESTAMP = 1 << 6, + NVME_CTRL_ONCS_COPY = 1 << 8, NVME_CTRL_VWC_PRESENT = 1 << 0, NVME_CTRL_OACS_SEC_SUPP = 1 << 0, NVME_CTRL_OACS_DIRECTIVES = 1 << 5, @@ -368,7 +369,10 @@ struct nvme_id_ns { __le16 npdg; __le16 npda; __le16 nows; - __u8 rsvd74[18]; + __le16 mssrl; + __le32 mcl; + __u8 msrc; + __u8 rsvd91[11]; __le32 anagrpid; __u8 rsvd96[3]; __u8 nsattr; @@ -679,6 +683,7 @@ enum nvme_opcode { nvme_cmd_resv_report = 0x0e, nvme_cmd_resv_acquire = 0x11, nvme_cmd_resv_release = 0x15, + nvme_cmd_copy = 0x19, nvme_cmd_zone_mgmt_send = 0x79, nvme_cmd_zone_mgmt_recv = 0x7a, nvme_cmd_zone_append = 0x7d, @@ -697,7 +702,8 @@ enum nvme_opcode { nvme_opcode_name(nvme_cmd_resv_register), \ nvme_opcode_name(nvme_cmd_resv_report), \ nvme_opcode_name(nvme_cmd_resv_acquire), \ - nvme_opcode_name(nvme_cmd_resv_release)) + nvme_opcode_name(nvme_cmd_resv_release), \ + nvme_opcode_name(nvme_cmd_copy)) /* @@ -869,6 +875,36 @@ struct nvme_dsm_range { 
From patchwork Fri Feb 19 12:45:17 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: SelvaKumar S
X-Patchwork-Id: 12096643
X-Patchwork-Delegate: snitzer@redhat.com
From: SelvaKumar S
To: linux-nvme@lists.infradead.org
Date: Fri, 19 Feb 2021 18:15:17 +0530
Message-Id: <20210219124517.79359-5-selvakuma.s1@samsung.com>
In-Reply-To: <20210219124517.79359-1-selvakuma.s1@samsung.com>
MIME-Version: 1.0
References: <20210219124517.79359-1-selvakuma.s1@samsung.com>
Cc: axboe@kernel.dk, damien.lemoal@wdc.com, kch@kernel.org, SelvaKumar S,
 sagi@grimberg.me, snitzer@redhat.com, selvajove@gmail.com,
 linux-kernel@vger.kernel.org, nj.shetty@samsung.com,
 linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 dm-devel@redhat.com, joshi.k@samsung.com, javier.gonz@samsung.com,
 kbusch@kernel.org, joshiiitr@gmail.com, hch@lst.de
Subject: [dm-devel] [RFC PATCH v5 4/4] dm kcopyd: add simple copy offload support
List-Id: device-mapper development

Introduce a copy_jobs list so that kcopyd uses copy offload when it is
natively supported by the underlying device, and falls back to the
original method otherwise.

run_copy_job() calls the block layer copy offload API with
BLKDEV_COPY_NOEMULATION. On successful completion, if only one
destination device was present, the job is queued for completion. If
multiple destinations were present, the completed destinations are zeroed
and the job is pushed to pages_jobs so the other destinations can be
processed. In case of a copy offload failure, the remaining destinations
are processed via the regular copying mechanism.

Signed-off-by: SelvaKumar S
---
 drivers/md/dm-kcopyd.c | 49 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 43 insertions(+), 6 deletions(-)
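The dispatch policy only tries the offload path when the source and the
first destination sit on the same disk, and falls back to the page-based
path for any destination that was not offloaded. A simplified, stand-alone
model of that decision (the types and helpers here are hypothetical, not
kcopyd's own):

/* Illustration only -- not part of the patch. */
#include <stdbool.h>

struct fake_dest {
	int disk_id;
	unsigned long count;	/* sectors still to be copied */
};

struct fake_job {
	int src_disk_id;
	struct fake_dest *dests;
	int num_dests;
};

/* Mirrors the "source disk == first destination disk" test in dispatch_job(). */
static bool can_try_offload(const struct fake_job *job)
{
	return job->num_dests > 0 &&
	       job->dests[0].disk_id == job->src_disk_id;
}

/*
 * Returns true when every destination was offloaded (the job would go to
 * complete_jobs); false means the caller falls back to the page-based
 * path for the destinations whose count is still non-zero.
 */
static bool offload_job(struct fake_job *job, bool (*issue)(struct fake_dest *))
{
	int done = 0;

	for (int i = 0; i < job->num_dests; i++) {
		if (!issue(&job->dests[i]))
			break;
		job->dests[i].count = 0;	/* this destination is done */
		done++;
	}
	return done == job->num_dests;
}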
diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 1bbe4a34ef4c..2442c4870e97 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -74,18 +74,20 @@ struct dm_kcopyd_client {
 	atomic_t nr_jobs;
 
 /*
- * We maintain four lists of jobs:
+ * We maintain five lists of jobs:
  *
- * i) jobs waiting for pages
- * ii) jobs that have pages, and are waiting for the io to be issued.
- * iii) jobs that don't need to do any IO and just run a callback
- * iv) jobs that have completed.
+ * i) jobs waiting to try copy offload
+ * ii) jobs waiting for pages
+ * iii) jobs that have pages, and are waiting for the io to be issued.
+ * iv) jobs that don't need to do any IO and just run a callback
+ * v) jobs that have completed.
  *
- * All four of these are protected by job_lock.
+ * All five of these are protected by job_lock.
  */
 	spinlock_t job_lock;
 	struct list_head callback_jobs;
 	struct list_head complete_jobs;
+	struct list_head copy_jobs;
 	struct list_head io_jobs;
 	struct list_head pages_jobs;
 };
@@ -581,6 +583,36 @@ static int run_io_job(struct kcopyd_job *job)
 	return r;
 }
 
+static int run_copy_job(struct kcopyd_job *job)
+{
+	int r, i, count = 0;
+	unsigned long flags = 0;
+	struct range_entry srange;
+
+	flags |= BLKDEV_COPY_NOEMULATION;
+	for (i = 0; i < job->num_dests; i++) {
+		srange.src = job->source.sector;
+		srange.len = job->source.count;
+
+		r = blkdev_issue_copy(job->source.bdev, 1, &srange,
+			job->dests[i].bdev, job->dests[i].sector, GFP_KERNEL, flags);
+		if (r)
+			break;
+
+		job->dests[i].count = 0;
+		count++;
+	}
+
+	if (count == job->num_dests) {
+		push(&job->kc->complete_jobs, job);
+	} else {
+		push(&job->kc->pages_jobs, job);
+		r = 0;
+	}
+
+	return r;
+}
+
 static int run_pages_job(struct kcopyd_job *job)
 {
 	int r;
@@ -662,6 +694,7 @@ static void do_work(struct work_struct *work)
 	spin_unlock_irqrestore(&kc->job_lock, flags);
 
 	blk_start_plug(&plug);
+	process_jobs(&kc->copy_jobs, kc, run_copy_job);
 	process_jobs(&kc->complete_jobs, kc, run_complete_job);
 	process_jobs(&kc->pages_jobs, kc, run_pages_job);
 	process_jobs(&kc->io_jobs, kc, run_io_job);
@@ -679,6 +712,8 @@ static void dispatch_job(struct kcopyd_job *job)
 	atomic_inc(&kc->nr_jobs);
 	if (unlikely(!job->source.count))
 		push(&kc->callback_jobs, job);
+	else if (job->source.bdev->bd_disk == job->dests[0].bdev->bd_disk)
+		push(&kc->copy_jobs, job);
 	else if (job->pages == &zero_page_list)
 		push(&kc->io_jobs, job);
 	else
@@ -919,6 +954,7 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *thro
 	spin_lock_init(&kc->job_lock);
 	INIT_LIST_HEAD(&kc->callback_jobs);
 	INIT_LIST_HEAD(&kc->complete_jobs);
+	INIT_LIST_HEAD(&kc->copy_jobs);
 	INIT_LIST_HEAD(&kc->io_jobs);
 	INIT_LIST_HEAD(&kc->pages_jobs);
 	kc->throttle = throttle;
@@ -974,6 +1010,7 @@ void dm_kcopyd_client_destroy(struct dm_kcopyd_client *kc)
 	BUG_ON(!list_empty(&kc->callback_jobs));
 	BUG_ON(!list_empty(&kc->complete_jobs));
+	BUG_ON(!list_empty(&kc->copy_jobs));
 	BUG_ON(!list_empty(&kc->io_jobs));
 	BUG_ON(!list_empty(&kc->pages_jobs));
 	destroy_workqueue(kc->kcopyd_wq);
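For reference, a minimal caller sketch showing the call shape that
run_copy_job() uses for a single source range. The copy offload API and
struct range_entry are introduced earlier in this series; the signature
below is inferred from the call site above, and kernel context (block
devices, sector numbers) is assumed rather than provided:

/* Sketch only -- mirrors the blkdev_issue_copy() usage in run_copy_job(). */
static int copy_one_range(struct block_device *src_bdev, sector_t src_sector,
			  sector_t nr_sectors, struct block_device *dst_bdev,
			  sector_t dst_sector)
{
	struct range_entry srange = {
		.src = src_sector,
		.len = nr_sectors,
	};

	/* request native copy only; no read/write emulation fallback */
	return blkdev_issue_copy(src_bdev, 1, &srange, dst_bdev, dst_sector,
				 GFP_KERNEL, BLKDEV_COPY_NOEMULATION);
}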