From patchwork Wed Aug 31 04:18:37 2022
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 12960354
From: John Hubbard
To: Andrew Morton
CC: Jens Axboe, Alexander Viro, Miklos Szeredi, Christoph Hellwig,
    "Darrick J. Wong", Trond Myklebust, Anna Schumaker, Jan Kara,
    David Hildenbrand, Logan Gunthorpe, LKML, John Hubbard
Subject: [PATCH v2 1/7] mm: change release_pages() to use unsigned long for npages
Date: Tue, 30 Aug 2022 21:18:37 -0700
Message-ID: <20220831041843.973026-2-jhubbard@nvidia.com>
In-Reply-To: <20220831041843.973026-1-jhubbard@nvidia.com>
References: <20220831041843.973026-1-jhubbard@nvidia.com>
X-Mailing-List: linux-xfs@vger.kernel.org

The various callers of release_pages() pass in integers of various types
(signed or unsigned) and lengths (int or long) for the second argument
(the number of pages). To make this conversion accurate, and to avoid
having to check for overflow (or deal with type conversion warnings),
change release_pages() to accept an unsigned long for the number of
pages.

Also rename the argument from "nr" to "npages", for clarity, since that
line is being changed anyway.
Signed-off-by: John Hubbard
---
 include/linux/mm.h | 2 +-
 mm/swap.c          | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 21f8b27bd9fd..61c5dc37370e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1145,7 +1145,7 @@ static inline void folio_put_refs(struct folio *folio, int refs)
 	__folio_put(folio);
 }
 
-void release_pages(struct page **pages, int nr);
+void release_pages(struct page **pages, unsigned long npages);
 
 /**
  * folios_put - Decrement the reference count on an array of folios.
diff --git a/mm/swap.c b/mm/swap.c
index 9cee7f6a3809..ac6482d86187 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -931,15 +931,15 @@ void lru_cache_disable(void)
  * Decrement the reference count on all the pages in @pages. If it
  * fell to zero, remove the page from the LRU and free it.
  */
-void release_pages(struct page **pages, int nr)
+void release_pages(struct page **pages, unsigned long npages)
 {
-	int i;
+	unsigned long i;
 	LIST_HEAD(pages_to_free);
 	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
 	unsigned int lock_batch;
 
-	for (i = 0; i < nr; i++) {
+	for (i = 0; i < npages; i++) {
 		struct folio *folio = page_folio(pages[i]);
 
 		/*

From patchwork Wed Aug 31 04:18:38 2022
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 12960352
From: John Hubbard
To: Andrew Morton
CC: Jens Axboe, Alexander Viro, Miklos Szeredi, Christoph Hellwig,
    "Darrick J. Wong", Trond Myklebust, Anna Schumaker, Jan Kara,
    David Hildenbrand, Logan Gunthorpe, LKML, John Hubbard
Subject: [PATCH v2 2/7] mm/gup: introduce pin_user_page()
Date: Tue, 30 Aug 2022 21:18:38 -0700
Message-ID: <20220831041843.973026-3-jhubbard@nvidia.com>
In-Reply-To: <20220831041843.973026-1-jhubbard@nvidia.com>
References: <20220831041843.973026-1-jhubbard@nvidia.com>
pin_user_page() is an externally-usable version of try_grab_page(), but
with semantics that match get_page(), so that it can act as a drop-in
replacement for get_page(). Specifically, pin_user_page() has a void
return type.

pin_user_page() elevates a page's refcount using FOLL_PIN rules. This
means that the caller must release the page via unpin_user_page().
Signed-off-by: John Hubbard
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 50 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 61c5dc37370e..c6c98d9c38ba 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1876,6 +1876,7 @@ long pin_user_pages_remote(struct mm_struct *mm,
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas);
+void pin_user_page(struct page *page);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas);
diff --git a/mm/gup.c b/mm/gup.c
index 5abdaf487460..2c231dca39dd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3213,6 +3213,56 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 }
 EXPORT_SYMBOL(pin_user_pages);
 
+/**
+ * pin_user_page() - apply a FOLL_PIN reference to a file-backed page that
+ * the caller already owns.
+ *
+ * @page: the page to be pinned.
+ *
+ * pin_user_page() elevates a page's refcount using FOLL_PIN rules. This
+ * means that the caller must release the page via unpin_user_page().
+ *
+ * pin_user_page() is intended as a drop-in replacement for get_page(). This
+ * provides a way for callers to do a subsequent unpin_user_page() on the
+ * affected page. However, it is only intended for use by callers (file
+ * systems, block/bio) that have a file-backed page. Anonymous pages are
+ * neither expected nor supported, and will generate a warning.
+ *
+ * pin_user_page() may also be thought of as an externally-usable version of
+ * try_grab_page(), but with semantics that match get_page(), so that it can
+ * act as a drop-in replacement for get_page().
+ *
+ * IMPORTANT: The caller must release the page via unpin_user_page().
+ */
+void pin_user_page(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+
+	WARN_ON_ONCE(folio_ref_count(folio) <= 0);
+
+	/*
+	 * This function is only intended for file-backed callers, who already
+	 * have a page reference.
+	 */
+	WARN_ON_ONCE(PageAnon(page));
+
+	/*
+	 * Similar to try_grab_page(): be sure to *also* increment the normal
+	 * page refcount field at least once, so that the page really is
+	 * pinned.
+	 */
+	if (folio_test_large(folio)) {
+		folio_ref_add(folio, 1);
+		atomic_add(1, folio_pincount_ptr(folio));
+	} else {
+		folio_ref_add(folio, GUP_PIN_COUNTING_BIAS);
+	}
+
+	node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, 1);
+}
+EXPORT_SYMBOL(pin_user_page);
+
 /*
  * pin_user_pages_unlocked() is the FOLL_PIN variant of
  * get_user_pages_unlocked(). Behavior is the same, except that this one sets

From patchwork Wed Aug 31 04:18:39 2022
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 12960350
From: John Hubbard
To: Andrew Morton
CC: Jens Axboe, Alexander Viro, Miklos Szeredi, Christoph Hellwig,
    "Darrick J. Wong", Trond Myklebust, Anna Schumaker, Jan Kara,
    David Hildenbrand, Logan Gunthorpe, LKML, John Hubbard
Subject: [PATCH v2 3/7] block: add dio_w_*() wrappers for pin, unpin user pages
Date: Tue, 30 Aug 2022 21:18:39 -0700
Message-ID: <20220831041843.973026-4-jhubbard@nvidia.com>
In-Reply-To: <20220831041843.973026-1-jhubbard@nvidia.com>
References: <20220831041843.973026-1-jhubbard@nvidia.com>
Background: the Direct IO part of the block infrastructure is being
changed to use pin_user_page*() and unpin_user_page*() calls, in place
of a mix of get_user_pages_fast(), get_page(), and put_page(). These
have to be changed over all at the same time, for block, bio, and all
filesystems. However, most filesystems can be changed via iomap and core
filesystem routines, so let's get that in place, and then continue on
with converting the remaining filesystems (9P, CIFS) and anything else
that feeds pages into bio and ultimately releases them via
bio_release_pages().

Add a new config parameter, CONFIG_BLK_USE_PIN_USER_PAGES_FOR_DIO, and
dio_w_*() wrapper functions.
The dio_w_ prefix was chosen for uniqueness, so as to ease a subsequent
kernel-wide rename via search-and-replace. Together, these allow the
developer to choose between two sets of routines for Direct IO code
paths:

    a) pin_user_pages_fast()
       pin_user_page()
       unpin_user_page()

    b) get_user_pages_fast()
       get_page()
       put_page()

CONFIG_BLK_USE_PIN_USER_PAGES_FOR_DIO is a temporary setting, and may be
deleted once the conversion is complete. In the meantime, developers can
enable it in order to try out each filesystem.

Please remember that these /proc/vmstat items (below) should normally
contain the same values as each other, except during the middle of
pin/unpin operations. As such, they can be helpful when monitoring test
runs:

    nr_foll_pin_acquired
    nr_foll_pin_released

Signed-off-by: John Hubbard
---
 block/Kconfig        | 24 ++++++++++++++++++++++++
 include/linux/bvec.h | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/block/Kconfig b/block/Kconfig
index 444c5ab3b67e..d4fdd606d138 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -48,6 +48,30 @@ config BLK_DEV_BSG_COMMON
 config BLK_ICQ
 	bool
 
+config BLK_USE_PIN_USER_PAGES_FOR_DIO
+	bool "DEVELOPERS ONLY: Enable pin_user_pages() for Direct IO" if EXPERT
+	default n
+
+	help
+	  For Direct IO code, retain the pages via calls to
+	  pin_user_pages_fast(), instead of via get_user_pages_fast().
+	  Likewise, use pin_user_page() instead of get_page(). And then
+	  release such pages via unpin_user_page(), instead of
+	  put_page().
+
+	  This is a temporary setting, which will be deleted once the
+	  conversion is completed, reviewed, and tested. In the meantime,
+	  developers can enable this in order to try out each filesystem.
+	  For that, it's best to monitor these /proc/vmstat items:
+
+	  nr_foll_pin_acquired
+	  nr_foll_pin_released
+
+	  ...to ensure that they remain equal, when "at rest".
+
+	  Say yes here ONLY if you are actively developing or testing the
+	  block layer or filesystems with pin_user_pages_fast().
+
 config BLK_DEV_BSGLIB
 	bool "Block layer SG support v4 helper lib"
 	select BLK_DEV_BSG_COMMON
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 35c25dff651a..952d869702d2 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -241,4 +241,41 @@ static inline void *bvec_virt(struct bio_vec *bvec)
 	return page_address(bvec->bv_page) + bvec->bv_offset;
 }
 
+#ifdef CONFIG_BLK_USE_PIN_USER_PAGES_FOR_DIO
+#define dio_w_pin_user_pages_fast(s, n, p, f)	pin_user_pages_fast(s, n, p, f)
+#define dio_w_pin_user_page(p)			pin_user_page(p)
+#define dio_w_iov_iter_pin_pages(i, p, m, n, s)	iov_iter_pin_pages(i, p, m, n, s)
+#define dio_w_iov_iter_pin_pages_alloc(i, p, m, s) iov_iter_pin_pages_alloc(i, p, m, s)
+#define dio_w_unpin_user_page(p)		unpin_user_page(p)
+#define dio_w_unpin_user_pages(p, n)		unpin_user_pages(p, n)
+#define dio_w_unpin_user_pages_dirty_lock(p, n, d) unpin_user_pages_dirty_lock(p, n, d)
+
+#else
+#define dio_w_pin_user_pages_fast(s, n, p, f)	get_user_pages_fast(s, n, p, f)
+#define dio_w_pin_user_page(p)			get_page(p)
+#define dio_w_iov_iter_pin_pages(i, p, m, n, s)	iov_iter_get_pages2(i, p, m, n, s)
+#define dio_w_iov_iter_pin_pages_alloc(i, p, m, s) iov_iter_get_pages_alloc2(i, p, m, s)
+#define dio_w_unpin_user_page(p)		put_page(p)
+
+static inline void dio_w_unpin_user_pages(struct page **pages,
+					  unsigned long npages)
+{
+	release_pages(pages, npages);
+}
+
+static inline void dio_w_unpin_user_pages_dirty_lock(struct page **pages,
+						     unsigned long npages,
+						     bool make_dirty)
+{
+	unsigned long i;
+
+	for (i = 0; i < npages; i++) {
+		if (make_dirty)
+			set_page_dirty_lock(pages[i]);
+		put_page(pages[i]);
+	}
+}
+
+#endif
+
 #endif /* __LINUX_BVEC_H */

From patchwork Wed Aug 31 04:18:40 2022
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 12960351
From: John Hubbard
To: Andrew Morton
CC: Jens Axboe, Alexander Viro, Miklos Szeredi, Christoph Hellwig,
 "Darrick J. Wong", Trond Myklebust, Anna Schumaker, Jan Kara,
 David Hildenbrand, Logan Gunthorpe, LKML, John Hubbard
Subject: [PATCH v2 4/7] iov_iter: new iov_iter_pin_pages*() routines
Date: Tue, 30 Aug 2022 21:18:40 -0700
Message-ID: <20220831041843.973026-5-jhubbard@nvidia.com>
In-Reply-To: <20220831041843.973026-1-jhubbard@nvidia.com>
References: <20220831041843.973026-1-jhubbard@nvidia.com>
X-Mailing-List: linux-xfs@vger.kernel.org

Provide two new wrapper routines that are intended for user space pages
only:

    iov_iter_pin_pages()
    iov_iter_pin_pages_alloc()

Internally, these routines call pin_user_pages_fast(), instead of
get_user_pages_fast(), for user_backed_iter(i) and iov_iter_bvec(i)
cases. As always, callers must use unpin_user_pages() or a suitable
FOLL_PIN variant, to release the pages, if they actually were acquired
via pin_user_pages_fast().

This is a prerequisite to converting bio/block layers over to use
pin_user_pages_fast().

Signed-off-by: John Hubbard
---
 include/linux/uio.h |  4 +++
 lib/iov_iter.c      | 86 +++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 84 insertions(+), 6 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 5896af36199c..e26908e443d1 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -251,6 +251,10 @@ ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages,
 		size_t maxsize, unsigned maxpages, size_t *start);
 ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i, struct page ***pages,
 		size_t maxsize, size_t *start);
+ssize_t iov_iter_pin_pages(struct iov_iter *i, struct page **pages,
+		size_t maxsize, unsigned int maxpages, size_t *start);
+ssize_t iov_iter_pin_pages_alloc(struct iov_iter *i, struct page ***pages,
+		size_t maxsize, size_t *start);
 int iov_iter_npages(const struct iov_iter *i, int maxpages);
 void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state);

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 4b7fce72e3e5..c63ce0eadfcb 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1425,9 +1425,31 @@ static struct page *first_bvec_segment(const struct iov_iter *i,
 	return page;
 }
 
+enum pages_alloc_internal_flags {
+	USE_FOLL_GET,
+	MAYBE_USE_FOLL_PIN
+};
+
+/*
+ * Pins pages, either via get_page(),
+ * or via pin_user_page*(). The caller is responsible for tracking which
+ * pinning mechanism was used here, and releasing pages via the
+ * appropriate call: put_page() or unpin_user_page().
+ *
+ * The way to figure that out is:
+ *
+ * a) If how_to_pin == USE_FOLL_GET, then this routine will always pin
+ *    via get_page().
+ *
+ * b) If how_to_pin == MAYBE_USE_FOLL_PIN, then this routine will pin via
+ *    pin_user_page*() for either user_backed_iter(i) cases, or
+ *    iov_iter_is_bvec(i) cases. However, for the other cases (pipe,
+ *    xarray), pages will be pinned via get_page().
+ */
 static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 		struct page ***pages, size_t maxsize,
-		unsigned int maxpages, size_t *start)
+		unsigned int maxpages, size_t *start,
+		enum pages_alloc_internal_flags how_to_pin)
+
 {
 	unsigned int n;
@@ -1454,7 +1476,12 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 		n = want_pages_array(pages, maxsize, *start, maxpages);
 		if (!n)
 			return -ENOMEM;
-		res = get_user_pages_fast(addr, n, gup_flags, *pages);
+
+		if (how_to_pin == MAYBE_USE_FOLL_PIN)
+			res = pin_user_pages_fast(addr, n, gup_flags, *pages);
+		else
+			res = get_user_pages_fast(addr, n, gup_flags, *pages);
+
 		if (unlikely(res <= 0))
 			return res;
 		maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - *start);
@@ -1470,8 +1497,13 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
 		if (!n)
 			return -ENOMEM;
 		p = *pages;
-		for (int k = 0; k < n; k++)
-			get_page(p[k] = page + k);
+		for (int k = 0; k < n; k++) {
+			p[k] = page + k;
+			if (how_to_pin == MAYBE_USE_FOLL_PIN)
+				pin_user_page(p[k]);
+			else
+				get_page(p[k]);
+		}
 		maxsize = min_t(size_t, maxsize, n * PAGE_SIZE - *start);
 		i->count -= maxsize;
 		i->iov_offset += maxsize;
@@ -1497,10 +1529,29 @@ ssize_t iov_iter_get_pages2(struct iov_iter *i,
 		return 0;
 	BUG_ON(!pages);
-	return __iov_iter_get_pages_alloc(i, &pages, maxsize, maxpages, start);
+	return __iov_iter_get_pages_alloc(i, &pages, maxsize, maxpages, start,
+
					  USE_FOLL_GET);
 }
 EXPORT_SYMBOL(iov_iter_get_pages2);
 
+/*
+ * A FOLL_PIN variant that calls pin_user_pages_fast() instead of
+ * get_user_pages_fast().
+ */
+ssize_t iov_iter_pin_pages(struct iov_iter *i,
+		struct page **pages, size_t maxsize, unsigned int maxpages,
+		size_t *start)
+{
+	if (!maxpages)
+		return 0;
+	if (WARN_ON_ONCE(!pages))
+		return -EINVAL;
+
+	return __iov_iter_get_pages_alloc(i, &pages, maxsize, maxpages, start,
+					  MAYBE_USE_FOLL_PIN);
+}
+EXPORT_SYMBOL(iov_iter_pin_pages);
+
 ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 		struct page ***pages, size_t maxsize,
 		size_t *start)
@@ -1509,7 +1560,8 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 
 	*pages = NULL;
-	len = __iov_iter_get_pages_alloc(i, pages, maxsize, ~0U, start);
+	len = __iov_iter_get_pages_alloc(i, pages, maxsize, ~0U, start,
+					 USE_FOLL_GET);
 	if (len <= 0) {
 		kvfree(*pages);
 		*pages = NULL;
@@ -1518,6 +1570,28 @@
 	}
 }
 EXPORT_SYMBOL(iov_iter_get_pages_alloc2);
 
+/*
+ * A FOLL_PIN variant that calls pin_user_pages_fast() instead of
+ * get_user_pages_fast().
+ */
+ssize_t iov_iter_pin_pages_alloc(struct iov_iter *i,
+		struct page ***pages, size_t maxsize,
+		size_t *start)
+{
+	ssize_t len;
+
+	*pages = NULL;
+
+	len = __iov_iter_get_pages_alloc(i, pages, maxsize, ~0U, start,
+					 MAYBE_USE_FOLL_PIN);
+	if (len <= 0) {
+		kvfree(*pages);
+		*pages = NULL;
+	}
+	return len;
+}
+EXPORT_SYMBOL(iov_iter_pin_pages_alloc);
+
 size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
 			       struct iov_iter *i)
 {

From patchwork Wed Aug 31 04:18:41 2022
X-Patchwork-Id: 12960353
From: John Hubbard
To: Andrew Morton
CC: Jens Axboe, Alexander Viro, Miklos Szeredi, Christoph Hellwig,
 "Darrick J. Wong", Trond Myklebust, Anna Schumaker, Jan Kara,
 David Hildenbrand, Logan Gunthorpe, LKML, John Hubbard
Subject: [PATCH v2 5/7] block, bio, fs: convert most filesystems to
 pin_user_pages_fast()
Date: Tue, 30 Aug 2022 21:18:41 -0700
Message-ID: <20220831041843.973026-6-jhubbard@nvidia.com>
In-Reply-To: <20220831041843.973026-1-jhubbard@nvidia.com>
References: <20220831041843.973026-1-jhubbard@nvidia.com>
X-Mailing-List: linux-xfs@vger.kernel.org

Use dio_w_*() wrapper calls, in place of get_user_pages_fast(),
get_page() and put_page(). This converts the Direct IO parts of most
filesystems over to using FOLL_PIN (pin_user_page*()) page pinning.
Signed-off-by: John Hubbard --- block/bio.c | 27 ++++++++++++++------------- block/blk-map.c | 7 ++++--- fs/direct-io.c | 40 ++++++++++++++++++++-------------------- fs/iomap/direct-io.c | 2 +- 4 files changed, 39 insertions(+), 37 deletions(-) diff --git a/block/bio.c b/block/bio.c index 3d3a2678fea2..6c6110f7054e 100644 --- a/block/bio.c +++ b/block/bio.c @@ -1125,7 +1125,7 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty) bio_for_each_segment_all(bvec, bio, iter_all) { if (mark_dirty && !PageCompound(bvec->bv_page)) set_page_dirty_lock(bvec->bv_page); - put_page(bvec->bv_page); + dio_w_unpin_user_page(bvec->bv_page); } } EXPORT_SYMBOL_GPL(__bio_release_pages); @@ -1162,7 +1162,7 @@ static int bio_iov_add_page(struct bio *bio, struct page *page, } if (same_page) - put_page(page); + dio_w_unpin_user_page(page); return 0; } @@ -1176,7 +1176,7 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page, queue_max_zone_append_sectors(q), &same_page) != len) return -EINVAL; if (same_page) - put_page(page); + dio_w_unpin_user_page(page); return 0; } @@ -1187,10 +1187,10 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page, * @bio: bio to add pages to * @iter: iov iterator describing the region to be mapped * - * Pins pages from *iter and appends them to @bio's bvec array. The - * pages will have to be released using put_page() when done. - * For multi-segment *iter, this function only adds pages from the - * next non-empty segment of the iov iterator. + * Pins pages from *iter and appends them to @bio's bvec array. The pages will + * have to be released using dio_w_unpin_user_page when done. For multi-segment + * *iter, this function only adds pages from the next non-empty segment of the + * iov iterator. 
*/ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) { @@ -1218,8 +1218,9 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) * result to ensure the bio's total size is correct. The remainder of * the iov data will be picked up in the next bio iteration. */ - size = iov_iter_get_pages2(iter, pages, UINT_MAX - bio->bi_iter.bi_size, - nr_pages, &offset); + size = dio_w_iov_iter_pin_pages(iter, pages, + UINT_MAX - bio->bi_iter.bi_size, + nr_pages, &offset); if (unlikely(size <= 0)) return size ? size : -EFAULT; @@ -1252,7 +1253,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) iov_iter_revert(iter, left); out: while (i < nr_pages) - put_page(pages[i++]); + dio_w_unpin_user_page(pages[i++]); return ret; } @@ -1444,9 +1445,9 @@ void bio_set_pages_dirty(struct bio *bio) * have been written out during the direct-IO read. So we take another ref on * the BIO and re-dirty the pages in process context. * - * It is expected that bio_check_pages_dirty() will wholly own the BIO from - * here on. It will run one put_page() against each page and will run one - * bio_put() against the BIO. + * It is expected that bio_check_pages_dirty() will wholly own the BIO from here + * on. It will run one dio_w_unpin_user_page() against each page and will run + * one bio_put() against the BIO. */ static void bio_dirty_fn(struct work_struct *work); diff --git a/block/blk-map.c b/block/blk-map.c index 7196a6b64c80..4e333ad9776d 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -254,7 +254,8 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, size_t offs, added = 0; int npages; - bytes = iov_iter_get_pages_alloc2(iter, &pages, LONG_MAX, &offs); + bytes = dio_w_iov_iter_pin_pages_alloc(iter, &pages, LONG_MAX, + &offs); if (unlikely(bytes <= 0)) { ret = bytes ? 
bytes : -EFAULT; goto out_unmap; @@ -276,7 +277,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, if (!bio_add_hw_page(rq->q, bio, page, n, offs, max_sectors, &same_page)) { if (same_page) - put_page(page); + dio_w_unpin_user_page(page); break; } @@ -289,7 +290,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, * release the pages we didn't map into the bio, if any */ while (j < npages) - put_page(pages[j++]); + dio_w_unpin_user_page(pages[j++]); kvfree(pages); /* couldn't stuff something into bio? */ if (bytes) { diff --git a/fs/direct-io.c b/fs/direct-io.c index f669163d5860..05c044c55374 100644 --- a/fs/direct-io.c +++ b/fs/direct-io.c @@ -169,8 +169,8 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio) const enum req_op dio_op = dio->opf & REQ_OP_MASK; ssize_t ret; - ret = iov_iter_get_pages2(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES, - &sdio->from); + ret = dio_w_iov_iter_pin_pages(sdio->iter, dio->pages, LONG_MAX, + DIO_PAGES, &sdio->from); if (ret < 0 && sdio->blocks_available && dio_op == REQ_OP_WRITE) { struct page *page = ZERO_PAGE(0); @@ -181,7 +181,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio) */ if (dio->page_errors == 0) dio->page_errors = ret; - get_page(page); + dio_w_pin_user_page(page); dio->pages[0] = page; sdio->head = 0; sdio->tail = 1; @@ -197,7 +197,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio) sdio->to = ((ret - 1) & (PAGE_SIZE - 1)) + 1; return 0; } - return ret; + return ret; } /* @@ -324,7 +324,7 @@ static void dio_aio_complete_work(struct work_struct *work) static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio); /* - * Asynchronous IO callback. + * Asynchronous IO callback. 
*/ static void dio_bio_end_aio(struct bio *bio) { @@ -449,7 +449,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio) static inline void dio_cleanup(struct dio *dio, struct dio_submit *sdio) { while (sdio->head < sdio->tail) - put_page(dio->pages[sdio->head++]); + dio_w_unpin_user_page(dio->pages[sdio->head++]); } /* @@ -716,7 +716,7 @@ static inline int dio_bio_add_page(struct dio_submit *sdio) */ if ((sdio->cur_page_len + sdio->cur_page_offset) == PAGE_SIZE) sdio->pages_in_io--; - get_page(sdio->cur_page); + dio_w_pin_user_page(sdio->cur_page); sdio->final_block_in_bio = sdio->cur_page_block + (sdio->cur_page_len >> sdio->blkbits); ret = 0; @@ -725,7 +725,7 @@ static inline int dio_bio_add_page(struct dio_submit *sdio) } return ret; } - + /* * Put cur_page under IO. The section of cur_page which is described by * cur_page_offset,cur_page_len is put into a BIO. The section of cur_page @@ -787,7 +787,7 @@ static inline int dio_send_cur_page(struct dio *dio, struct dio_submit *sdio, * An autonomous function to put a chunk of a page under deferred IO. * * The caller doesn't actually know (or care) whether this piece of page is in - * a BIO, or is under IO or whatever. We just take care of all possible + * a BIO, or is under IO or whatever. We just take care of all possible * situations here. The separation between the logic of do_direct_IO() and * that of submit_page_section() is important for clarity. Please don't break. 
* @@ -832,13 +832,13 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page, */ if (sdio->cur_page) { ret = dio_send_cur_page(dio, sdio, map_bh); - put_page(sdio->cur_page); + dio_w_unpin_user_page(sdio->cur_page); sdio->cur_page = NULL; if (ret) return ret; } - get_page(page); /* It is in dio */ + dio_w_pin_user_page(page); /* It is in dio */ sdio->cur_page = page; sdio->cur_page_offset = offset; sdio->cur_page_len = len; @@ -853,7 +853,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page, ret = dio_send_cur_page(dio, sdio, map_bh); if (sdio->bio) dio_bio_submit(dio, sdio); - put_page(sdio->cur_page); + dio_w_unpin_user_page(sdio->cur_page); sdio->cur_page = NULL; } return ret; @@ -890,7 +890,7 @@ static inline void dio_zero_block(struct dio *dio, struct dio_submit *sdio, * We need to zero out part of an fs block. It is either at the * beginning or the end of the fs block. */ - if (end) + if (end) this_chunk_blocks = dio_blocks_per_fs_block - this_chunk_blocks; this_chunk_bytes = this_chunk_blocks << sdio->blkbits; @@ -954,7 +954,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio, ret = get_more_blocks(dio, sdio, map_bh); if (ret) { - put_page(page); + dio_w_unpin_user_page(page); goto out; } if (!buffer_mapped(map_bh)) @@ -999,7 +999,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio, /* AKPM: eargh, -ENOTBLK is a hack */ if (dio_op == REQ_OP_WRITE) { - put_page(page); + dio_w_unpin_user_page(page); return -ENOTBLK; } @@ -1012,7 +1012,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio, if (sdio->block_in_file >= i_size_aligned >> blkbits) { /* We hit eof */ - put_page(page); + dio_w_unpin_user_page(page); goto out; } zero_user(page, from, 1 << blkbits); @@ -1052,7 +1052,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio, sdio->next_block_for_io, map_bh); if (ret) { - put_page(page); + dio_w_unpin_user_page(page); goto out; } 
sdio->next_block_for_io += this_chunk_blocks; @@ -1067,8 +1067,8 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio, break; } - /* Drop the ref which was taken in get_user_pages() */ - put_page(page); + /* Drop the ref which was taken in [get|pin]_user_pages() */ + dio_w_unpin_user_page(page); } out: return ret; @@ -1288,7 +1288,7 @@ ssize_t __blockdev_direct_IO(struct kiocb *iocb, struct inode *inode, ret2 = dio_send_cur_page(dio, &sdio, &map_bh); if (retval == 0) retval = ret2; - put_page(sdio.cur_page); + dio_w_unpin_user_page(sdio.cur_page); sdio.cur_page = NULL; } if (sdio.bio) diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c index 4eb559a16c9e..fc7763c418d1 100644 --- a/fs/iomap/direct-io.c +++ b/fs/iomap/direct-io.c @@ -202,7 +202,7 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio, bio->bi_private = dio; bio->bi_end_io = iomap_dio_bio_end_io; - get_page(page); + dio_w_pin_user_page(page); __bio_add_page(bio, page, len, 0); iomap_dio_submit_bio(iter, dio, bio, pos); }

From patchwork Wed Aug 31 04:18:42 2022
X-Patchwork-Id: 12960356
From: John Hubbard
To: Andrew Morton
CC: Jens Axboe, Alexander Viro, Miklos Szeredi, Christoph Hellwig,
	"Darrick J. Wong", Trond Myklebust, Anna Schumaker, Jan Kara,
	David Hildenbrand, Logan Gunthorpe, LKML, John Hubbard
Subject: [PATCH v2 6/7] NFS: direct-io: convert to FOLL_PIN pages
Date: Tue, 30 Aug 2022 21:18:42 -0700
Message-ID: <20220831041843.973026-7-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220831041843.973026-1-jhubbard@nvidia.com>
References: <20220831041843.973026-1-jhubbard@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-xfs@vger.kernel.org

Convert the NFS Direct IO layer to use pin_user_pages_fast() and
unpin_user_page(), instead of get_user_pages_fast() and put_page().
The user of pin_user_pages_fast() depends upon:

1) CONFIG_BLK_USE_PIN_USER_PAGES_FOR_DIO, and

2) User-space-backed pages: user_backed_iter(i) == true

Signed-off-by: John Hubbard
---
 fs/nfs/direct.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index 1707f46b1335..71b794f39ee2 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -142,11 +142,13 @@ int nfs_swap_rw(struct kiocb *iocb, struct iov_iter *iter)
 	return 0;
 }
 
-static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
+static void nfs_direct_release_pages(struct iov_iter *iter, struct page **pages,
+				     unsigned int npages)
 {
-	unsigned int i;
-	for (i = 0; i < npages; i++)
-		put_page(pages[i]);
+	if (user_backed_iter(iter) || iov_iter_is_bvec(iter))
+		dio_w_unpin_user_pages(pages, npages);
+	else
+		release_pages(pages, npages);
 }
 
 void nfs_init_cinfo_from_dreq(struct nfs_commit_info *cinfo,
@@ -332,11 +334,11 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
 		size_t pgbase;
 		unsigned npages, i;
 
-		result = iov_iter_get_pages_alloc2(iter, &pagevec,
+		result = dio_w_iov_iter_pin_pages_alloc(iter, &pagevec,
 						  rsize, &pgbase);
 		if (result < 0)
 			break;
-
+
 		bytes = result;
 		npages = (result + pgbase + PAGE_SIZE - 1) / PAGE_SIZE;
 		for (i = 0; i < npages; i++) {
@@ -362,7 +364,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
 			pos += req_len;
 			dreq->bytes_left -= req_len;
 		}
-		nfs_direct_release_pages(pagevec, npages);
+		nfs_direct_release_pages(iter, pagevec, npages);
 		kvfree(pagevec);
 		if (result < 0)
 			break;
@@ -791,8 +793,8 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
 		size_t pgbase;
 		unsigned npages, i;
 
-		result = iov_iter_get_pages_alloc2(iter, &pagevec,
-						  wsize, &pgbase);
+		result = dio_w_iov_iter_pin_pages_alloc(iter, &pagevec,
+							wsize, &pgbase);
 		if (result < 0)
 			break;
 
@@ -829,7 +831,7 @@ static ssize_t
nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
 			pos += req_len;
 			dreq->bytes_left -= req_len;
 		}
-		nfs_direct_release_pages(pagevec, npages);
+		nfs_direct_release_pages(iter, pagevec, npages);
 		kvfree(pagevec);
 		if (result < 0)
 			break;

From patchwork Wed Aug 31 04:18:43 2022
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 12960355
From: John Hubbard
To: Andrew Morton
CC: Jens Axboe, Alexander Viro, Miklos Szeredi, Christoph Hellwig,
	"Darrick J. Wong", Trond Myklebust, Anna Schumaker, Jan Kara,
	David Hildenbrand, Logan Gunthorpe, LKML, John Hubbard
Subject: [PATCH v2 7/7] fuse: convert direct IO paths to use FOLL_PIN
Date: Tue, 30 Aug 2022 21:18:43 -0700
Message-ID: <20220831041843.973026-8-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220831041843.973026-1-jhubbard@nvidia.com>
References: <20220831041843.973026-1-jhubbard@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-xfs@vger.kernel.org

Convert the fuse filesystem to use pin_user_pages_fast() and
unpin_user_page(), instead of get_user_pages_fast() and put_page().

The user of pin_user_pages_fast() depends upon:

1) CONFIG_BLK_USE_PIN_USER_PAGES_FOR_DIO, and

2) User-space-backed pages or ITER_BVEC pages.
Signed-off-by: John Hubbard
---
 fs/fuse/dev.c    | 11 +++++++++--
 fs/fuse/file.c   | 32 +++++++++++++++++++++-----------
 fs/fuse/fuse_i.h |  1 +
 3 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 51897427a534..5de98a7a45b1 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -675,7 +675,12 @@ static void fuse_copy_finish(struct fuse_copy_state *cs)
 			flush_dcache_page(cs->pg);
 			set_page_dirty_lock(cs->pg);
 		}
-		put_page(cs->pg);
+		if (!cs->pipebufs &&
+		    (user_backed_iter(cs->iter) || iov_iter_is_bvec(cs->iter)))
+			dio_w_unpin_user_page(cs->pg);
+
+		else
+			put_page(cs->pg);
 	}
 	cs->pg = NULL;
 }
@@ -730,7 +735,9 @@ static int fuse_copy_fill(struct fuse_copy_state *cs)
 		}
 	} else {
 		size_t off;
-		err = iov_iter_get_pages2(cs->iter, &page, PAGE_SIZE, 1, &off);
+
+		err = dio_w_iov_iter_pin_pages(cs->iter, &page, PAGE_SIZE, 1,
+					       &off);
 		if (err < 0)
 			return err;
 		BUG_ON(!err);
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 1a3afd469e3a..01da38928d0b 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -625,14 +625,19 @@ void fuse_read_args_fill(struct fuse_io_args *ia, struct file *file, loff_t pos,
 }
 
 static void fuse_release_user_pages(struct fuse_args_pages *ap,
-				    bool should_dirty)
+				    bool should_dirty, bool is_user_or_bvec)
 {
 	unsigned int i;
 
-	for (i = 0; i < ap->num_pages; i++) {
-		if (should_dirty)
-			set_page_dirty_lock(ap->pages[i]);
-		put_page(ap->pages[i]);
+	if (is_user_or_bvec) {
+		dio_w_unpin_user_pages_dirty_lock(ap->pages, ap->num_pages,
+						  should_dirty);
+	} else {
+		for (i = 0; i < ap->num_pages; i++) {
+			if (should_dirty)
+				set_page_dirty_lock(ap->pages[i]);
+			put_page(ap->pages[i]);
+		}
 	}
 }
 
@@ -733,7 +738,7 @@ static void fuse_aio_complete_req(struct fuse_mount *fm, struct fuse_args *args,
 	struct fuse_io_priv *io = ia->io;
 	ssize_t pos = -1;
 
-	fuse_release_user_pages(&ia->ap, io->should_dirty);
+	fuse_release_user_pages(&ia->ap, io->should_dirty, io->is_user_or_bvec);
 
 	if (err) {
 		/* Nothing */
@@ -1414,10 +1419,10 @@ static
int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
 	while (nbytes < *nbytesp && ap->num_pages < max_pages) {
 		unsigned npages;
 		size_t start;
-		ret = iov_iter_get_pages2(ii, &ap->pages[ap->num_pages],
-					*nbytesp - nbytes,
-					max_pages - ap->num_pages,
-					&start);
+		ret = dio_w_iov_iter_pin_pages(ii, &ap->pages[ap->num_pages],
+					       *nbytesp - nbytes,
+					       max_pages - ap->num_pages,
+					       &start);
 		if (ret < 0)
 			break;
 
@@ -1483,6 +1488,10 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
 		fl_owner_t owner = current->files;
 		size_t nbytes = min(count, nmax);
 
+		/* For use in fuse_release_user_pages(): */
+		io->is_user_or_bvec = user_backed_iter(iter) ||
+				      iov_iter_is_bvec(iter);
+
 		err = fuse_get_user_pages(&ia->ap, iter, &nbytes, write,
 					  max_pages);
 		if (err && !nbytes)
@@ -1498,7 +1507,8 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
 		}
 
 		if (!io->async || nres < 0) {
-			fuse_release_user_pages(&ia->ap, io->should_dirty);
+			fuse_release_user_pages(&ia->ap, io->should_dirty,
+						io->is_user_or_bvec);
 			fuse_io_free(ia);
 		}
 		ia = NULL;
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index 488b460e046f..6ee7f72e29eb 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -290,6 +290,7 @@ struct fuse_io_priv {
 	struct kiocb *iocb;
 	struct completion *done;
 	bool blocking;
+	bool is_user_or_bvec;
 };
 
 #define FUSE_IO_PRIV_SYNC(i) \