From patchwork Sat Oct 6 02:49:48 2018
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10629069
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Matthew Wilcox, Michal Hocko, Christopher Lameter, Jason Gunthorpe,
    Dan Williams, Jan Kara
Cc: linux-mm@kvack.org, LKML, linux-rdma, linux-fsdevel@vger.kernel.org,
    John Hubbard, Al Viro, Jerome Glisse, Christoph Hellwig, Ralph Campbell
Subject: [PATCH v3 2/3] mm: introduce put_user_page*(), placeholder versions
Date: Fri, 5 Oct 2018 19:49:48 -0700
Message-Id: <20181006024949.20691-3-jhubbard@nvidia.com>
In-Reply-To: <20181006024949.20691-1-jhubbard@nvidia.com>
References: <20181006024949.20691-1-jhubbard@nvidia.com>

From: John Hubbard <jhubbard@nvidia.com>

Introduces put_user_page(), which simply calls put_page(). This
provides a way to update all get_user_pages*() callers, so that they
call put_user_page() instead of put_page().

Also introduces put_user_pages(), and a few dirty/locked variations,
as a replacement for release_pages(), and also as a replacement for
open-coded loops that release multiple pages. These may be used for
subsequent performance improvements, via batching of pages to be
released.

This prepares for eventually fixing the problem described in [1], and
follows the plan listed in [2], [3], [4].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@nvidia.com
    Proposed steps for fixing get_user_pages() + DMA problems.

[3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrcz6n@quack2.suse.cz
    Bounce buffers (otherwise [2] is not really viable).

[4] https://lkml.kernel.org/r/20181003162115.GG24030@quack2.suse.cz
    Follow-up discussions.

CC: Matthew Wilcox
CC: Michal Hocko
CC: Christopher Lameter
CC: Jason Gunthorpe
CC: Dan Williams
CC: Jan Kara
CC: Al Viro
CC: Jerome Glisse
CC: Christoph Hellwig
CC: Ralph Campbell

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jan Kara
---
 include/linux/mm.h | 48 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 46 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0416a7204be3..305b206e6851 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -137,6 +137,8 @@ extern int overcommit_ratio_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
 extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
+int set_page_dirty(struct page *page);
+int set_page_dirty_lock(struct page *page);
 
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
 
@@ -943,6 +945,50 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/* Pages that were pinned via get_user_pages*() should be released via
+ * either put_user_page(), or one of the put_user_pages*() routines
+ * below.
+ */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+static inline void put_user_pages_dirty(struct page **pages,
+					unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++) {
+		if (!PageDirty(pages[index]))
+			set_page_dirty(pages[index]);
+
+		put_user_page(pages[index]);
+	}
+}
+
+static inline void put_user_pages_dirty_lock(struct page **pages,
+					     unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++) {
+		if (!PageDirty(pages[index]))
+			set_page_dirty_lock(pages[index]);
+
+		put_user_page(pages[index]);
+	}
+}
+
+static inline void put_user_pages(struct page **pages,
+				  unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++)
+		put_user_page(pages[index]);
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
@@ -1534,8 +1580,6 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 void account_page_dirtied(struct page *page, struct address_space *mapping);
 void account_page_cleaned(struct page *page, struct address_space *mapping,
 			  struct bdi_writeback *wb);
-int set_page_dirty(struct page *page);
-int set_page_dirty_lock(struct page *page);
 void __cancel_dirty_page(struct page *page);
 static inline void cancel_dirty_page(struct page *page)
 {
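
For illustration only (not part of this patch): a minimal sketch of how
a typical get_user_pages*() call site would be converted to the new
API. The function example_dma_read() and its surrounding details are
hypothetical; only get_user_pages_fast(), set_page_dirty_lock(),
put_page(), and the put_user_pages_dirty_lock() introduced above are
real interfaces here.

	/* Before: an open-coded release loop after a gup-based DMA read. */
	static int example_dma_read(unsigned long start, int nr_pages,
				    struct page **pages)
	{
		int i, ret;

		ret = get_user_pages_fast(start, nr_pages, 1, pages);
		if (ret <= 0)
			return ret;

		/* ... program the device, wait for DMA to complete ... */

		for (i = 0; i < ret; i++) {
			set_page_dirty_lock(pages[i]);
			put_page(pages[i]);
		}
		return 0;
	}

	/*
	 * After: the same release, via the new helper. Pages pinned with
	 * get_user_pages*() are released with put_user_pages_dirty_lock()
	 * instead of a bare put_page() loop, so the release of gup-pinned
	 * pages can later be tracked and batched centrally.
	 */
	static int example_dma_read(unsigned long start, int nr_pages,
				    struct page **pages)
	{
		int ret;

		ret = get_user_pages_fast(start, nr_pages, 1, pages);
		if (ret <= 0)
			return ret;

		/* ... program the device, wait for DMA to complete ... */

		put_user_pages_dirty_lock(pages, ret);
		return 0;
	}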