From patchwork Tue Mar 22 21:39:46 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12789066
Date: Tue, 22 Mar 2022 14:39:46 -0700
From: Andrew Morton
To: willy@infradead.org,peterx@redhat.com,lukas.bulwahn@gmail.com,kirill.shutemov@linux.intel.com,jgg@ziepe.ca,jgg@nvidia.com,jack@suse.cz,imbrenda@linux.ibm.com,hch@lst.de,david@redhat.com,alex.williamson@redhat.com,aarcange@redhat.com,jhubbard@nvidia.com,akpm@linux-foundation.org,patches@lists.linux.dev,linux-mm@kvack.org,mm-commits@vger.kernel.org,torvalds@linux-foundation.org,akpm@linux-foundation.org
In-Reply-To: <20220322143803.04a5e59a07e48284f196a2f9@linux-foundation.org>
Subject: [patch 025/227] mm: change lookup_node() to use get_user_pages_fast()
Message-Id: <20220322213947.971D9C340EC@smtp.kernel.org>

From: John Hubbard
Subject: mm: change lookup_node() to use get_user_pages_fast()

The purpose of calling get_user_pages_locked() from lookup_node() was to
allow for unlocking the mmap_lock when reading a page from the disk during
a page fault (hidden behind VM_FAULT_RETRY).  The idea was to reduce
contention on the heavily-used mmap_lock.  (Thanks to Jan Kara for clearly
pointing that out, and in fact I've used some of his wording here.)

However, it is unlikely for lookup_node() to take a page fault.  With that
in mind, change over to calling get_user_pages_fast().  This simplifies
the code, runs a little faster in the expected case, and allows removing
get_user_pages_locked() entirely, in a subsequent patch.

Link: https://lkml.kernel.org/r/20220204020010.68930-5-jhubbard@nvidia.com
Signed-off-by: John Hubbard
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
Reviewed-by: Claudio Imbrenda
Reviewed-by: Christoph Hellwig
Cc: Alex Williamson
Cc: Andrea Arcangeli
Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: Kirill A. Shutemov
Cc: Lukas Bulwahn
Cc: Matthew Wilcox (Oracle)
Cc: Peter Xu
Signed-off-by: Andrew Morton
---

 mm/mempolicy.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

--- a/mm/mempolicy.c~mm-change-lookup_node-to-use-get_user_pages_fast
+++ a/mm/mempolicy.c
@@ -907,17 +907,14 @@ static void get_policy_nodemask(struct m
 static int lookup_node(struct mm_struct *mm, unsigned long addr)
 {
 	struct page *p = NULL;
-	int err;
+	int ret;
 
-	int locked = 1;
-	err = get_user_pages_locked(addr & PAGE_MASK, 1, 0, &p, &locked);
-	if (err > 0) {
-		err = page_to_nid(p);
+	ret = get_user_pages_fast(addr & PAGE_MASK, 1, 0, &p);
+	if (ret > 0) {
+		ret = page_to_nid(p);
 		put_page(p);
 	}
-	if (locked)
-		mmap_read_unlock(mm);
-	return err;
+	return ret;
 }
 
 /* Retrieve NUMA policy */
@@ -968,14 +965,14 @@ static long do_get_mempolicy(int *policy
 	if (flags & MPOL_F_NODE) {
 		if (flags & MPOL_F_ADDR) {
 			/*
-			 * Take a refcount on the mpol, lookup_node()
-			 * will drop the mmap_lock, so after calling
-			 * lookup_node() only "pol" remains valid, "vma"
-			 * is stale.
+			 * Take a refcount on the mpol, because we are about to
+			 * drop the mmap_lock, after which only "pol" remains
+			 * valid, "vma" is stale.
 			 */
 			pol_refcount = pol;
			vma = NULL;
 			mpol_get(pol);
+			mmap_read_unlock(mm);
 			err = lookup_node(mm, addr);
 			if (err < 0)
 				goto out;
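
For readers less familiar with the two GUP flavours, the sketch below restates
the locking contract this change relies on.  It is illustrative only, not part
of the patch: the helper names are made up, and the locking is pulled into the
helpers to keep the sketch self-contained (in the real code, do_get_mempolicy()
holds/drops mmap_lock around lookup_node()).  The get_user_pages_locked() and
get_user_pages_fast() calls use the signatures present in this tree, with
gup_flags == 0 and a single page.

/*
 * Illustrative sketch (not part of this patch): old vs. new GUP calling
 * conventions for resolving the NUMA node of one user page.
 */
#include <linux/mm.h>
#include <linux/mmap_lock.h>

/*
 * Old pattern: slow GUP.  Caller must hold mmap_lock for read; GUP may
 * drop it on VM_FAULT_RETRY and reports that through *locked.
 */
static int nid_via_gup_locked(struct mm_struct *mm, unsigned long addr)
{
	struct page *page = NULL;
	int locked = 1;
	int err;

	mmap_read_lock(mm);
	err = get_user_pages_locked(addr & PAGE_MASK, 1, 0, &page, &locked);
	if (err > 0) {
		err = page_to_nid(page);
		put_page(page);
	}
	if (locked)		/* GUP may already have dropped the lock */
		mmap_read_unlock(mm);
	return err;
}

/* New pattern: fast GUP never takes mmap_lock at all. */
static int nid_via_gup_fast(unsigned long addr)
{
	struct page *page = NULL;
	int ret;

	ret = get_user_pages_fast(addr & PAGE_MASK, 1, 0, &page);
	if (ret > 0) {
		ret = page_to_nid(page);
		put_page(page);
	}
	return ret;
}

Because the fast variant does not touch mmap_lock, do_get_mempolicy() can
drop the lock before calling lookup_node(), which is what the added
mmap_read_unlock(mm) in the second hunk does.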