From patchwork Sat Feb 12 00:29:04 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12744145
Date: Fri, 11 Feb 2022 16:29:04 -0800
From: Andrew Morton
To: willy@infradead.org, peterx@redhat.com, lukas.bulwahn@gmail.com,
 kirill.shutemov@linux.intel.com, jgg@ziepe.ca, jgg@nvidia.com, jack@suse.cz,
 imbrenda@linux.ibm.com, hch@lst.de, david@redhat.com,
 alex.williamson@redhat.com, aarcange@redhat.com, jhubbard@nvidia.com,
 akpm@linux-foundation.org, patches@lists.linux.dev, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
 akpm@linux-foundation.org
In-Reply-To: <20220211162756.9f8e8baef81183041ccfc16f@linux-foundation.org>
Subject: [patch 4/5] mm: change lookup_node() to use get_user_pages_fast()
Message-Id: <20220212002904.A1E5FC340EB@smtp.kernel.org>

From: John Hubbard
Subject: mm: change lookup_node() to use get_user_pages_fast()

The purpose of calling get_user_pages_locked() from lookup_node() was to
allow for unlocking the mmap_lock when reading a page from the disk
during a page fault (hidden behind VM_FAULT_RETRY).

The idea was to reduce contention on the heavily-used mmap_lock.  (Thanks
to Jan Kara for clearly pointing that out, and in fact I've used some of
his wording here.)

However, it is unlikely for lookup_node() to take a page fault.  With
that in mind, change over to calling get_user_pages_fast().  This
simplifies the code, runs a little faster in the expected case, and
allows removing get_user_pages_locked() entirely, in a subsequent patch.

Link: https://lkml.kernel.org/r/20220204020010.68930-5-jhubbard@nvidia.com
Signed-off-by: John Hubbard
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
Reviewed-by: Claudio Imbrenda
Reviewed-by: Christoph Hellwig
Cc: Alex Williamson
Cc: Andrea Arcangeli
Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: Kirill A. Shutemov
Cc: Lukas Bulwahn
Cc: Matthew Wilcox (Oracle)
Cc: Peter Xu
Signed-off-by: Andrew Morton

--- a/mm/mempolicy.c~mm-change-lookup_node-to-use-get_user_pages_fast
+++ a/mm/mempolicy.c
@@ -907,17 +907,14 @@ static void get_policy_nodemask(struct m
 static int lookup_node(struct mm_struct *mm, unsigned long addr)
 {
 	struct page *p = NULL;
-	int err;
+	int ret;
 
-	int locked = 1;
-	err = get_user_pages_locked(addr & PAGE_MASK, 1, 0, &p, &locked);
-	if (err > 0) {
-		err = page_to_nid(p);
+	ret = get_user_pages_fast(addr & PAGE_MASK, 1, 0, &p);
+	if (ret > 0) {
+		ret = page_to_nid(p);
 		put_page(p);
 	}
-	if (locked)
-		mmap_read_unlock(mm);
-	return err;
+	return ret;
 }
 
 /* Retrieve NUMA policy */
@@ -968,14 +965,14 @@ static long do_get_mempolicy(int *policy
 	if (flags & MPOL_F_NODE) {
 		if (flags & MPOL_F_ADDR) {
 			/*
-			 * Take a refcount on the mpol, lookup_node()
-			 * will drop the mmap_lock, so after calling
-			 * lookup_node() only "pol" remains valid, "vma"
-			 * is stale.
+			 * Take a refcount on the mpol, because we are about to
+			 * drop the mmap_lock, after which only "pol" remains
+			 * valid, "vma" is stale.
 			 */
 			pol_refcount = pol;
 			vma = NULL;
 			mpol_get(pol);
+			mmap_read_unlock(mm);
 			err = lookup_node(mm, addr);
 			if (err < 0)
 				goto out;
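
For reference, this is roughly how lookup_node() reads with the patch
applied (a minimal sketch reconstructed from the first hunk above, not a
verbatim copy of mm/mempolicy.c): get_user_pages_fast() neither takes nor
drops the mmap_lock, so the explicit unlock moves to the caller,
do_get_mempolicy(), as the second hunk shows.

/* Sketch of the resulting function, per the first hunk above. */
static int lookup_node(struct mm_struct *mm, unsigned long addr)
{
	struct page *p = NULL;
	int ret;

	/* Fast GUP: no mmap_lock is acquired or released here. */
	ret = get_user_pages_fast(addr & PAGE_MASK, 1, 0, &p);
	if (ret > 0) {
		ret = page_to_nid(p);
		put_page(p);	/* drop the reference GUP took on the page */
	}
	return ret;
}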