From patchwork Fri Oct  9 19:50:27 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 11828271
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, x86@kernel.org, Dave Hansen, Dan Williams, Fenghua Yu,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    netdev@vger.kernel.org, bpf@vger.kernel.org,
    kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
    linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
    linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
    linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-aio@kvack.org,
    io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
    linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
    reiserfs-devel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
    cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
    linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
    linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
    amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
    linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
    linux-cachefs@redhat.com, samba-technical@lists.samba.org,
    intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 52/58] mm: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:50:27 -0700
Message-Id: <20201009195033.3208459-53-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread. To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 mm/memory.c      | 8 ++++----
 mm/swapfile.c    | 4 ++--
 mm/userfaultfd.c | 4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fcfc4ca36eba..75a054882d7a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4945,7 +4945,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 			if (bytes > PAGE_SIZE-offset)
 				bytes = PAGE_SIZE-offset;
 
-			maddr = kmap(page);
+			maddr = kmap_thread(page);
 			if (write) {
 				copy_to_user_page(vma, page, addr,
 						  maddr + offset, buf, bytes);
@@ -4954,7 +4954,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 				copy_from_user_page(vma, page, addr,
 						    buf, maddr + offset, bytes);
 			}
-			kunmap(page);
+			kunmap_thread(page);
 			put_page(page);
 		}
 		len -= bytes;
@@ -5216,14 +5216,14 @@ long copy_huge_page_from_user(struct page *dst_page,
 
 	for (i = 0; i < pages_per_huge_page; i++) {
 		if (allow_pagefault)
-			page_kaddr = kmap(dst_page + i);
+			page_kaddr = kmap_thread(dst_page + i);
 		else
 			page_kaddr = kmap_atomic(dst_page + i);
 		rc = copy_from_user(page_kaddr,
 				(const void __user *)(src + i * PAGE_SIZE),
 				PAGE_SIZE);
 		if (allow_pagefault)
-			kunmap(dst_page + i);
+			kunmap_thread(dst_page + i);
 		else
 			kunmap_atomic(page_kaddr);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index debc94155f74..e3296ff95648 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3219,7 +3219,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		error = PTR_ERR(page);
 		goto bad_swap_unlock_inode;
 	}
-	swap_header = kmap(page);
+	swap_header = kmap_thread(page);
 
 	maxpages = read_swap_header(p, swap_header, inode);
 	if (unlikely(!maxpages)) {
@@ -3395,7 +3395,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		filp_close(swap_file, NULL);
 out:
 	if (page && !IS_ERR(page)) {
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 	}
 	if (name)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9a3d451402d7..4d38c881bb2d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -586,11 +586,11 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!page);
 
-			page_kaddr = kmap(page);
+			page_kaddr = kmap_thread(page);
 			err = copy_from_user(page_kaddr,
 					     (const void __user *) src_addr,
 					     PAGE_SIZE);
-			kunmap(page);
+			kunmap_thread(page);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
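
For readers not following the whole series: the conversion above is
purely mechanical. A minimal sketch of the pattern being targeted is
below. It is illustrative only and not part of the patch;
kmap_thread()/kunmap_thread() are introduced earlier in this series
and are not an upstream API, and the helper function name here is
hypothetical.

/*
 * Illustrative sketch only (not part of this patch).
 * Assumes the kmap_thread()/kunmap_thread() API from earlier patches
 * in this series; copy_to_page_thread_local() is a made-up example.
 */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Copy @len bytes from @src into @page at @offset.  The mapping is
 * created and destroyed entirely within the calling thread, which is
 * what makes the thread-local kmap_thread() variant applicable.
 */
static int copy_to_page_thread_local(struct page *page, size_t offset,
				     const void *src, size_t len)
{
	void *kaddr;

	if (offset + len > PAGE_SIZE)
		return -EINVAL;

	kaddr = kmap_thread(page);	/* thread-local mapping */
	memcpy(kaddr + offset, src, len);
	kunmap_thread(page);		/* pairs with kmap_thread() */

	return 0;
}

Because the mapping never escapes the calling thread, the protection
key update can remain local to that thread rather than being applied
globally, which is the overhead the commit message refers to.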