From patchwork Mon Feb 17 05:03:50 2020
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 11385231
From: Anshuman Khandual
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Anshuman Khandual, Guo Ren, Geert Uytterhoeven, Ralf Baechle,
	Paul Burton, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Yoshinori Sato, Rich Felker, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Ingo Molnar,
	Andrew Morton, Steven Rostedt, Mel Gorman,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org
Subject: [PATCH 2/5] mm/vma: Make vma_is_accessible() available for general use
Date: Mon, 17 Feb 2020 10:33:50 +0530
Message-Id: <1581915833-21984-3-git-send-email-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1581915833-21984-1-git-send-email-anshuman.khandual@arm.com>
References: <1581915833-21984-1-git-send-email-anshuman.khandual@arm.com>

Let's move the vma_is_accessible() helper to include/linux/mm.h, which makes
it available for general use. While here, replace all remaining open-coded
VMA access checks with vma_is_accessible().

Cc: Guo Ren
Cc: Geert Uytterhoeven
Cc: Paul Burton
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Yoshinori Sato
Cc: Rich Felker
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Steven Rostedt
Cc: Mel Gorman
Cc: linux-kernel@vger.kernel.org
Cc: linux-m68k@lists.linux-m68k.org
Cc: linux-mips@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-sh@vger.kernel.org
Cc: linux-mm@kvack.org
Signed-off-by: Anshuman Khandual
Acked-by: Geert Uytterhoeven
Acked-by: Guo Ren
---
 arch/csky/mm/fault.c    | 2 +-
 arch/m68k/mm/fault.c    | 2 +-
 arch/mips/mm/fault.c    | 2 +-
 arch/powerpc/mm/fault.c | 2 +-
 arch/sh/mm/fault.c      | 2 +-
 arch/x86/mm/fault.c     | 2 +-
 include/linux/mm.h      | 5 +++++
 kernel/sched/fair.c     | 2 +-
 mm/gup.c                | 2 +-
 mm/memory.c             | 5 -----
 mm/mempolicy.c          | 3 +--
 11 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index f76618b630f9..4b3511b8298d 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -137,7 +137,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 		if (!(vma->vm_flags & VM_WRITE))
 			goto bad_area;
 	} else {
-		if (!(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
+		if (!vma_is_accessible(vma))
 			goto bad_area;
 	}
 
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index e9b1d7585b43..d5131ec5d923 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -125,7 +125,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	case 1:		/* read, present */
 		goto acc_err;
 	case 0:		/* read, not present */
-		if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+		if (!vma_is_accessible(vma))
 			goto acc_err;
 	}
 
diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index 1e8d00793784..5b9f947bfa32 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -142,7 +142,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 				goto bad_area;
 			}
 		} else {
-			if (!(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)))
+			if (!vma_is_accessible(vma))
 				goto bad_area;
 		}
 	}
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 8db0507619e2..71a3658c516b 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -314,7 +314,7 @@ static bool access_error(bool is_write, bool is_exec,
 		return false;
 	}
 
-	if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+	if (unlikely(!vma_is_accessible(vma)))
 		return true;
 	/*
 	 * We should ideally do the vma pkey access check here. But in the
diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index 5f51456f4fc7..a8c4253f37d7 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -355,7 +355,7 @@ static inline int access_error(int error_code, struct vm_area_struct *vma)
 		return 1;
 
 	/* read, not present: */
-	if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+	if (unlikely(!vma_is_accessible(vma)))
 		return 1;
 
 	return 0;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index fa4ea09593ab..c461eaab0306 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1200,7 +1200,7 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
 		return 1;
 
 	/* read, not present: */
-	if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE))))
+	if (unlikely(!vma_is_accessible(vma)))
 		return 1;
 
 	return 0;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 52269e56c514..b0e53ef13ff1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -541,6 +541,11 @@ static inline bool vma_is_anonymous(struct vm_area_struct *vma)
 	return !vma->vm_ops;
 }
 
+static inline bool vma_is_accessible(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
+}
+
 #ifdef CONFIG_SHMEM
 /*
  * The vma_is_shmem is not inline because it is used only by slow
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fe4e0d775375..6ce54d57dd09 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2573,7 +2573,7 @@ static void task_numa_work(struct callback_head *work)
 		 * Skip inaccessible VMAs to avoid any confusion between
 		 * PROT_NONE and NUMA hinting ptes
 		 */
-		if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
+		if (!vma_is_accessible(vma))
 			continue;
 
 		do {
diff --git a/mm/gup.c b/mm/gup.c
index 1b521e0ac1de..c8ffe2e61f03 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1171,7 +1171,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * We want mlock to succeed for regions that have any permissions
 	 * other than PROT_NONE.
 	 */
-	if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
+	if (vma_is_accessible(vma))
 		gup_flags |= FOLL_FORCE;
 
 	/*
diff --git a/mm/memory.c b/mm/memory.c
index 0bccc622e482..2f07747612b7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3942,11 +3942,6 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
 	return VM_FAULT_FALLBACK;
 }
 
-static inline bool vma_is_accessible(struct vm_area_struct *vma)
-{
-	return vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE);
-}
-
 static vm_fault_t create_huge_pud(struct vm_fault *vmf)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 977c641f78cf..91c1ad6ab8ea 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -649,8 +649,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 
 	if (flags & MPOL_MF_LAZY) {
 		/* Similar to task_numa_work, skip inaccessible VMAs */
-		if (!is_vm_hugetlb_page(vma) &&
-			(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)) &&
+		if (!is_vm_hugetlb_page(vma) && vma_is_accessible(vma) &&
 			!(vma->vm_flags & VM_MIXEDMAP))
 			change_prot_numa(vma, start, endvma);
 		return 1;
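
As an aside, a minimal sketch (not part of the patch) of the call pattern the
move enables once vma_is_accessible() lives in include/linux/mm.h. The caller
my_vma_worth_scanning() is purely hypothetical; only vma_is_accessible() and
the VM_READ/VM_WRITE/VM_EXEC flags come from the patch itself:

	#include <linux/mm.h>

	/* Hypothetical caller: skip mappings userspace cannot touch at all. */
	static bool my_vma_worth_scanning(struct vm_area_struct *vma)
	{
		/*
		 * vma_is_accessible() is true when any of VM_READ, VM_WRITE
		 * or VM_EXEC is set, i.e. the mapping is not PROT_NONE. This
		 * is the same test the fault handlers above used to open code.
		 */
		return vma_is_accessible(vma);
	}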