From patchwork Thu Jun 18 06:43:05 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11611465
From: Christoph Hellwig
To: Andrew Morton
Cc: linux-hyperv@vger.kernel.org, Peter Zijlstra, Catalin Marinas,
    x86@kernel.org, Dexuan Cui, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Jessica Yu, Vitaly Kuznetsov, Will Deacon,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/3] x86/hyperv: allocate the hypercall page with only read and execute bits
Date: Thu, 18 Jun 2020 08:43:05 +0200
Message-Id: <20200618064307.32739-2-hch@lst.de>
In-Reply-To: <20200618064307.32739-1-hch@lst.de>
References: <20200618064307.32739-1-hch@lst.de>

Avoid a W^X violation caused by the fact that PAGE_KERNEL_EXEC includes
the writable bit.  For this, resurrect the removed PAGE_KERNEL_RX
definition, but as PAGE_KERNEL_ROX to match arm64 and powerpc.
Fixes: 78bb17f76edc ("x86/hyperv: use vmalloc_exec for the hypercall page")
Reported-by: Dexuan Cui
Signed-off-by: Christoph Hellwig
Tested-by: Vitaly Kuznetsov
Acked-by: Wei Liu
---
 arch/x86/hyperv/hv_init.c            | 4 +++-
 arch/x86/include/asm/pgtable_types.h | 2 ++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index a54c6a401581dd..2bdc72e6890eca 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -375,7 +375,9 @@ void __init hyperv_init(void)
 	guest_id = generate_guest_id(0, LINUX_VERSION_CODE, 0);
 	wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id);
 
-	hv_hypercall_pg = vmalloc_exec(PAGE_SIZE);
+	hv_hypercall_pg = __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START,
+			VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_ROX,
+			VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, __func__);
 	if (hv_hypercall_pg == NULL) {
 		wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
 		goto remove_cpuhp_state;
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 2da1f95b88d761..816b31c685505f 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -194,6 +194,7 @@ enum page_cache_mode {
 #define _PAGE_TABLE_NOENC	(__PP|__RW|_USR|___A|   0|___D|   0|   0)
 #define _PAGE_TABLE		(__PP|__RW|_USR|___A|   0|___D|   0|   0| _ENC)
 #define __PAGE_KERNEL_RO	(__PP|   0|   0|___A|__NX|___D|   0|___G)
+#define __PAGE_KERNEL_ROX	(__PP|   0|   0|___A|   0|___D|   0|___G)
 #define __PAGE_KERNEL_NOCACHE	(__PP|__RW|   0|___A|__NX|___D|   0|___G| __NC)
 #define __PAGE_KERNEL_VVAR	(__PP|   0|_USR|___A|__NX|___D|   0|___G)
 #define __PAGE_KERNEL_LARGE	(__PP|__RW|   0|___A|__NX|___D|_PSE|___G)
@@ -219,6 +220,7 @@ enum page_cache_mode {
 #define PAGE_KERNEL_RO		__pgprot_mask(__PAGE_KERNEL_RO         | _ENC)
 #define PAGE_KERNEL_EXEC	__pgprot_mask(__PAGE_KERNEL_EXEC       | _ENC)
 #define PAGE_KERNEL_EXEC_NOENC	__pgprot_mask(__PAGE_KERNEL_EXEC       | 0)
+#define PAGE_KERNEL_ROX		__pgprot_mask(__PAGE_KERNEL_ROX        | _ENC)
 #define PAGE_KERNEL_NOCACHE	__pgprot_mask(__PAGE_KERNEL_NOCACHE    | _ENC)
 #define PAGE_KERNEL_LARGE	__pgprot_mask(__PAGE_KERNEL_LARGE      | _ENC)
 #define PAGE_KERNEL_LARGE_EXEC	__pgprot_mask(__PAGE_KERNEL_LARGE_EXEC | _ENC)
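
The hypercall page never needs to be written through this mapping:
hyperv_init() only reports the page's address to the hypervisor via the
HV_X64_MSR_HYPERCALL MSR, and the hypervisor supplies the hypercall
code, so a read-only executable mapping is sufficient from the start.
As a minimal sketch (not part of this patch), the open-coded
__vmalloc_node_range() call could be factored into a small local helper
if more PAGE_KERNEL_ROX allocations show up; the helper name is
hypothetical:

	/* Hypothetical helper, shown for illustration only. */
	static void *vmalloc_rox(unsigned long size)
	{
		/* read-only + executable mapping, permissions reset on free */
		return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
				GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
				NUMA_NO_NODE, __builtin_return_address(0));
	}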
From patchwork Thu Jun 18 06:43:06 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11611467
From: Christoph Hellwig
To: Andrew Morton
Cc: linux-hyperv@vger.kernel.org, Peter Zijlstra, Catalin Marinas,
    x86@kernel.org, Dexuan Cui, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Jessica Yu, Vitaly Kuznetsov, Will Deacon,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/3] arm64: use PAGE_KERNEL_ROX directly in alloc_insn_page
Date: Thu, 18 Jun 2020 08:43:06 +0200
Message-Id: <20200618064307.32739-3-hch@lst.de>
In-Reply-To: <20200618064307.32739-1-hch@lst.de>
References: <20200618064307.32739-1-hch@lst.de>

Use PAGE_KERNEL_ROX directly instead of allocating RWX and setting the
page read-only just after the allocation.
Signed-off-by: Christoph Hellwig
---
 arch/arm64/kernel/probes/kprobes.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index d1c95dcf1d7833..cbe49cd117cfec 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -120,15 +120,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 
 void *alloc_insn_page(void)
 {
-	void *page;
-
-	page = vmalloc_exec(PAGE_SIZE);
-	if (page) {
-		set_memory_ro((unsigned long)page, 1);
-		set_vm_flush_reset_perms(page);
-	}
-
-	return page;
+	return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
+			GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
+			NUMA_NO_NODE, __func__);
 }
 
 /* arm kprobe: install breakpoint in text */
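
Mapping the slot page PAGE_KERNEL_ROX from the start works because the
probed instruction is never written through the vmalloc mapping: the old
code already made the page read-only immediately after allocating it,
before any slot was populated, so the write has to go through the arm64
insn-patching helpers and their temporary writable alias.  A rough
sketch of that write path (illustrative only, assumes <asm/insn.h>; not
part of this patch):

	/* The ROX slot is written via the text-patching API, not directly. */
	static int write_insn_slot(kprobe_opcode_t *slot, u32 insn)
	{
		/* writes through a temporary alias and maintains the caches */
		return aarch64_insn_patch_text_nosync(slot, insn);
	}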
From patchwork Thu Jun 18 06:43:07 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11611469
From: Christoph Hellwig
To: Andrew Morton
Cc: linux-hyperv@vger.kernel.org, Peter Zijlstra, Catalin Marinas,
    x86@kernel.org, Dexuan Cui, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Jessica Yu, Vitaly Kuznetsov, Will Deacon,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3/3] mm: remove vmalloc_exec
Date: Thu, 18 Jun 2020 08:43:07 +0200
Message-Id: <20200618064307.32739-4-hch@lst.de>
In-Reply-To: <20200618064307.32739-1-hch@lst.de>
References: <20200618064307.32739-1-hch@lst.de>

Merge vmalloc_exec into its only caller.  Note that for !CONFIG_MMU
__vmalloc_node_range maps to __vmalloc, which directly clears the
__GFP_HIGHMEM added by the vmalloc_exec stub anyway.

Signed-off-by: Christoph Hellwig
Reviewed-by: David Hildenbrand
---
 include/linux/vmalloc.h |  1 -
 kernel/module.c         |  4 +++-
 mm/nommu.c              | 17 -----------------
 mm/vmalloc.c            | 20 --------------------
 4 files changed, 3 insertions(+), 39 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 48bb681e6c2aed..0221f852a7e1a3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -106,7 +106,6 @@ extern void *vzalloc(unsigned long size);
 extern void *vmalloc_user(unsigned long size);
 extern void *vmalloc_node(unsigned long size, int node);
 extern void *vzalloc_node(unsigned long size, int node);
-extern void *vmalloc_exec(unsigned long size);
 extern void *vmalloc_32(unsigned long size);
 extern void *vmalloc_32_user(unsigned long size);
 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask);
diff --git a/kernel/module.c b/kernel/module.c
index e8a198588f26ee..0c6573b98c3662 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2783,7 +2783,9 @@ static void dynamic_debug_remove(struct module *mod, struct _ddebug *debug)
 
 void * __weak module_alloc(unsigned long size)
 {
-	return vmalloc_exec(size);
+	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+			GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
+			NUMA_NO_NODE, __func__);
 }
 
 bool __weak module_init_section(const char *name)
diff --git a/mm/nommu.c b/mm/nommu.c
index cdcad5d61dd194..f32a69095d509e 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -290,23 +290,6 @@ void *vzalloc_node(unsigned long size, int node)
 }
 EXPORT_SYMBOL(vzalloc_node);
 
-/**
- * vmalloc_exec - allocate virtually contiguous, executable memory
- * @size: allocation size
- *
- * Kernel-internal function to allocate enough pages to cover @size
- * the page level allocator and map them into contiguous and
- * executable kernel virtual space.
- *
- * For tight control over page level allocator and protection flags
- * use __vmalloc() instead.
- */
-
-void *vmalloc_exec(unsigned long size)
-{
-	return __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM);
-}
-
 /**
  * vmalloc_32 - allocate virtually contiguous memory (32bit addressable)
  * @size: allocation size
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3091c2ca60dfd1..f60948aac0d0e4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2696,26 +2696,6 @@ void *vzalloc_node(unsigned long size, int node)
 }
 EXPORT_SYMBOL(vzalloc_node);
 
-/**
- * vmalloc_exec - allocate virtually contiguous, executable memory
- * @size: allocation size
- *
- * Kernel-internal function to allocate enough pages to cover @size
- * the page level allocator and map them into contiguous and
- * executable kernel virtual space.
- *
- * For tight control over page level allocator and protection flags
- * use __vmalloc() instead.
- *
- * Return: pointer to the allocated memory or %NULL on error
- */
-void *vmalloc_exec(unsigned long size)
-{
-	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
-			GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
-			NUMA_NO_NODE, __builtin_return_address(0));
-}
-
 #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
 #define GFP_VMALLOC32	(GFP_DMA32 | GFP_KERNEL)
 #elif defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA)