From patchwork Tue Dec 10 04:47:11 2019
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
From: Daniel Axtens <dja@axtens.net>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
	linux-xtensa@linux-xtensa.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kasan-dev@googlegroups.com,
	christophe.leroy@c-s.fr, aneesh.kumar@linux.ibm.com,
	bsingharora@gmail.com
Cc: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v2 1/4] mm: define MAX_PTRS_PER_{PTE,PMD,PUD}
Date: Tue, 10 Dec 2019 15:47:11 +1100
Message-Id: <20191210044714.27265-2-dja@axtens.net>
In-Reply-To: <20191210044714.27265-1-dja@axtens.net>
References: <20191210044714.27265-1-dja@axtens.net>

powerpc has boot-time configurable PTRS_PER_PTE, PMD and PUD. The
values are selected based on the MMU under which the kernel is booted.
This is much like how 4 vs 5-level paging on x86_64 leads to boot-time
configurable PTRS_PER_P4D.

So far, this hasn't leaked out of arch/powerpc. But with KASAN, we have
static arrays based on PTRS_PER_*, so powerpc support must provide
constant upper bounds for generic code. Define
MAX_PTRS_PER_{PTE,PMD,PUD} for this purpose.

I have configured these constants:
 - in asm-generic headers
 - on arches that implement KASAN: x86, s390, arm64, xtensa and powerpc

I haven't wired up any other arches just yet - there is no user of the
constants outside of the KASAN code I add in the next patch, so missing
the constants on arches that don't support KASAN shouldn't break
anything.

Suggested-by: Christophe Leroy
Signed-off-by: Daniel Axtens
Reported-by: kbuild test robot
---
 arch/arm64/include/asm/pgtable-hwdef.h       | 3 +++
 arch/powerpc/include/asm/book3s/64/hash.h    | 4 ++++
 arch/powerpc/include/asm/book3s/64/pgtable.h | 7 +++++++
 arch/powerpc/include/asm/book3s/64/radix.h   | 5 +++++
 arch/s390/include/asm/pgtable.h              | 3 +++
 arch/x86/include/asm/pgtable_types.h         | 5 +++++
 arch/xtensa/include/asm/pgtable.h            | 1 +
 include/asm-generic/pgtable-nop4d-hack.h     | 9 +++++----
 include/asm-generic/pgtable-nopmd.h          | 9 +++++----
 include/asm-generic/pgtable-nopud.h          | 9 +++++----
 10 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index d9fbd433cc17..485e1f3c5c6f 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -41,6 +41,7 @@
 #define ARM64_HW_PGTABLE_LEVEL_SHIFT(n)	((PAGE_SHIFT - 3) * (4 - (n)) + 3)
 
 #define PTRS_PER_PTE		(1 << (PAGE_SHIFT - 3))
+#define MAX_PTRS_PER_PTE	PTRS_PER_PTE
 
 /*
  * PMD_SHIFT determines the size a level 2 page table entry can map.
@@ -50,6 +51,7 @@
 #define PMD_SIZE		(_AC(1, UL) << PMD_SHIFT)
 #define PMD_MASK		(~(PMD_SIZE-1))
 #define PTRS_PER_PMD		PTRS_PER_PTE
+#define MAX_PTRS_PER_PMD	PTRS_PER_PMD
 #endif
 
 /*
@@ -60,6 +62,7 @@
 #define PUD_SIZE		(_AC(1, UL) << PUD_SHIFT)
 #define PUD_MASK		(~(PUD_SIZE-1))
 #define PTRS_PER_PUD		PTRS_PER_PTE
+#define MAX_PTRS_PER_PUD	PTRS_PER_PUD
 #endif
 
 /*
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 2781ebf6add4..fce329b8452e 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -18,6 +18,10 @@
 #include
 #endif
 
+#define H_PTRS_PER_PTE	(1 << H_PTE_INDEX_SIZE)
+#define H_PTRS_PER_PMD	(1 << H_PMD_INDEX_SIZE)
+#define H_PTRS_PER_PUD	(1 << H_PUD_INDEX_SIZE)
+
 /* Bits to set in a PMD/PUD/PGD entry valid bit*/
 #define HASH_PMD_VAL_BITS	(0x8000000000000000UL)
 #define HASH_PUD_VAL_BITS	(0x8000000000000000UL)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index b01624e5c467..209817235a44 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -231,6 +231,13 @@ extern unsigned long __pmd_frag_size_shift;
 #define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
 #define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
 
+#define MAX_PTRS_PER_PTE ((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? \
+			  H_PTRS_PER_PTE : R_PTRS_PER_PTE)
+#define MAX_PTRS_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? \
+			  H_PTRS_PER_PMD : R_PTRS_PER_PMD)
+#define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? \
+			  H_PTRS_PER_PUD : R_PTRS_PER_PUD)
+
 /* PMD_SHIFT determines what a second-level page table entry can map */
 #define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
 #define PMD_SIZE	(1UL << PMD_SHIFT)
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index d97db3ad9aae..4f826259de71 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -35,6 +35,11 @@
 #define RADIX_PMD_SHIFT	(PAGE_SHIFT + RADIX_PTE_INDEX_SIZE)
 #define RADIX_PUD_SHIFT	(RADIX_PMD_SHIFT + RADIX_PMD_INDEX_SIZE)
 #define RADIX_PGD_SHIFT	(RADIX_PUD_SHIFT + RADIX_PUD_INDEX_SIZE)
+
+#define R_PTRS_PER_PTE	(1 << RADIX_PTE_INDEX_SIZE)
+#define R_PTRS_PER_PMD	(1 << RADIX_PMD_INDEX_SIZE)
+#define R_PTRS_PER_PUD	(1 << RADIX_PUD_INDEX_SIZE)
+
 /*
  * Size of EA range mapped by our pagetables.
  */
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 7b03037a8475..3b491ce52ed2 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -342,6 +342,9 @@ static inline int is_module_addr(void *addr)
 #define PTRS_PER_PGD	_CRST_ENTRIES
 
 #define MAX_PTRS_PER_P4D	PTRS_PER_P4D
+#define MAX_PTRS_PER_PUD	PTRS_PER_PUD
+#define MAX_PTRS_PER_PMD	PTRS_PER_PMD
+#define MAX_PTRS_PER_PTE	PTRS_PER_PTE
 
 /*
  * Segment table and region3 table entry encoding
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index ea7400726d7a..82d523db133b 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -257,6 +257,11 @@ enum page_cache_mode {
 # include
 #endif
 
+/* There is no runtime switching of these sizes */
+#define MAX_PTRS_PER_PUD	PTRS_PER_PUD
+#define MAX_PTRS_PER_PMD	PTRS_PER_PMD
+#define MAX_PTRS_PER_PTE	PTRS_PER_PTE
+
 #ifndef __ASSEMBLY__
 
 #include
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 27ac17c9da09..5d6aa16ceae6 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -55,6 +55,7 @@
  * we don't really have any PMD directory physically.
  */
 #define PTRS_PER_PTE		1024
+#define MAX_PTRS_PER_PTE	1024
 #define PTRS_PER_PTE_SHIFT	10
 #define PTRS_PER_PGD		1024
 #define PGD_ORDER		0
diff --git a/include/asm-generic/pgtable-nop4d-hack.h b/include/asm-generic/pgtable-nop4d-hack.h
index 829bdb0d6327..6faa23f9e0b4 100644
--- a/include/asm-generic/pgtable-nop4d-hack.h
+++ b/include/asm-generic/pgtable-nop4d-hack.h
@@ -14,10 +14,11 @@
  */
 typedef struct { pgd_t pgd; } pud_t;
 
-#define PUD_SHIFT	PGDIR_SHIFT
-#define PTRS_PER_PUD	1
-#define PUD_SIZE	(1UL << PUD_SHIFT)
-#define PUD_MASK	(~(PUD_SIZE-1))
+#define PUD_SHIFT		PGDIR_SHIFT
+#define MAX_PTRS_PER_PUD	1
+#define PTRS_PER_PUD		1
+#define PUD_SIZE		(1UL << PUD_SHIFT)
+#define PUD_MASK		(~(PUD_SIZE-1))
 
 /*
  * The "pgd_xxx()" functions here are trivial for a folded two-level
diff --git a/include/asm-generic/pgtable-nopmd.h b/include/asm-generic/pgtable-nopmd.h
index 0d9b28cba16d..4a860f47f3e6 100644
--- a/include/asm-generic/pgtable-nopmd.h
+++ b/include/asm-generic/pgtable-nopmd.h
@@ -17,10 +17,11 @@ struct mm_struct;
  */
 typedef struct { pud_t pud; } pmd_t;
 
-#define PMD_SHIFT	PUD_SHIFT
-#define PTRS_PER_PMD	1
-#define PMD_SIZE	(1UL << PMD_SHIFT)
-#define PMD_MASK	(~(PMD_SIZE-1))
+#define PMD_SHIFT		PUD_SHIFT
+#define MAX_PTRS_PER_PMD	1
+#define PTRS_PER_PMD		1
+#define PMD_SIZE		(1UL << PMD_SHIFT)
+#define PMD_MASK		(~(PMD_SIZE-1))
 
 /*
  * The "pud_xxx()" functions here are trivial for a folded two-level
diff --git a/include/asm-generic/pgtable-nopud.h b/include/asm-generic/pgtable-nopud.h
index d3776cb494c0..1aef1b18edbc 100644
--- a/include/asm-generic/pgtable-nopud.h
+++ b/include/asm-generic/pgtable-nopud.h
@@ -18,10 +18,11 @@
  */
 typedef struct { p4d_t p4d; } pud_t;
 
-#define PUD_SHIFT	P4D_SHIFT
-#define PTRS_PER_PUD	1
-#define PUD_SIZE	(1UL << PUD_SHIFT)
-#define PUD_MASK	(~(PUD_SIZE-1))
+#define PUD_SHIFT		P4D_SHIFT
+#define MAX_PTRS_PER_PUD	1
+#define PTRS_PER_PUD		1
+#define PUD_SIZE		(1UL << PUD_SHIFT)
+#define PUD_MASK		(~(PUD_SIZE-1))
 
 /*
  * The "p4d_xxx()" functions here are trivial for a folded two-level

From patchwork Tue Dec 10 04:47:12 2019
From: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v2 2/4] kasan: use MAX_PTRS_PER_* for early shadow
Date: Tue, 10 Dec 2019 15:47:12 +1100
Message-Id: <20191210044714.27265-3-dja@axtens.net>
In-Reply-To: <20191210044714.27265-1-dja@axtens.net>

This helps with powerpc support, and should have no effect on anything
else.
Suggested-by: Christophe Leroy
Signed-off-by: Daniel Axtens
---
 include/linux/kasan.h | 6 +++---
 mm/kasan/init.c       | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index e18fe54969e9..d2f2a4ffcb12 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -15,9 +15,9 @@ struct task_struct;
 #include
 
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
-extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
-extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
-extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
+extern pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE];
+extern pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD];
+extern pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD];
 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 
 int kasan_populate_early_shadow(const void *shadow_start,
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ce45c491ebcd..8b54a96d3b3e 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -46,7 +46,7 @@ static inline bool kasan_p4d_table(pgd_t pgd)
 }
 #endif
 #if CONFIG_PGTABLE_LEVELS > 3
-pud_t kasan_early_shadow_pud[PTRS_PER_PUD] __page_aligned_bss;
+pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD] __page_aligned_bss;
 static inline bool kasan_pud_table(p4d_t p4d)
 {
 	return p4d_page(p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud));
@@ -58,7 +58,7 @@ static inline bool kasan_pud_table(p4d_t p4d)
 }
 #endif
 #if CONFIG_PGTABLE_LEVELS > 2
-pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD] __page_aligned_bss;
+pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD] __page_aligned_bss;
 static inline bool kasan_pmd_table(pud_t pud)
 {
 	return pud_page(pud) == virt_to_page(lm_alias(kasan_early_shadow_pmd));
@@ -69,7 +69,7 @@ static inline bool kasan_pmd_table(pud_t pud)
 	return false;
 }
 #endif
-pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;
+pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE] __page_aligned_bss;
 static inline bool kasan_pte_table(pmd_t pmd)
 {

From patchwork Tue Dec 10 04:47:13 2019
From: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v2 3/4] kasan: Document support on 32-bit powerpc
Date: Tue, 10 Dec 2019 15:47:13 +1100
Message-Id: <20191210044714.27265-4-dja@axtens.net>
In-Reply-To: <20191210044714.27265-1-dja@axtens.net>

KASAN is supported on 32-bit powerpc and the docs should reflect this.
Suggested-by: Christophe Leroy
Signed-off-by: Daniel Axtens
Reviewed-by: Christophe Leroy
---
 Documentation/dev-tools/kasan.rst | 3 ++-
 Documentation/powerpc/kasan.txt   | 12 ++++++++++++
 2 files changed, 14 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/powerpc/kasan.txt

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index e4d66e7c50de..4af2b5d2c9b4 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -22,7 +22,8 @@ global variables yet.
 Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
 
 Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
-architectures, and tag-based KASAN is supported only for arm64.
+architectures. It is also supported on 32-bit powerpc kernels. Tag-based KASAN
+is supported only on arm64.
 
 Usage
 -----
diff --git a/Documentation/powerpc/kasan.txt b/Documentation/powerpc/kasan.txt
new file mode 100644
index 000000000000..a85ce2ff8244
--- /dev/null
+++ b/Documentation/powerpc/kasan.txt
@@ -0,0 +1,12 @@
+KASAN is supported on powerpc on 32-bit only.
+
+32 bit support
+==============
+
+KASAN is supported on both hash and nohash MMUs on 32-bit.
+
+The shadow area sits at the top of the kernel virtual memory space above the
+fixmap area and occupies one eighth of the total kernel virtual memory space.
+
+Instrumentation of the vmalloc area is not currently supported, but modules
+are.
From patchwork Tue Dec 10 04:47:14 2019
From: Daniel Axtens <dja@axtens.net>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org,
    linux-xtensa@linux-xtensa.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kasan-dev@googlegroups.com,
    christophe.leroy@c-s.fr, aneesh.kumar@linux.ibm.com,
    bsingharora@gmail.com
Cc: Daniel Axtens <dja@axtens.net>
Subject: [PATCH v2 4/4] powerpc: Book3S 64-bit "heavyweight" KASAN support
Date: Tue, 10 Dec 2019 15:47:14 +1100
Message-Id: <20191210044714.27265-5-dja@axtens.net>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191210044714.27265-1-dja@axtens.net>
References: <20191210044714.27265-1-dja@axtens.net>

KASAN support on powerpc64 is challenging:

 - We want to be able to support inline instrumentation so as to be able
   to catch global and stack issues.
 - We run some code in real mode after boot, most notably a lot of KVM
   code. We'd like to be able to instrument this.

   [For those not immersed in ppc64, in real mode, the top nibble or 2
   bits (depending on radix/hash mmu) of the address is ignored. The
   linear mapping is placed at 0xc000000000000000. This means that a
   pointer to part of the linear mapping will work both in real mode,
   where it will be interpreted as a physical address of the form
   0x000..., and out of real mode, where it will go via the linear
   mapping.]

 - Inline instrumentation requires a fixed offset.

 - Because of our running things in real mode, the offset has to point
   to valid memory both in and out of real mode.

This makes finding somewhere to put the KASAN shadow region challenging.

One approach is just to give up on inline instrumentation and override
the address->shadow calculation. This way we can delay all checking
until after we get everything set up to our satisfaction. However, we'd
really like to do better.

What we can do - if we know _at compile time_ how much contiguous
physical memory we have - is to set aside the top 1/8th of the memory
and use that. This is a big hammer (hence the "heavyweight" name) and
comes with 3 big consequences:

 - kernels will simply fail to boot on machines with less memory than
   specified when compiling.

 - kernels running on machines with more memory than specified when
   compiling will simply ignore the extra memory.

 - there's no nice way to handle physically discontiguous memory, so
   you are restricted to the first physical memory block.

If you can bear all this, you get full support for KASAN. Despite the
limitations, it can still find bugs, e.g.
http://patchwork.ozlabs.org/patch/1103775/

The current implementation is Radix only.

Massive thanks to mpe, who had the idea for the initial design.

Signed-off-by: Daniel Axtens
Reported-by: kbuild test robot
---

Changes since v1:
 - Landed kasan vmalloc support upstream
 - Lots of feedback from Christophe.
Changes since the rfc:
 - Boots real and virtual hardware, kvm works.
 - disabled reporting when we're checking the stack for exception
   frames. The behaviour isn't wrong, just incompatible with KASAN.
 - Documentation!
 - Dropped old module stuff in favour of KASAN_VMALLOC. The bugs with
   ftrace and kuap were due to kernel bloat pushing prom_init calls to
   be done via the plt. Because we did not have a relocatable kernel,
   and they are done very early, this caused everything to explode.
   Compile with CONFIG_RELOCATABLE!
---
 Documentation/dev-tools/kasan.rst             |   8 +-
 Documentation/powerpc/kasan.txt               | 102 +++++++++++++++++-
 arch/powerpc/Kconfig                          |   3 +
 arch/powerpc/Kconfig.debug                    |  21 ++++
 arch/powerpc/Makefile                         |  11 ++
 arch/powerpc/include/asm/kasan.h              |  20 +++-
 arch/powerpc/kernel/process.c                 |   8 ++
 arch/powerpc/kernel/prom.c                    |  59 +++++++++-
 arch/powerpc/mm/kasan/Makefile                |   3 +-
 .../mm/kasan/{kasan_init_32.c => init_32.c}   |   0
 arch/powerpc/mm/kasan/init_book3s_64.c        |  67 ++++++++++++
 11 files changed, 293 insertions(+), 9 deletions(-)
 rename arch/powerpc/mm/kasan/{kasan_init_32.c => init_32.c} (100%)
 create mode 100644 arch/powerpc/mm/kasan/init_book3s_64.c

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 4af2b5d2c9b4..d99dc580bc11 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -22,8 +22,9 @@ global variables yet.
 Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
 
 Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
-architectures. It is also supported on 32-bit powerpc kernels. Tag-based KASAN
-is supported only on arm64.
+architectures. It is also supported on powerpc, for 32-bit kernels, and for
+64-bit kernels running under the Radix MMU. Tag-based KASAN is supported only
+on arm64.
 
 Usage
 -----
@@ -256,7 +257,8 @@ CONFIG_KASAN_VMALLOC
 ~~~~~~~~~~~~~~~~~~~~
 
 With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the
-cost of greater memory usage. Currently this is only supported on x86.
+cost of greater memory usage. Currently this is optional on x86, and
+required on 64-bit powerpc.
 
 This works by hooking into vmalloc and vmap, and dynamically
 allocating real shadow memory to back the mappings.
diff --git a/Documentation/powerpc/kasan.txt b/Documentation/powerpc/kasan.txt
index a85ce2ff8244..d6e7a415195c 100644
--- a/Documentation/powerpc/kasan.txt
+++ b/Documentation/powerpc/kasan.txt
@@ -1,4 +1,4 @@
-KASAN is supported on powerpc on 32-bit only.
+KASAN is supported on powerpc on 32-bit and 64-bit Radix only.
 
 32 bit support
 ==============
@@ -10,3 +10,103 @@ fixmap area and occupies one eighth of the total kernel virtual memory space.
 
 Instrumentation of the vmalloc area is not currently supported, but modules
 are.
+
+64 bit support
+==============
+
+Currently, only the radix MMU is supported. There have been versions for
+Book3E processors floating around on the mailing list, but nothing has been
+merged.
+
+KASAN support on Book3S is a bit tricky to get right:
+
+ - We want to be able to support inline instrumentation so as to be able to
+   catch global and stack issues.
+
+ - Inline instrumentation requires a fixed offset.
+
+ - We run a lot of code in real mode. Most notably a lot of KVM runs in real
+   mode, and we'd like to be able to instrument it.
+
+ - Because we run code in real mode after boot, the offset has to point to
+   valid memory both in and out of real mode.
+
+One approach is just to give up on inline instrumentation. This way we can
+delay all checks until after we get everything set up correctly. However,
+we'd really like to do better.
+
+If we know _at compile time_ how much contiguous physical memory we have, we
+can set aside the top 1/8th of the first block of physical memory and use
+that.
+This is a big hammer and comes with 3 big consequences:
+
+ - there's no nice way to handle physically discontiguous memory, so
+   you are restricted to the first physical memory block.
+
+ - kernels will simply fail to boot on machines with less memory than
+   specified when compiling.
+
+ - kernels running on machines with more memory than specified when compiling
+   will simply ignore the extra memory.
+
+If you can live with this, you get full support for KASAN.
+
+Tips
+----
+
+ - Compile with CONFIG_RELOCATABLE.
+
+   In development, we found boot hangs when building with ftrace and KUAP
+   on. These ended up being due to kernel bloat pushing prom_init calls to be
+   done via the PLT. Because we did not have a relocatable kernel, and they
+   are done very early, this caused us to jump off into somewhere invalid.
+   Enabling relocation fixes this.
+
+NUMA/discontiguous physical memory
+----------------------------------
+
+We currently cannot really deal with discontiguous physical memory. You are
+restricted to the physical memory that is contiguous from physical address
+zero, and must specify the size of that memory, not total memory, when
+configuring your kernel.
+
+Discontiguous memory can occur when you have a machine with memory spread
+across multiple nodes. For example, on a Talos II with 64GB of RAM:
+
+ - 32GB runs from 0x0 to 0x0000_0008_0000_0000,
+ - then there's a gap,
+ - then the final 32GB runs from 0x0000_2000_0000_0000 to
+   0x0000_2008_0000_0000.
+
+This can create _significant_ issues:
+
+ - If we try to treat the machine as having 64GB of _contiguous_ RAM, we
+   would assume that it ran from 0x0 to 0x0000_0010_0000_0000. We'd then
+   reserve the last 1/8th - 0x0000_000e_0000_0000 to 0x0000_0010_0000_0000 -
+   as the shadow region. But when we try to access any of that, we'll try to
+   access pages that are not physically present.
+
+ - If we try to base the shadow region size on the top address, we'll need
+   to reserve 0x2008_0000_0000 / 8 = 0x0401_0000_0000 bytes = 4100 GB of
+   memory, which will clearly not work on a system with 64GB of RAM.
+
+Therefore, you are restricted to the memory in the node starting at 0x0. For
+this system, that's 32GB. If you specify a contiguous physical memory size
+greater than the size of the first contiguous region of memory, the system
+will be unable to boot or even print an error message warning you.
+
+You can determine the layout of your system's memory by observing the
+messages that the Radix MMU prints on boot. The Talos II discussed earlier
+has:
+
+radix-mmu: Mapped 0x0000000000000000-0x0000000040000000 with 1.00 GiB pages (exec)
+radix-mmu: Mapped 0x0000000040000000-0x0000000800000000 with 1.00 GiB pages
+radix-mmu: Mapped 0x0000200000000000-0x0000200800000000 with 1.00 GiB pages
+
+As discussed, you'd configure this system for 32768 MB.
+
+Another system prints:
+
+radix-mmu: Mapped 0x0000000000000000-0x0000000040000000 with 1.00 GiB pages (exec)
+radix-mmu: Mapped 0x0000000040000000-0x0000002000000000 with 1.00 GiB pages
+radix-mmu: Mapped 0x0000200000000000-0x0000202000000000 with 1.00 GiB pages
+
+This machine has more memory: 0x0000_0040_0000_0000 total, but only
+0x0000_0020_0000_0000 is physically contiguous from zero, so we'd configure
+the kernel for 131072 MB of physically contiguous memory.
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1ec34e16ed65..f68650f14e61 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -173,6 +173,9 @@ config PPC
 	select HAVE_ARCH_HUGE_VMAP		if PPC_BOOK3S_64 && PPC_RADIX_MMU
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32
+	select HAVE_ARCH_KASAN			if PPC_BOOK3S_64 && PPC_RADIX_MMU
+	select HAVE_ARCH_KASAN_VMALLOC		if PPC_BOOK3S_64
+	select KASAN_VMALLOC			if KASAN && PPC_BOOK3S_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 4e1d39847462..90bb48455cb8 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -394,6 +394,27 @@ config PPC_FAST_ENDIAN_SWITCH
 	help
 	  If you're unsure what this is, say N.
 
+config PHYS_MEM_SIZE_FOR_KASAN
+	int "Contiguous physical memory size for KASAN (MB)" if KASAN && PPC_BOOK3S_64
+	default 0
+	help
+	  To get inline instrumentation support for KASAN on 64-bit Book3S
+	  machines, you need to know how much contiguous physical memory your
+	  system has. A shadow offset will be calculated based on this figure,
+	  which will be compiled in to the kernel. KASAN will use this offset
+	  to access its shadow region, which is used to verify memory accesses.
+
+	  If you attempt to boot on a system with less memory than you specify
+	  here, your system will fail to boot very early in the process. If
+	  you boot on a system with more memory than you specify, the extra
+	  memory will be wasted - it will be reserved and not used.
+
+	  For systems with discontiguous blocks of physical memory, specify
+	  the size of the block starting at 0x0. You can determine this by
+	  looking at the memory layout info printed to dmesg by the radix MMU
+	  code early in boot. See Documentation/powerpc/kasan.txt.
+
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index f35730548e42..eff693527462 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -230,6 +230,17 @@ ifdef CONFIG_476FPE_ERR46
 	-T $(srctree)/arch/powerpc/platforms/44x/ppc476_modules.lds
 endif
 
+ifdef CONFIG_PPC_BOOK3S_64
+# The KASAN shadow offset is such that the linear map (0xc000...) is shadowed
+# by the last 8th of linearly mapped physical memory. This way, if the code
+# uses 0xc addresses throughout, accesses work both in real mode (where the
+# top 2 bits are ignored) and outside of real mode.
+#
+# 0xc000000000000000 * 7/8 = 0xa800000000000000 = 12105675798371893248
+KASAN_SHADOW_OFFSET = $(shell echo 7 \* 1024 \* 1024 \* $(CONFIG_PHYS_MEM_SIZE_FOR_KASAN) / 8 + 12105675798371893248 | bc)
+KBUILD_CFLAGS += -DKASAN_SHADOW_OFFSET=$(KASAN_SHADOW_OFFSET)UL
+endif
+
 # No AltiVec or VSX instructions when building kernel
 KBUILD_CFLAGS += $(call cc-option,-mno-altivec)
 KBUILD_CFLAGS += $(call cc-option,-mno-vsx)
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index 296e51c2f066..98d995bc9b5e 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -14,13 +14,20 @@
 
 #ifndef __ASSEMBLY__
 
-#include
+#ifdef CONFIG_KASAN
+void kasan_init(void);
+#else
+static inline void kasan_init(void) { }
+#endif
 
 #define KASAN_SHADOW_SCALE_SHIFT	3
 
 #define KASAN_SHADOW_START	(KASAN_SHADOW_OFFSET + \
 				 (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT))
 
+#ifdef CONFIG_PPC32
+#include
+
 #define KASAN_SHADOW_OFFSET	ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
 
 #define KASAN_SHADOW_END	0UL
 
@@ -30,11 +37,18 @@
 #ifdef CONFIG_KASAN
 void kasan_early_init(void);
 void kasan_mmu_init(void);
-void kasan_init(void);
 #else
-static inline void kasan_init(void) { }
 static inline void kasan_mmu_init(void) { }
 #endif
+#endif
+
+#ifdef CONFIG_PPC_BOOK3S_64
+#include
+
+#define KASAN_SHADOW_SIZE	((u64)CONFIG_PHYS_MEM_SIZE_FOR_KASAN * \
+				 1024 * 1024 * 1 / 8)
+
+#endif /* CONFIG_PPC_BOOK3S_64 */
 
 #endif /* __ASSEMBLY */
 #endif
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 4df94b6e2f32..c60ff299f39b 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2081,7 +2081,14 @@ void show_stack(struct task_struct *tsk, unsigned long *stack)
 		/*
 		 * See if this is an exception frame.
 		 * We look for the "regshere" marker in the current frame.
+		 *
+		 * KASAN may complain about this. If it is an exception frame,
+		 * we won't have unpoisoned the stack in asm when we set the
+		 * exception marker. If it's not an exception frame, who knows
+		 * how things are laid out - the shadow could be in any state
+		 * at all. Just disable KASAN reporting for now.
 		 */
+		kasan_disable_current();
 		if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE)
 		    && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
 			struct pt_regs *regs = (struct pt_regs *)
@@ -2091,6 +2098,7 @@ void show_stack(struct task_struct *tsk, unsigned long *stack)
 			       regs->trap, (void *)regs->nip, (void *)lr);
 			firstframe = 1;
 		}
+		kasan_enable_current();
 
 		sp = newsp;
 	} while (count++ < kstack_depth_to_print);
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 6620f37abe73..b32036f61cad 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -72,6 +72,7 @@ unsigned long tce_alloc_start, tce_alloc_end;
 u64 ppc64_rma_size;
 #endif
 static phys_addr_t first_memblock_size;
+static phys_addr_t top_phys_addr;
 static int __initdata boot_cpu_count;
 
 static int __init early_parse_mem(char *p)
@@ -449,6 +450,21 @@ static bool validate_mem_limit(u64 base, u64 *size)
 {
 	u64 max_mem = 1UL << (MAX_PHYSMEM_BITS);
 
+#ifdef CONFIG_KASAN
+	/*
+	 * To handle the NUMA/discontiguous memory case, don't allow a block
+	 * to be added if it falls completely beyond the configured physical
+	 * memory.
+	 *
+	 * See Documentation/powerpc/kasan.txt
+	 */
+	if (base >= (u64)CONFIG_PHYS_MEM_SIZE_FOR_KASAN * 1024 * 1024) {
+		pr_warn("KASAN: not adding mem block at %llx (size %llx)",
+			base, *size);
+		return false;
+	}
+#endif
+
 	if (base >= max_mem)
 		return false;
 	if ((base + *size) > max_mem)
@@ -572,8 +588,11 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 
 	/* Add the chunk to the MEMBLOCK list */
 	if (add_mem_to_memblock) {
-		if (validate_mem_limit(base, &size))
+		if (validate_mem_limit(base, &size)) {
 			memblock_add(base, size);
+			if (base + size > top_phys_addr)
+				top_phys_addr = base + size;
+		}
 	}
 }
 
@@ -613,6 +632,8 @@ static void __init early_reserve_mem_dt(void)
 static void __init early_reserve_mem(void)
 {
 	__be64 *reserve_map;
+	phys_addr_t kasan_shadow_start;
+	phys_addr_t kasan_memory_size;
 
 	reserve_map = (__be64 *)(((unsigned long)initial_boot_params) +
 			fdt_off_mem_rsvmap(initial_boot_params));
@@ -651,6 +672,42 @@ static void __init early_reserve_mem(void)
 		return;
 	}
 #endif
+
+	if (IS_ENABLED(CONFIG_KASAN) && IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {
+		kasan_memory_size =
+			((phys_addr_t)CONFIG_PHYS_MEM_SIZE_FOR_KASAN << 20);
+
+		if (top_phys_addr < kasan_memory_size) {
+			/*
+			 * We are doomed. Attempts to call e.g. panic() are
+			 * likely to fail because they call out into
+			 * instrumented code, which will almost certainly
+			 * access memory beyond the end of physical
+			 * memory. Hang here so that at least the NIP points
+			 * somewhere that will help you debug it if you look
+			 * at it in qemu.
+			 */
+			while (true)
+				;
+		} else if (top_phys_addr > kasan_memory_size) {
+			/* print a biiiig warning in hopes people notice */
+			pr_err("===========================================\n"
+				"Physical memory exceeds compiled-in maximum!\n"
+				"This kernel was compiled for KASAN with %u MB physical memory.\n"
+				"The actual physical memory detected is %llu MB.\n"
+				"Memory above the compiled limit will not be used!\n"
+				"===========================================\n",
+				CONFIG_PHYS_MEM_SIZE_FOR_KASAN,
+				top_phys_addr / (1024 * 1024));
+		}
+
+		kasan_shadow_start = _ALIGN_DOWN(kasan_memory_size * 7 / 8,
+						 PAGE_SIZE);
+		DBG("reserving %llx -> %llx for KASAN",
+		    kasan_shadow_start, top_phys_addr);
+		memblock_reserve(kasan_shadow_start,
+				 top_phys_addr - kasan_shadow_start);
+	}
 }
 
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile
index 6577897673dd..f02b15c78e4d 100644
--- a/arch/powerpc/mm/kasan/Makefile
+++ b/arch/powerpc/mm/kasan/Makefile
@@ -2,4 +2,5 @@
 
 KASAN_SANITIZE := n
 
-obj-$(CONFIG_PPC32) += kasan_init_32.o
+obj-$(CONFIG_PPC32) += init_32.o
+obj-$(CONFIG_PPC_BOOK3S_64) += init_book3s_64.o
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/init_32.c
similarity index 100%
rename from arch/powerpc/mm/kasan/kasan_init_32.c
rename to arch/powerpc/mm/kasan/init_32.c
diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
new file mode 100644
index 000000000000..43e9252c8bd3
--- /dev/null
+++ b/arch/powerpc/mm/kasan/init_book3s_64.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KASAN for 64-bit Book3S powerpc
+ *
+ * Copyright (C) 2019 IBM Corporation
+ * Author: Daniel Axtens
+ */
+
+#define DISABLE_BRANCH_PROFILING
+
+#include
+#include
+#include
+#include
+
+void __init kasan_init(void)
+{
+	int i;
+	void *k_start = kasan_mem_to_shadow((void *)RADIX_KERN_VIRT_START);
+	void *k_end = kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END);
+
+	pte_t pte = __pte(__pa(kasan_early_shadow_page) |
+			  pgprot_val(PAGE_KERNEL) | _PAGE_PTE);
+
+	if (!early_radix_enabled())
+		panic("KASAN requires radix!");
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
+			     &kasan_early_shadow_pte[i], pte, 0);
+
+	for (i = 0; i < PTRS_PER_PMD; i++)
+		pmd_populate_kernel(&init_mm, &kasan_early_shadow_pmd[i],
+				    kasan_early_shadow_pte);
+
+	for (i = 0; i < PTRS_PER_PUD; i++)
+		pud_populate(&init_mm, &kasan_early_shadow_pud[i],
+			     kasan_early_shadow_pmd);
+
+	memset(kasan_mem_to_shadow((void *)PAGE_OFFSET), KASAN_SHADOW_INIT,
+	       KASAN_SHADOW_SIZE);
+
+	kasan_populate_early_shadow(
+		kasan_mem_to_shadow((void *)RADIX_KERN_VIRT_START),
+		kasan_mem_to_shadow((void *)RADIX_VMALLOC_START));
+
+	/* leave a hole here for vmalloc */
+
+	kasan_populate_early_shadow(
+		kasan_mem_to_shadow((void *)RADIX_VMALLOC_END),
+		kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END));
+
+	flush_tlb_kernel_range((unsigned long)k_start, (unsigned long)k_end);
+
+	/* mark early shadow region as RO and wipe */
+	pte = __pte(__pa(kasan_early_shadow_page) |
+		    pgprot_val(PAGE_KERNEL_RO) | _PAGE_PTE);
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
+			     &kasan_early_shadow_pte[i], pte, 0);
+
+	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+
+	/* Enable error messages */
+	init_task.kasan_depth = 0;
+	pr_info("KASAN init done (64-bit Book3S heavyweight mode)\n");
+}