From patchwork Fri Dec 16 16:21:36 2022
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13075187
X-Patchwork-Delegate: palmer@dabbelt.com
From: Alexandre Ghiti
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Ard Biesheuvel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-efi@vger.kernel.org
Cc: Alexandre Ghiti
Subject: [PATCH 1/6] riscv: Split early and final KASAN population functions
Date: Fri, 16 Dec 2022 17:21:36 +0100
Message-Id: <20221216162141.1701255-2-alexghiti@rivosinc.com>
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>

This is preliminary work that makes the code easier to understand.

Signed-off-by: Alexandre Ghiti
---
 arch/riscv/mm/kasan_init.c | 181 +++++++++++++++++++++++--------------
 1 file changed, 114 insertions(+), 67 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index a22e418dbd82..a7314ffe7d76 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -95,23 +95,13 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
 }
 
 static void __init kasan_populate_pud(pgd_t *pgd,
-				      unsigned long vaddr, unsigned long end,
-				      bool early)
+				      unsigned long vaddr, unsigned long end)
 {
 	phys_addr_t phys_addr;
 	pud_t *pudp, *base_pud;
 	unsigned long next;
 
-	if (early) {
-		/*
-		 * We can't use pgd_page_vaddr here as it would return a linear
-		 * mapping address but it is not mapped yet, but when populating
-		 * early_pg_dir, we need the physical address and when populating
-		 * swapper_pg_dir, we need the kernel virtual address so use
-		 * pt_ops facility.
-		 */
-		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd)));
-	} else if (pgd_none(*pgd)) {
+	if (pgd_none(*pgd)) {
 		base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
 	} else {
 		base_pud = (pud_t *)pgd_page_vaddr(*pgd);
@@ -128,16 +118,10 @@ static void __init kasan_populate_pud(pgd_t *pgd,
 		next = pud_addr_end(vaddr, end);
 
 		if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) &&
 		    (next - vaddr) >= PUD_SIZE) {
-			if (early) {
-				phys_addr = __pa(((uintptr_t)kasan_early_shadow_pmd));
-				set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+			phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
+			if (phys_addr) {
+				set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
 				continue;
-			} else {
-				phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
-				if (phys_addr) {
-					set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
-					continue;
-				}
 			}
 		}
 
@@ -150,32 +134,19 @@ static void __init kasan_populate_pud(pgd_t *pgd,
 	 * it entirely, memblock could allocate a page at a physical address
 	 * where KASAN is not populated yet and then we'd get a page fault.
 	 */
-	if (!early)
-		set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
+	set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
 }
 
 static void __init kasan_populate_p4d(pgd_t *pgd,
-				      unsigned long vaddr, unsigned long end,
-				      bool early)
+				      unsigned long vaddr, unsigned long end)
 {
 	phys_addr_t phys_addr;
 	p4d_t *p4dp, *base_p4d;
 	unsigned long next;
 
-	if (early) {
-		/*
-		 * We can't use pgd_page_vaddr here as it would return a linear
-		 * mapping address but it is not mapped yet, but when populating
-		 * early_pg_dir, we need the physical address and when populating
-		 * swapper_pg_dir, we need the kernel virtual address so use
-		 * pt_ops facility.
-		 */
-		base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgd)));
-	} else {
-		base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-		if (base_p4d == lm_alias(kasan_early_shadow_p4d))
-			base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
-	}
+	base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
+	if (base_p4d == lm_alias(kasan_early_shadow_p4d))
+		base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
 
 	p4dp = base_p4d + p4d_index(vaddr);
 
@@ -183,20 +154,14 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
 		next = p4d_addr_end(vaddr, end);
 
 		if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) &&
 		    (next - vaddr) >= P4D_SIZE) {
-			if (early) {
-				phys_addr = __pa(((uintptr_t)kasan_early_shadow_pud));
-				set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+			phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
+			if (phys_addr) {
+				set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
 				continue;
-			} else {
-				phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
-				if (phys_addr) {
-					set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
-					continue;
-				}
 			}
 		}
 
-		kasan_populate_pud((pgd_t *)p4dp, vaddr, next, early);
+		kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
 	} while (p4dp++, vaddr = next, vaddr != end);
 
 	/*
@@ -205,8 +170,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
 	 * it entirely, memblock could allocate a page at a physical address
 	 * where KASAN is not populated yet and then we'd get a page fault.
 	 */
-	if (!early)
-		set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
+	set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
 }
 
 #define kasan_early_shadow_pgd_next	(pgtable_l5_enabled ?	\
@@ -214,16 +178,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
 				(pgtable_l4_enabled ?	\
 				(uintptr_t)kasan_early_shadow_pud :	\
 				(uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next, early)	\
+#define kasan_populate_pgd_next(pgdp, vaddr, next)	\
 		(pgtable_l5_enabled ?	\
-		kasan_populate_p4d(pgdp, vaddr, next, early) :	\
+		kasan_populate_p4d(pgdp, vaddr, next) :	\
 		(pgtable_l4_enabled ?	\
-			kasan_populate_pud(pgdp, vaddr, next, early) :	\
+			kasan_populate_pud(pgdp, vaddr, next) :	\
 			kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))
 
 static void __init kasan_populate_pgd(pgd_t *pgdp,
-				      unsigned long vaddr, unsigned long end,
-				      bool early)
+				      unsigned long vaddr, unsigned long end)
 {
 	phys_addr_t phys_addr;
 	unsigned long next;
@@ -232,11 +195,7 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
 		next = pgd_addr_end(vaddr, end);
 
 		if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
-			if (early) {
-				phys_addr = __pa((uintptr_t)kasan_early_shadow_pgd_next);
-				set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
-				continue;
-			} else if (pgd_page_vaddr(*pgdp) ==
+			if (pgd_page_vaddr(*pgdp) ==
 				   (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
 				/*
 				 * pgdp can't be none since kasan_early_init
@@ -253,7 +212,95 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
 			}
 		}
 
-		kasan_populate_pgd_next(pgdp, vaddr, next, early);
+		kasan_populate_pgd_next(pgdp, vaddr, next);
+	} while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pud(p4d_t *p4dp,
+					    unsigned long vaddr,
+					    unsigned long end)
+{
+	pud_t *pudp, *base_pud;
+	phys_addr_t phys_addr;
+	unsigned long next;
+
+	if (!pgtable_l4_enabled) {
+		pudp = (pud_t *)p4dp;
+	} else {
+		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+		pudp = base_pud + pud_index(vaddr);
+	}
+
+	do {
+		next = pud_addr_end(vaddr, end);
+
+		if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) &&
+		    (next - vaddr) >= PUD_SIZE) {
+			phys_addr = __pa((uintptr_t)kasan_early_shadow_pmd);
+			set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
+			continue;
+		}
+
+		BUG();
+	} while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_p4d(pgd_t *pgdp,
+					    unsigned long vaddr,
+					    unsigned long end)
+{
+	p4d_t *p4dp, *base_p4d;
+	phys_addr_t phys_addr;
+	unsigned long next;
+
+	/*
+	 * We can't use pgd_page_vaddr here as it would return a linear
+	 * mapping address but it is not mapped yet, but when populating
+	 * early_pg_dir, we need the physical address and when populating
+	 * swapper_pg_dir, we need the kernel virtual address so use
+	 * pt_ops facility.
+	 * Note that this test is then completely equivalent to
+	 * p4dp = p4d_offset(pgdp, vaddr)
+	 */
+	if (!pgtable_l5_enabled) {
+		p4dp = (p4d_t *)pgdp;
+	} else {
+		base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+		p4dp = base_p4d + p4d_index(vaddr);
+	}
+
+	do {
+		next = p4d_addr_end(vaddr, end);
+
+		if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) &&
+		    (next - vaddr) >= P4D_SIZE) {
+			phys_addr = __pa((uintptr_t)kasan_early_shadow_pud);
+			set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
+			continue;
+		}
+
+		kasan_early_populate_pud(p4dp, vaddr, next);
+	} while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_populate_pgd(pgd_t *pgdp,
+					    unsigned long vaddr,
+					    unsigned long end)
+{
+	phys_addr_t phys_addr;
+	unsigned long next;
+
+	do {
+		next = pgd_addr_end(vaddr, end);
+
+		if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+		    (next - vaddr) >= PGDIR_SIZE) {
+			phys_addr = __pa((uintptr_t)kasan_early_shadow_p4d);
+			set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
+			continue;
+		}
+
+		kasan_early_populate_p4d(pgdp, vaddr, next);
 	} while (pgdp++, vaddr = next, vaddr != end);
 }
 
@@ -290,16 +337,16 @@ asmlinkage void __init kasan_early_init(void)
 				   PAGE_TABLE));
 	}
 
-	kasan_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
-			   KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+	kasan_early_populate_pgd(early_pg_dir + pgd_index(KASAN_SHADOW_START),
+				 KASAN_SHADOW_START, KASAN_SHADOW_END);
 
 	local_flush_tlb_all();
 }
 
 void __init kasan_swapper_init(void)
 {
-	kasan_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
-			   KASAN_SHADOW_START, KASAN_SHADOW_END, true);
+	kasan_early_populate_pgd(pgd_offset_k(KASAN_SHADOW_START),
+				 KASAN_SHADOW_START, KASAN_SHADOW_END);
 
 	local_flush_tlb_all();
 }
@@ -309,7 +356,7 @@ static void __init kasan_populate(void *start, void *end)
 	unsigned long vaddr = (unsigned long)start & PAGE_MASK;
 	unsigned long vend = PAGE_ALIGN((unsigned long)end);
 
-	kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend, false);
+	kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend);
 
 	local_flush_tlb_all();
 	memset(start, KASAN_SHADOW_INIT, end - start);
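
For reference, the shadow addresses that both the early and the final population paths fill in come from the usual KASAN address translation. A minimal sketch of that mapping, mirroring the generic kasan_mem_to_shadow() helper from include/linux/kasan.h (illustration only, not part of the patch):

/*
 * Each shadow byte tracks an 8-byte granule of kernel memory:
 *   shadow(addr) = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
 * kasan_early_populate_pgd() maps the whole [KASAN_SHADOW_START,
 * KASAN_SHADOW_END) range onto the statically allocated
 * kasan_early_shadow_* tables, so every access sees a valid, all-zero
 * shadow until the final population replaces it with real pages.
 */
static inline void *sketch_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}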
From patchwork Fri Dec 16 16:21:37 2022
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13075188
X-Patchwork-Delegate: palmer@dabbelt.com
From: Alexandre Ghiti
Subject: [PATCH 2/6] riscv: Rework kasan population functions
Date: Fri, 16 Dec 2022 17:21:37 +0100
Message-Id: <20221216162141.1701255-3-alexghiti@rivosinc.com>
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>

Our previous KASAN population implementation left the final KASAN shadow
region mapped with kasan_early_shadow_page: we never cleaned up the early
mapping, so we had to populate the KASAN region "in-place", which made the
code cumbersome. Now we clear the early mapping first and establish a
temporary mapping while we populate the KASAN shadow region with just the
kernel regions that will actually be used.

This new version also walks page tables the "generic" way, so that levels
which may be folded at runtime are handled by the standard offset helpers
instead of the XXX_next macros.

It was successfully tested with outline instrumentation on an Ubuntu
kernel configuration.

Signed-off-by: Alexandre Ghiti
---
 arch/riscv/mm/kasan_init.c | 358 +++++++++++++++++++------------------
 1 file changed, 184 insertions(+), 174 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index a7314ffe7d76..5c7b1d07faf2 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -18,58 +18,48 @@
  * For sv39, the region is aligned on PGDIR_SIZE so we only need to populate
  * the page global directory with kasan_early_shadow_pmd.
  *
- * For sv48 and sv57, the region is not aligned on PGDIR_SIZE so the mapping
- * must be divided as follows:
- * - the first PGD entry, although incomplete, is populated with
- *   kasan_early_shadow_pud/p4d
- * - the PGD entries in the middle are populated with kasan_early_shadow_pud/p4d
- * - the last PGD entry is shared with the kernel mapping so populated at the
- *   lower levels pud/p4d
- *
- * In addition, when shallow populating a kasan region (for example vmalloc),
- * this region may also not be aligned on PGDIR size, so we must go down to the
- * pud level too.
+ * For sv48 and sv57, the region start is aligned on PGDIR_SIZE whereas the end
+ * region is not and then we have to go down to the PUD level.
  */
 
 extern pgd_t early_pg_dir[PTRS_PER_PGD];
+pgd_t tmp_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+p4d_t tmp_p4d[PTRS_PER_P4D] __page_aligned_bss;
+pud_t tmp_pud[PTRS_PER_PUD] __page_aligned_bss;
 
 static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned long end)
 {
 	phys_addr_t phys_addr;
-	pte_t *ptep, *base_pte;
+	pte_t *ptep, *p;
 
-	if (pmd_none(*pmd))
-		base_pte = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
-	else
-		base_pte = (pte_t *)pmd_page_vaddr(*pmd);
+	if (pmd_none(*pmd)) {
+		p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
+		set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE));
+	}
 
-	ptep = base_pte + pte_index(vaddr);
+	ptep = pte_offset_kernel(pmd, vaddr);
 
 	do {
 		if (pte_none(*ptep)) {
 			phys_addr = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_pte(ptep, pfn_pte(PFN_DOWN(phys_addr), PAGE_KERNEL));
+			memset(__va(phys_addr), KASAN_SHADOW_INIT, PAGE_SIZE);
 		}
 	} while (ptep++, vaddr += PAGE_SIZE, vaddr != end);
-
-	set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(base_pte)), PAGE_TABLE));
 }
 
 static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned long end)
 {
 	phys_addr_t phys_addr;
-	pmd_t *pmdp, *base_pmd;
+	pmd_t *pmdp, *p;
 	unsigned long next;
 
 	if (pud_none(*pud)) {
-		base_pmd = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
-	} else {
-		base_pmd = (pmd_t *)pud_pgtable(*pud);
-		if (base_pmd == lm_alias(kasan_early_shadow_pmd))
-			base_pmd = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
+		p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
+		set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
 
-	pmdp = base_pmd + pmd_index(vaddr);
+	pmdp = pmd_offset(pud, vaddr);
 
 	do {
 		next = pmd_addr_end(vaddr, end);
@@ -78,41 +68,28 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
 			phys_addr = memblock_phys_alloc(PMD_SIZE, PMD_SIZE);
 			if (phys_addr) {
 				set_pmd(pmdp, pfn_pmd(PFN_DOWN(phys_addr), PAGE_KERNEL));
+				memset(__va(phys_addr), KASAN_SHADOW_INIT, PMD_SIZE);
 				continue;
 			}
 		}
 
 		kasan_populate_pte(pmdp, vaddr, next);
 	} while (pmdp++, vaddr = next, vaddr != end);
-
-	/*
-	 * Wait for the whole PGD to be populated before setting the PGD in
-	 * the page table, otherwise, if we did set the PGD before populating
-	 * it entirely, memblock could allocate a page at a physical address
-	 * where KASAN is not populated yet and then we'd get a page fault.
-	 */
-	set_pud(pud, pfn_pud(PFN_DOWN(__pa(base_pmd)), PAGE_TABLE));
 }
 
-static void __init kasan_populate_pud(pgd_t *pgd,
+static void __init kasan_populate_pud(p4d_t *p4d,
 				      unsigned long vaddr, unsigned long end)
 {
 	phys_addr_t phys_addr;
-	pud_t *pudp, *base_pud;
+	pud_t *pudp, *p;
 	unsigned long next;
 
-	if (pgd_none(*pgd)) {
-		base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
-	} else {
-		base_pud = (pud_t *)pgd_page_vaddr(*pgd);
-		if (base_pud == lm_alias(kasan_early_shadow_pud)) {
-			base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
-			memcpy(base_pud, (void *)kasan_early_shadow_pud,
-			       sizeof(pud_t) * PTRS_PER_PUD);
-		}
+	if (p4d_none(*p4d)) {
+		p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
+		set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
 
-	pudp = base_pud + pud_index(vaddr);
+	pudp = pud_offset(p4d, vaddr);
 
 	do {
 		next = pud_addr_end(vaddr, end);
@@ -121,34 +98,28 @@ static void __init kasan_populate_pud(pgd_t *pgd,
 			phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
 			if (phys_addr) {
 				set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
+				memset(__va(phys_addr), KASAN_SHADOW_INIT, PUD_SIZE);
 				continue;
 			}
 		}
 
 		kasan_populate_pmd(pudp, vaddr, next);
 	} while (pudp++, vaddr = next, vaddr != end);
-
-	/*
-	 * Wait for the whole PGD to be populated before setting the PGD in
-	 * the page table, otherwise, if we did set the PGD before populating
-	 * it entirely, memblock could allocate a page at a physical address
-	 * where KASAN is not populated yet and then we'd get a page fault.
-	 */
-	set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
 }
 
 static void __init kasan_populate_p4d(pgd_t *pgd,
 				      unsigned long vaddr, unsigned long end)
 {
 	phys_addr_t phys_addr;
-	p4d_t *p4dp, *base_p4d;
+	p4d_t *p4dp, *p;
 	unsigned long next;
 
-	base_p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-	if (base_p4d == lm_alias(kasan_early_shadow_p4d))
-		base_p4d = memblock_alloc(PTRS_PER_PUD * sizeof(p4d_t), PAGE_SIZE);
+	if (pgd_none(*pgd)) {
+		p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
+		set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
+	}
 
-	p4dp = base_p4d + p4d_index(vaddr);
+	p4dp = p4d_offset(pgd, vaddr);
 
 	do {
 		next = p4d_addr_end(vaddr, end);
@@ -157,34 +128,15 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
 			phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
 			if (phys_addr) {
 				set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
+				memset(__va(phys_addr), KASAN_SHADOW_INIT, P4D_SIZE);
 				continue;
 			}
 		}
 
-		kasan_populate_pud((pgd_t *)p4dp, vaddr, next);
+		kasan_populate_pud(p4dp, vaddr, next);
 	} while (p4dp++, vaddr = next, vaddr != end);
-
-	/*
-	 * Wait for the whole P4D to be populated before setting the P4D in
-	 * the page table, otherwise, if we did set the P4D before populating
-	 * it entirely, memblock could allocate a page at a physical address
-	 * where KASAN is not populated yet and then we'd get a page fault.
-	 */
-	set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(base_p4d)), PAGE_TABLE));
 }
 
-#define kasan_early_shadow_pgd_next	(pgtable_l5_enabled ?	\
-				(uintptr_t)kasan_early_shadow_p4d :	\
-				(pgtable_l4_enabled ?	\
-				(uintptr_t)kasan_early_shadow_pud :	\
-				(uintptr_t)kasan_early_shadow_pmd))
-#define kasan_populate_pgd_next(pgdp, vaddr, next)	\
-		(pgtable_l5_enabled ?	\
-		kasan_populate_p4d(pgdp, vaddr, next) :	\
-		(pgtable_l4_enabled ?	\
-			kasan_populate_pud(pgdp, vaddr, next) :	\
-			kasan_populate_pmd((pud_t *)pgdp, vaddr, next)))
-
 static void __init kasan_populate_pgd(pgd_t *pgdp,
 				      unsigned long vaddr, unsigned long end)
 {
@@ -194,25 +146,86 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
 	do {
 		next = pgd_addr_end(vaddr, end);
 
-		if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE) {
-			if (pgd_page_vaddr(*pgdp) ==
-			    (unsigned long)lm_alias(kasan_early_shadow_pgd_next)) {
-				/*
-				 * pgdp can't be none since kasan_early_init
-				 * initialized all KASAN shadow region with
-				 * kasan_early_shadow_pud: if this is still the
-				 * case, that means we can try to allocate a
-				 * hugepage as a replacement.
-				 */
-				phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
-				if (phys_addr) {
-					set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
-					continue;
-				}
+		if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+		    (next - vaddr) >= PGDIR_SIZE) {
+			phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
+			if (phys_addr) {
+				set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_KERNEL));
+				memset(__va(phys_addr), KASAN_SHADOW_INIT, PGDIR_SIZE);
+				continue;
 			}
 		}
 
-		kasan_populate_pgd_next(pgdp, vaddr, next);
+		kasan_populate_p4d(pgdp, vaddr, next);
+	} while (pgdp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pud(p4d_t *p4dp,
+					 unsigned long vaddr, unsigned long end)
+{
+	pud_t *pudp, *base_pud;
+	unsigned long next;
+
+	if (!pgtable_l4_enabled) {
+		pudp = (pud_t *)p4dp;
+	} else {
+		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+		pudp = base_pud + pud_index(vaddr);
+	}
+
+	do {
+		next = pud_addr_end(vaddr, end);
+
+		if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
+			pud_clear(pudp);
+			continue;
+		}
+
+		BUG();
+	} while (pudp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_p4d(pgd_t *pgdp,
+					 unsigned long vaddr, unsigned long end)
+{
+	p4d_t *p4dp, *base_p4d;
+	unsigned long next;
+
+	if (!pgtable_l5_enabled) {
+		p4dp = (p4d_t *)pgdp;
+	} else {
+		base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+		p4dp = base_p4d + p4d_index(vaddr);
+	}
+
+	do {
+		next = p4d_addr_end(vaddr, end);
+
+		if (pgtable_l4_enabled && IS_ALIGNED(vaddr, P4D_SIZE) &&
+		    (next - vaddr) >= P4D_SIZE) {
+			p4d_clear(p4dp);
+			continue;
+		}
+
+		kasan_early_clear_pud(p4dp, vaddr, next);
+	} while (p4dp++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_early_clear_pgd(pgd_t *pgdp,
+					 unsigned long vaddr, unsigned long end)
+{
+	unsigned long next;
+
+	do {
+		next = pgd_addr_end(vaddr, end);
+
+		if (pgtable_l5_enabled && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+		    (next - vaddr) >= PGDIR_SIZE) {
+			pgd_clear(pgdp);
+			continue;
+		}
+
+		kasan_early_clear_p4d(pgdp, vaddr, next);
 	} while (pgdp++, vaddr = next, vaddr != end);
 }
 
@@ -357,117 +370,64 @@ static void __init kasan_populate(void *start, void *end)
 	unsigned long vend = PAGE_ALIGN((unsigned long)end);
 
 	kasan_populate_pgd(pgd_offset_k(vaddr), vaddr, vend);
-
-	local_flush_tlb_all();
-	memset(start, KASAN_SHADOW_INIT, end - start);
 }
 
-static void __init kasan_shallow_populate_pmd(pgd_t *pgdp,
+static void __init kasan_shallow_populate_pud(p4d_t *p4d,
 					      unsigned long vaddr, unsigned long end)
 {
 	unsigned long next;
-	pmd_t *pmdp, *base_pmd;
-	bool is_kasan_pte;
-
-	base_pmd = (pmd_t *)pgd_page_vaddr(*pgdp);
-	pmdp = base_pmd + pmd_index(vaddr);
-
-	do {
-		next = pmd_addr_end(vaddr, end);
-		is_kasan_pte = (pmd_pgtable(*pmdp) == lm_alias(kasan_early_shadow_pte));
-
-		if (is_kasan_pte)
-			pmd_clear(pmdp);
-	} while (pmdp++, vaddr = next, vaddr != end);
-}
-
-static void __init kasan_shallow_populate_pud(pgd_t *pgdp,
-					      unsigned long vaddr, unsigned long end)
-{
-	unsigned long next;
-	pud_t *pudp, *base_pud;
-	pmd_t *base_pmd;
-	bool is_kasan_pmd;
-
-	base_pud = (pud_t *)pgd_page_vaddr(*pgdp);
-	pudp = base_pud + pud_index(vaddr);
+	void *p;
+	pud_t *pud_k = pud_offset(p4d, vaddr);
 
 	do {
 		next = pud_addr_end(vaddr, end);
-		is_kasan_pmd = (pud_pgtable(*pudp) == lm_alias(kasan_early_shadow_pmd));
 
-		if (!is_kasan_pmd)
-			continue;
-
-		base_pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
-		set_pud(pudp, pfn_pud(PFN_DOWN(__pa(base_pmd)), PAGE_TABLE));
-
-		if (IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE)
+		if (pud_none(*pud_k)) {
+			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+			set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
+		}
 
-		memcpy(base_pmd, (void *)kasan_early_shadow_pmd, PAGE_SIZE);
-		kasan_shallow_populate_pmd((pgd_t *)pudp, vaddr, next);
-	} while (pudp++, vaddr = next, vaddr != end);
+		BUG();
+	} while (pud_k++, vaddr = next, vaddr != end);
 }
 
-static void __init kasan_shallow_populate_p4d(pgd_t *pgdp,
+static void __init kasan_shallow_populate_p4d(pgd_t *pgd,
 					      unsigned long vaddr, unsigned long end)
 {
 	unsigned long next;
-	p4d_t *p4dp, *base_p4d;
-	pud_t *base_pud;
-	bool is_kasan_pud;
-
-	base_p4d = (p4d_t *)pgd_page_vaddr(*pgdp);
-	p4dp = base_p4d + p4d_index(vaddr);
+	void *p;
+	p4d_t *p4d_k = p4d_offset(pgd, vaddr);
 
 	do {
 		next = p4d_addr_end(vaddr, end);
-		is_kasan_pud = (p4d_pgtable(*p4dp) == lm_alias(kasan_early_shadow_pud));
-
-		if (!is_kasan_pud)
-			continue;
-
-		base_pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
-		set_p4d(p4dp, pfn_p4d(PFN_DOWN(__pa(base_pud)), PAGE_TABLE));
 
-		if (IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE)
+		if (p4d_none(*p4d_k)) {
+			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+			set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
+		}
 
-		memcpy(base_pud, (void *)kasan_early_shadow_pud, PAGE_SIZE);
-		kasan_shallow_populate_pud((pgd_t *)p4dp, vaddr, next);
-	} while (p4dp++, vaddr = next, vaddr != end);
+		kasan_shallow_populate_pud(p4d_k, vaddr, end);
+	} while (p4d_k++, vaddr = next, vaddr != end);
 }
 
-#define kasan_shallow_populate_pgd_next(pgdp, vaddr, next)	\
-		(pgtable_l5_enabled ?	\
-		kasan_shallow_populate_p4d(pgdp, vaddr, next) :	\
-		(pgtable_l4_enabled ?	\
-			kasan_shallow_populate_pud(pgdp, vaddr, next) :	\
-			kasan_shallow_populate_pmd(pgdp, vaddr, next)))
-
 static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long end)
 {
 	unsigned long next;
 	void *p;
 	pgd_t *pgd_k = pgd_offset_k(vaddr);
-	bool is_kasan_pgd_next;
 
 	do {
 		next = pgd_addr_end(vaddr, end);
-		is_kasan_pgd_next = (pgd_page_vaddr(*pgd_k) ==
-				     (unsigned long)lm_alias(kasan_early_shadow_pgd_next));
 
-		if (is_kasan_pgd_next) {
+		if (pgd_none(*pgd_k)) {
 			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
-		}
-
-		if (IS_ALIGNED(vaddr, PGDIR_SIZE) && (next - vaddr) >= PGDIR_SIZE)
 			continue;
+		}
 
-		memcpy(p, (void *)kasan_early_shadow_pgd_next, PAGE_SIZE);
-		kasan_shallow_populate_pgd_next(pgd_k, vaddr, next);
+		kasan_shallow_populate_p4d(pgd_k, vaddr, next);
 	} while (pgd_k++, vaddr = next, vaddr != end);
 }
 
@@ -477,7 +437,37 @@ static void __init kasan_shallow_populate(void *start, void *end)
 	unsigned long vend = PAGE_ALIGN((unsigned long)end);
 
 	kasan_shallow_populate_pgd(vaddr, vend);
-	local_flush_tlb_all();
+}
+
+void create_tmp_mapping(void)
+{
+	void *ptr;
+	p4d_t *base_p4d;
+
+	/*
+	 * We need to clean the early mapping: this is hard to achieve "in-place",
+	 * so install a temporary mapping like arm64 and x86 do.
+	 */
+	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(pgd_t) * PTRS_PER_PGD);
+
+	/* Copy the last p4d since it is shared with the kernel mapping. */
+	if (pgtable_l5_enabled) {
+		ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
+		memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D);
+		set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)],
+			pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE));
+		base_p4d = tmp_p4d;
+	} else {
+		base_p4d = (p4d_t *)tmp_pg_dir;
+	}
+
+	/* Copy the last pud since it is shared with the kernel mapping. */
+	if (pgtable_l4_enabled) {
+		ptr = (pud_t *)p4d_page_vaddr(*(base_p4d + p4d_index(KASAN_SHADOW_END)));
+		memcpy(tmp_pud, ptr, sizeof(pud_t) * PTRS_PER_PUD);
+		set_p4d(&base_p4d[p4d_index(KASAN_SHADOW_END)],
+			pfn_p4d(PFN_DOWN(__pa(tmp_pud)), PAGE_TABLE));
+	}
 }
 
 void __init kasan_init(void)
@@ -485,10 +475,27 @@ void __init kasan_init(void)
 	phys_addr_t p_start, p_end;
 	u64 i;
 
-	if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+	create_tmp_mapping();
+	csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode);
+
+	kasan_early_clear_pgd(pgd_offset_k(KASAN_SHADOW_START),
+			      KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	kasan_populate_early_shadow((void *)kasan_mem_to_shadow((void *)FIXADDR_START),
+				    (void *)kasan_mem_to_shadow((void *)VMALLOC_START));
+
+	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
 		kasan_shallow_populate(
 			(void *)kasan_mem_to_shadow((void *)VMALLOC_START),
 			(void *)kasan_mem_to_shadow((void *)VMALLOC_END));
+		/* Shallow populate modules and BPF which are vmalloc-allocated */
+		kasan_shallow_populate(
+			(void *)kasan_mem_to_shadow((void *)MODULES_VADDR),
+			(void *)kasan_mem_to_shadow((void *)MODULES_END));
+	} else {
+		kasan_populate_early_shadow((void *)kasan_mem_to_shadow((void *)VMALLOC_START),
+					    (void *)kasan_mem_to_shadow((void *)VMALLOC_END));
+	}
 
 	/* Populate the linear mapping */
 	for_each_mem_range(i, &p_start, &p_end) {
@@ -501,8 +508,8 @@ void __init kasan_init(void)
 		kasan_populate(kasan_mem_to_shadow(start), kasan_mem_to_shadow(end));
 	}
 
-	/* Populate kernel, BPF, modules mapping */
-	kasan_populate(kasan_mem_to_shadow((const void *)MODULES_VADDR),
+	/* Populate kernel */
+	kasan_populate(kasan_mem_to_shadow((const void *)MODULES_END),
 		       kasan_mem_to_shadow((const void *)MODULES_VADDR + SZ_2G));
 
 	for (i = 0; i < PTRS_PER_PTE; i++)
@@ -513,4 +520,7 @@ void __init kasan_init(void)
 	memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 
 	init_task.kasan_depth = 0;
+
+	csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
+	local_flush_tlb_all();
 }
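
The core of this rework is the satp switch in kasan_init() above: run on a temporary page table while the early shadow is torn down and the real shadow is built, then switch back. Condensed from the hunks above (a recap of the patch's own control flow, not additional code):

	create_tmp_mapping();	/* copy swapper_pg_dir + the shared p4d/pud */
	csr_write(CSR_SATP, PFN_DOWN(__pa(tmp_pg_dir)) | satp_mode);

	/* Now safe to unmap the early shadow: we are running on tmp_pg_dir. */
	kasan_early_clear_pgd(pgd_offset_k(KASAN_SHADOW_START),
			      KASAN_SHADOW_START, KASAN_SHADOW_END);

	/* ... populate the real shadow (fixmap, vmalloc, linear, kernel) ... */

	/* Resume on the now fully populated swapper_pg_dir. */
	csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
	local_flush_tlb_all();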
From patchwork Fri Dec 16 16:21:38 2022
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13075190
X-Patchwork-Delegate: palmer@dabbelt.com
From: Alexandre Ghiti
Subject: [PATCH 3/6] riscv: Move DTB_EARLY_BASE_VA to the kernel address space
Date: Fri, 16 Dec 2022 17:21:38 +0100
Message-Id: <20221216162141.1701255-4-alexghiti@rivosinc.com>
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>

The early virtual address must lie in the kernel address space for inline
KASAN instrumentation to succeed: otherwise KASAN dereferences a shadow
address that does not exist, since KASAN only maps the *kernel* address
space, not userspace. Simply use the very first address of the kernel
address space for the early fdt mapping. This allowed an Ubuntu kernel to
boot successfully with inline instrumentation.

Signed-off-by: Alexandre Ghiti
---
 arch/riscv/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 58bcf395efdc..d5aa6ca732f2 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -57,7 +57,7 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
 EXPORT_SYMBOL(empty_zero_page);
 
 extern char _start[];
-#define DTB_EARLY_BASE_VA	PGDIR_SIZE
+#define DTB_EARLY_BASE_VA	(ADDRESS_SPACE_END - (PTRS_PER_PGD / 2 * PGDIR_SIZE) + 1)
 void *_dtb_early_va __initdata;
 uintptr_t _dtb_early_pa __initdata;
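
To see where the new define lands, here is a worked example with sv39 constants (an illustrative standalone program; the lower-case identifiers are stand-ins for the kernel macros, not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned long pgdir_size = 1UL << 30;	/* 1 GiB per PGD entry on sv39 */
	unsigned long ptrs_per_pgd = 512;
	unsigned long address_space_end = ~0UL;

	/* First address of the top (kernel) half of the address space. */
	unsigned long dtb_early_base_va =
		address_space_end - (ptrs_per_pgd / 2 * pgdir_size) + 1;

	printf("0x%lx\n", dtb_early_base_va);	/* prints 0xffffffc000000000 */
	return 0;
}

The old value, PGDIR_SIZE (0x40000000), is a user-space address, which is why inline KASAN computed an unmapped shadow address for accesses through the early fdt mapping.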
From patchwork Fri Dec 16 16:21:39 2022
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13075191
X-Patchwork-Delegate: palmer@dabbelt.com
From: Alexandre Ghiti
Subject: [PATCH 4/6] riscv: Fix EFI stub usage of KASAN instrumented string functions
Date: Fri, 16 Dec 2022 17:21:39 +0100
Message-Id: <20221216162141.1701255-5-alexghiti@rivosinc.com>
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>

The EFI stub must not use any KASAN-instrumented code, as the kernel
proper has not yet initialized the thread pointer and the mapping for the
KASAN shadow region. Avoid the generic string functions by copying the
stub's dependencies from lib/string.c to
drivers/firmware/efi/libstub/string.c, since RISC-V does not implement
architecture-specific versions of those functions.

Signed-off-by: Alexandre Ghiti
---
 arch/riscv/kernel/image-vars.h        |   8 --
 drivers/firmware/efi/libstub/Makefile |   7 +-
 drivers/firmware/efi/libstub/string.c | 133 ++++++++++++++++++++++++++
 3 files changed, 137 insertions(+), 11 deletions(-)

diff --git a/arch/riscv/kernel/image-vars.h b/arch/riscv/kernel/image-vars.h
index d6e5f739905e..15616155008c 100644
--- a/arch/riscv/kernel/image-vars.h
+++ b/arch/riscv/kernel/image-vars.h
@@ -23,14 +23,6 @@
  * linked at. The routines below are all implemented in assembler in a
  * position independent manner
  */
-__efistub_memcmp	= memcmp;
-__efistub_memchr	= memchr;
-__efistub_strlen	= strlen;
-__efistub_strnlen	= strnlen;
-__efistub_strcmp	= strcmp;
-__efistub_strncmp	= strncmp;
-__efistub_strrchr	= strrchr;
-
 __efistub__start	= _start;
 __efistub__start_kernel	= _start_kernel;
 __efistub__end		= _end;
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index b1601aad7e1a..031d2268bab5 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -130,9 +130,10 @@ STUBCOPY_RELOC-$(CONFIG_ARM)	:= R_ARM_ABS
 # also means that we need to be extra careful to make sure that the stub does
 # not rely on any absolute symbol references, considering that the virtual
 # kernel mapping that the linker uses is not active yet when the stub is
-# executing. So build all C dependencies of the EFI stub into libstub, and do
-# a verification pass to see if any absolute relocations exist in any of the
-# object files.
+# executing. In addition, we need to make sure that the stub does not use KASAN
+# instrumented code like the generic string functions. So build all C
+# dependencies of the EFI stub into libstub, and do a verification pass to see
+# if any absolute relocations exist in any of the object files.
 #
 STUBCOPY_FLAGS-$(CONFIG_ARM64)	+= --prefix-alloc-sections=.init \
 				   --prefix-symbols=__efistub_
diff --git a/drivers/firmware/efi/libstub/string.c b/drivers/firmware/efi/libstub/string.c
index 5d13e43869ee..5154ae6e7f10 100644
--- a/drivers/firmware/efi/libstub/string.c
+++ b/drivers/firmware/efi/libstub/string.c
@@ -113,3 +113,136 @@ long simple_strtol(const char *cp, char **endp, unsigned int base)
 
 	return simple_strtoull(cp, endp, base);
 }
+
+#ifndef __HAVE_ARCH_STRLEN
+/**
+ * strlen - Find the length of a string
+ * @s: The string to be sized
+ */
+size_t strlen(const char *s)
+{
+	const char *sc;
+
+	for (sc = s; *sc != '\0'; ++sc)
+		/* nothing */;
+	return sc - s;
+}
+EXPORT_SYMBOL(strlen);
+#endif
+
+#ifndef __HAVE_ARCH_STRNLEN
+/**
+ * strnlen - Find the length of a length-limited string
+ * @s: The string to be sized
+ * @count: The maximum number of bytes to search
+ */
+size_t strnlen(const char *s, size_t count)
+{
+	const char *sc;
+
+	for (sc = s; count-- && *sc != '\0'; ++sc)
+		/* nothing */;
+	return sc - s;
+}
+EXPORT_SYMBOL(strnlen);
+#endif
+
+#ifndef __HAVE_ARCH_STRCMP
+/**
+ * strcmp - Compare two strings
+ * @cs: One string
+ * @ct: Another string
+ */
+int strcmp(const char *cs, const char *ct)
+{
+	unsigned char c1, c2;
+
+	while (1) {
+		c1 = *cs++;
+		c2 = *ct++;
+		if (c1 != c2)
+			return c1 < c2 ? -1 : 1;
+		if (!c1)
+			break;
+	}
+	return 0;
+}
+EXPORT_SYMBOL(strcmp);
+#endif
+
+#ifndef __HAVE_ARCH_STRRCHR
+/**
+ * strrchr - Find the last occurrence of a character in a string
+ * @s: The string to be searched
+ * @c: The character to search for
+ */
+char *strrchr(const char *s, int c)
+{
+	const char *last = NULL;
+
+	do {
+		if (*s == (char)c)
+			last = s;
+	} while (*s++);
+	return (char *)last;
+}
+EXPORT_SYMBOL(strrchr);
+#endif
+
+#ifndef __HAVE_ARCH_MEMCMP
+/**
+ * memcmp - Compare two areas of memory
+ * @cs: One area of memory
+ * @ct: Another area of memory
+ * @count: The size of the area.
+ */
+#undef memcmp
+__visible int memcmp(const void *cs, const void *ct, size_t count)
+{
+	const unsigned char *su1, *su2;
+	int res = 0;
+
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	if (count >= sizeof(unsigned long)) {
+		const unsigned long *u1 = cs;
+		const unsigned long *u2 = ct;
+
+		do {
+			if (get_unaligned(u1) != get_unaligned(u2))
+				break;
+			u1++;
+			u2++;
+			count -= sizeof(unsigned long);
+		} while (count >= sizeof(unsigned long));
+		cs = u1;
+		ct = u2;
+	}
+#endif
+	for (su1 = cs, su2 = ct; 0 < count; ++su1, ++su2, count--)
+		if ((res = *su1 - *su2) != 0)
+			break;
+	return res;
+}
+EXPORT_SYMBOL(memcmp);
+#endif
+
+#ifndef __HAVE_ARCH_MEMCHR
+/**
+ * memchr - Find a character in an area of memory.
+ * @s: The memory area
+ * @c: The byte to search for
+ * @n: The size of the area.
+ *
+ * returns the address of the first occurrence of @c, or %NULL
+ * if @c is not found
+ */
+void *memchr(const void *s, int c, size_t n)
+{
+	const unsigned char *p = s;
+
+	while (n-- != 0) {
+		if ((unsigned char)c == *p++)
+			return (void *)(p - 1);
+	}
+	return NULL;
+}
+EXPORT_SYMBOL(memchr);
+#endif
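
To illustrate why the stub cannot link against the instrumented lib/string.c: with KASAN enabled, the compiler inserts a shadow check before every memory access in that file, roughly like the sketch below (an approximation of the generated instrumentation, not actual kernel source):

/* Approximate shape of an instrumented one-byte load under generic KASAN. */
static inline void sketch_asan_load1(const void *addr)
{
	/*
	 * The shadow byte lives at (addr >> 3) + KASAN_SHADOW_OFFSET. In the
	 * EFI stub that shadow mapping does not exist yet, so this load
	 * itself faults before the real access even happens.
	 */
	signed char s = *(signed char *)(((unsigned long)addr >> 3)
					 + KASAN_SHADOW_OFFSET);
	if (s)
		/* slow path: partial-granule check and error report */;
}

The freestanding copies added to libstub, which is built without KASAN instrumentation, sidestep this entirely.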
oCFPPZQnywJfAfgY4yMa3BmAR0ZI0uGzxx+E4i/LdArCaurobvtEZc7N1QDNRg6dp4j1 NGyQ== X-Gm-Message-State: ANoB5pnyRiO17MrHveaBu0ZSIIjQtZTgdMWFgGL8Dwv99caEkZL2kwNz w6T2jValRn/aspXuDBFZJFrghw== X-Google-Smtp-Source: AA0mqf6hPxfK2i4tDJ4xpuFfTwdw7faWUPdgx5VEnXDVJO+7l98ZIdasi7aT4CKr/yV1k0PiX318Fw== X-Received: by 2002:adf:f9c7:0:b0:242:4c28:c9a9 with SMTP id w7-20020adff9c7000000b002424c28c9a9mr19141287wrr.46.1671208016801; Fri, 16 Dec 2022 08:26:56 -0800 (PST) Received: from alex-rivos.home (lfbn-lyo-1-450-160.w2-7.abo.wanadoo.fr. [2.7.42.160]) by smtp.gmail.com with ESMTPSA id z7-20020a5d4407000000b0024245e543absm2554603wrq.88.2022.12.16.08.26.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 16 Dec 2022 08:26:56 -0800 (PST) From: Alexandre Ghiti To: Paul Walmsley , Palmer Dabbelt , Albert Ou , Andrey Ryabinin , Alexander Potapenko , Andrey Konovalov , Dmitry Vyukov , Vincenzo Frascino , Ard Biesheuvel , linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-efi@vger.kernel.org Cc: Alexandre Ghiti Subject: [PATCH 5/6] riscv: Fix ptdump when KASAN is enabled Date: Fri, 16 Dec 2022 17:21:40 +0100 Message-Id: <20221216162141.1701255-6-alexghiti@rivosinc.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com> References: <20221216162141.1701255-1-alexghiti@rivosinc.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221216_082700_470161_830CA887 X-CRM114-Status: GOOD ( 11.83 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org The KASAN shadow region was moved next to the kernel mapping but the ptdump code was not updated and it appears to break the dump of the kernel page table, so fix this by moving the KASAN shadow region in ptdump. 
Fixes: f7ae02333d13 ("riscv: Move KASAN mapping next to the kernel mapping")
Signed-off-by: Alexandre Ghiti
---
 arch/riscv/mm/ptdump.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/mm/ptdump.c b/arch/riscv/mm/ptdump.c
index 830e7de65e3a..20a9f991a6d7 100644
--- a/arch/riscv/mm/ptdump.c
+++ b/arch/riscv/mm/ptdump.c
@@ -59,10 +59,6 @@ struct ptd_mm_info {
 };
 
 enum address_markers_idx {
-#ifdef CONFIG_KASAN
-	KASAN_SHADOW_START_NR,
-	KASAN_SHADOW_END_NR,
-#endif
 	FIXMAP_START_NR,
 	FIXMAP_END_NR,
 	PCI_IO_START_NR,
@@ -74,6 +70,10 @@ enum address_markers_idx {
 	VMALLOC_START_NR,
 	VMALLOC_END_NR,
 	PAGE_OFFSET_NR,
+#ifdef CONFIG_KASAN
+	KASAN_SHADOW_START_NR,
+	KASAN_SHADOW_END_NR,
+#endif
 #ifdef CONFIG_64BIT
 	MODULES_MAPPING_NR,
 	KERNEL_MAPPING_NR,
@@ -82,10 +82,6 @@ enum address_markers_idx {
 };
 
 static struct addr_marker address_markers[] = {
-#ifdef CONFIG_KASAN
-	{0, "Kasan shadow start"},
-	{0, "Kasan shadow end"},
-#endif
 	{0, "Fixmap start"},
 	{0, "Fixmap end"},
 	{0, "PCI I/O start"},
@@ -97,6 +93,10 @@ static struct addr_marker address_markers[] = {
 	{0, "vmalloc() area"},
 	{0, "vmalloc() end"},
 	{0, "Linear mapping"},
+#ifdef CONFIG_KASAN
+	{0, "Kasan shadow start"},
+	{0, "Kasan shadow end"},
+#endif
 #ifdef CONFIG_64BIT
 	{0, "Modules/BPF mapping"},
 	{0, "Kernel mapping"},
@@ -362,10 +362,6 @@ static int __init ptdump_init(void)
 {
 	unsigned int i, j;
 
-#ifdef CONFIG_KASAN
-	address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
-	address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
-#endif
 	address_markers[FIXMAP_START_NR].start_address = FIXADDR_START;
 	address_markers[FIXMAP_END_NR].start_address = FIXADDR_TOP;
 	address_markers[PCI_IO_START_NR].start_address = PCI_IO_START;
@@ -377,6 +373,10 @@ static int __init ptdump_init(void)
 	address_markers[VMALLOC_START_NR].start_address = VMALLOC_START;
 	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
 	address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_SHADOW_START_NR].start_address = KASAN_SHADOW_START;
+	address_markers[KASAN_SHADOW_END_NR].start_address = KASAN_SHADOW_END;
+#endif
 #ifdef CONFIG_64BIT
 	address_markers[MODULES_MAPPING_NR].start_address = MODULES_VADDR;
 	address_markers[KERNEL_MAPPING_NR].start_address = kernel_map.virt_addr;
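To double-check a reordering like the one above, an ascending-order pass over the populated table is enough. A helper along these lines (an editor's sketch, not part of the patch) compiles standalone:

#include <stdbool.h>
#include <stddef.h>

struct addr_marker {
	unsigned long start_address;
	const char *name;
};

/* True if every named marker starts at or after its predecessor. */
static bool markers_sorted(const struct addr_marker *m, size_t n)
{
	for (size_t i = 1; i < n && m[i].name; i++) {
		if (m[i].start_address < m[i - 1].start_address)
			return false;
	}
	return true;
}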
From patchwork Fri Dec 16 16:21:41 2022
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13075193
X-Patchwork-Delegate: palmer@dabbelt.com
From: Alexandre Ghiti
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrey Ryabinin,
 Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Ard Biesheuvel, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
 linux-efi@vger.kernel.org
Cc: Alexandre Ghiti
Subject: [PATCH 6/6] riscv: Unconditionally select KASAN_VMALLOC if KASAN
Date: Fri, 16 Dec 2022 17:21:41 +0100
Message-Id: <20221216162141.1701255-7-alexghiti@rivosinc.com>
In-Reply-To: <20221216162141.1701255-1-alexghiti@rivosinc.com>
References: <20221216162141.1701255-1-alexghiti@rivosinc.com>

When KASAN is enabled, VMAP_STACK depends on KASAN_VMALLOC, so select
KASAN_VMALLOC together with KASAN: this makes it possible to enable
VMAP_STACK by default.

Signed-off-by: Alexandre Ghiti
---
 arch/riscv/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 6b48a3ae9843..2be0d0d230df 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -113,6 +113,7 @@ config RISCV
 	select HAVE_RSEQ
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
+	select KASAN_VMALLOC if KASAN
 	select MODULES_USE_ELF_RELA if MODULES
 	select MODULE_SECTIONS if MODULES
 	select OF
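For context, the dependency being satisfied here lives in the generic VMAP_STACK option. At the time of this series it was gated in arch/Kconfig roughly as follows (reproduced from memory, so treat the exact lines as an approximation):

config VMAP_STACK
	default y
	bool "Use a virtually-mapped stack"
	depends on HAVE_ARCH_VMAP_STACK
	depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC

With generic KASAN, the last dependency can only be satisfied through KASAN_VMALLOC, which is exactly what the one-line select above guarantees.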