From patchwork Mon Oct 21 13:02:58 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13844150
From: Linus Walleij
Date: Mon, 21 Oct 2024 15:02:58 +0200
Subject: [PATCH v4 1/3] ARM: ioremap: Sync PGDs for VMALLOC shadow
Message-Id: <20241021-arm-kasan-vmalloc-crash-v4-1-837d1294344f@linaro.org>
References: <20241021-arm-kasan-vmalloc-crash-v4-0-837d1294344f@linaro.org>
In-Reply-To: <20241021-arm-kasan-vmalloc-crash-v4-0-837d1294344f@linaro.org>
To: Clement LE GOFFIC, Russell King, Melon Liu, Kees Cook,
    AngeloGioacchino Del Regno, Mark Brown, Mark Rutland, Ard Biesheuvel
Cc: Antonio Borneo, linux-stm32@st-md-mailman.stormreply.com,
    linux-arm-kernel@lists.infradead.org, Linus Walleij, stable@vger.kernel.org

When syncing the VMALLOC area to other CPUs, make sure to also sync the
KASAN shadow memory for the VMALLOC area, so that we don't get stale
entries for the shadow memory in the top level PGD.

Since we are now copying PGDs in two instances, create a helper function
named memcpy_pgd() to do the actual copying, and create a helper to map
the addresses of VMALLOC_START and VMALLOC_END into the corresponding
shadow memory.
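For context, the arm_kasan_mem_to_shadow() helper added below wraps the
generic kasan_mem_to_shadow() from <linux/kasan.h>, which shifts the
address right by KASAN_SHADOW_SCALE_SHIFT (3, i.e. 8 bytes of memory per
shadow byte) and adds KASAN_SHADOW_OFFSET. A minimal stand-alone sketch
of that arithmetic, using a made-up shadow offset and example VMALLOC
bounds rather than the real arm32 values:

#include <stdio.h>

/*
 * Illustrative only: generic KASAN maps 8 bytes of memory to 1 shadow
 * byte, i.e. shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET.
 * The scale shift of 3 matches generic KASAN; the offset and the VMALLOC
 * window below are placeholders, not the real arm32 layout.
 */
#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_OFFSET 0xb0000000UL /* placeholder value */

static unsigned long mem_to_shadow(unsigned long addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
	/* Example VMALLOC window; the real bounds depend on the platform. */
	unsigned long vmalloc_start = 0xf0000000UL;
	unsigned long vmalloc_end   = 0xff800000UL;

	printf("vmalloc: %#lx - %#lx\n", vmalloc_start, vmalloc_end);
	printf("shadow : %#lx - %#lx\n",
	       mem_to_shadow(vmalloc_start), mem_to_shadow(vmalloc_end));
	return 0;
}

This is why both the VMALLOC PGD range and its (much smaller) shadow PGD
range need to be copied in __check_vmalloc_seq().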
Cc: stable@vger.kernel.org
Fixes: 565cbaad83d8 ("ARM: 9202/1: kasan: support CONFIG_KASAN_VMALLOC")
Link: https://lore.kernel.org/linux-arm-kernel/a1a1d062-f3a2-4d05-9836-3b098de9db6d@foss.st.com/
Reported-by: Clement LE GOFFIC
Suggested-by: Mark Rutland
Suggested-by: Russell King (Oracle)
Acked-by: Mark Rutland
Co-developed-by: Melon Liu
Signed-off-by: Linus Walleij
---
 arch/arm/mm/ioremap.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 794cfea9f9d4..ff555823cceb 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -23,6 +23,7 @@
  */
 #include <linux/module.h>
 #include <linux/errno.h>
+#include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/vmalloc.h>
 #include <linux/io.h>
@@ -115,16 +116,40 @@ int ioremap_page(unsigned long virt, unsigned long phys,
 }
 EXPORT_SYMBOL(ioremap_page);
 
+#ifdef CONFIG_KASAN
+static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
+{
+	return (unsigned long)kasan_mem_to_shadow((void *)addr);
+}
+#else
+static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
+{
+	return 0;
+}
+#endif
+
+static void memcpy_pgd(struct mm_struct *mm, unsigned long start,
+		       unsigned long end)
+{
+	end = ALIGN(end, PGDIR_SIZE);
+	memcpy(pgd_offset(mm, start), pgd_offset_k(start),
+	       sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
+}
+
 void __check_vmalloc_seq(struct mm_struct *mm)
 {
 	int seq;
 
 	do {
 		seq = atomic_read(&init_mm.context.vmalloc_seq);
-		memcpy(pgd_offset(mm, VMALLOC_START),
-		       pgd_offset_k(VMALLOC_START),
-		       sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
-					pgd_index(VMALLOC_START)));
+		memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
+		if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+			unsigned long start =
+				arm_kasan_mem_to_shadow(VMALLOC_START);
+			unsigned long end =
+				arm_kasan_mem_to_shadow(VMALLOC_END);
+			memcpy_pgd(mm, start, end);
+		}
 		/*
 		 * Use a store-release so that other CPUs that observe the
 		 * counter's new value are guaranteed to see the results of the