From patchwork Wed Jan 26 17:30:03 2022
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12725507
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu, linux-hardening@vger.kernel.org,
    Ard Biesheuvel, Will Deacon, Marc Zyngier, Fuad Tabba,
    Quentin Perret, Mark Rutland, James Morse, Catalin Marinas
Subject: [RFC PATCH 04/12] arm64: mm: remap PGD pages r/o in the linear region after allocation
Date: Wed, 26 Jan 2022 18:30:03 +0100
Message-Id: <20220126173011.3476262-5-ardb@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220126173011.3476262-1-ardb@kernel.org>
References: <20220126173011.3476262-1-ardb@kernel.org>
MIME-Version: 1.0
X-Mailing-List: linux-hardening@vger.kernel.org

As the first step in
restricting write access to all page tables via the linear mapping, remap
the page at the root PGD level of a user space page table hierarchy
read-only after allocation, so that it can only be manipulated using the
dedicated fixmap based API.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/mmu.c |  7 +++++--
 arch/arm64/mm/pgd.c | 25 ++++++++++++++++++-------
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index acfae9b41cc8..a52c3162beae 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -394,8 +394,11 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
 	BUG_ON(!ptr);
 
-	/* Ensure the zeroed page is visible to the page table walker */
-	dsb(ishst);
+	if (page_tables_are_ro())
+		set_pgtable_ro(ptr);
+	else
+		/* Ensure the zeroed page is visible to the page table walker */
+		dsb(ishst);
 
 	return __pa(ptr);
 }
diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
index 4a64089e5771..637d6eceeada 100644
--- a/arch/arm64/mm/pgd.c
+++ b/arch/arm64/mm/pgd.c
@@ -9,8 +9,10 @@
 #include
 #include
 #include
+#include
 #include
 
+#include
 #include
 #include
 #include
@@ -20,24 +22,33 @@ static struct kmem_cache *pgd_cache __ro_after_init;
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	gfp_t gfp = GFP_PGTABLE_USER;
+	pgd_t *pgd;
 
-	if (PGD_SIZE == PAGE_SIZE)
-		return (pgd_t *)__get_free_page(gfp);
-	else
+	if (PGD_SIZE < PAGE_SIZE && !page_tables_are_ro())
 		return kmem_cache_alloc(pgd_cache, gfp);
+
+	pgd = (pgd_t *)__get_free_page(gfp);
+	if (!pgd)
+		return NULL;
+	if (page_tables_are_ro())
+		set_pgtable_ro(pgd);
+	return pgd;
 }
 
 void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	if (PGD_SIZE == PAGE_SIZE)
-		free_page((unsigned long)pgd);
-	else
+	if (PGD_SIZE < PAGE_SIZE && !page_tables_are_ro()) {
 		kmem_cache_free(pgd_cache, pgd);
+	} else {
+		if (page_tables_are_ro())
+			set_pgtable_rw(pgd);
+		free_page((unsigned long)pgd);
+	}
 }
 
 void __init pgtable_cache_init(void)
 {
-	if (PGD_SIZE == PAGE_SIZE)
+	if (PGD_SIZE == PAGE_SIZE || page_tables_are_ro())
 		return;
 
 #ifdef CONFIG_ARM64_PA_BITS_52
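
The page_tables_are_ro(), set_pgtable_ro() and set_pgtable_rw() helpers used
above are introduced by an earlier patch in this series and are not shown
here. To illustrate what the commit message means by manipulating the PGD
through "the dedicated fixmap based API": once the PGD page's linear-map
alias is read-only, an entry has to be written through a temporary writable
alias instead. Below is a minimal sketch of that pattern, built on the
existing pgd_set_fixmap()/pgd_clear_fixmap() helpers from
arch/arm64/include/asm/pgtable.h; the function name is hypothetical and this
is not the actual helper added by the series.

#include <linux/mm.h>
#include <linux/pgtable.h>
#include <asm/barrier.h>
#include <asm/fixmap.h>

/*
 * Hypothetical sketch, not code from this series: with the PGD page mapped
 * read-only in the linear region, update an entry through a temporary
 * writable alias created at the FIX_PGD fixmap slot instead of storing
 * through the (now read-only) linear mapping.
 */
static void set_pgd_via_fixmap_sketch(pgd_t *pgdp, pgd_t entry)
{
	/* map the page containing pgdp at FIX_PGD; the in-page offset is kept */
	pgd_t *alias = pgd_set_fixmap(__pa(pgdp));

	WRITE_ONCE(*alias, entry);
	dsb(ishst);	/* make the update visible to the page table walker */

	pgd_clear_fixmap();
}

The effect of the scheme is that a stray write to a PGD page through the
linear map now faults, while legitimate updates still have a writable path
via the fixmap.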