From patchwork Tue Feb 22 16:51:04 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12755703
Date: Tue, 22 Feb 2022 08:51:04 -0800
In-Reply-To: <20220222165212.2005066-1-kaleshsingh@google.com>
Message-Id: <20220222165212.2005066-4-kaleshsingh@google.com>
References: <20220222165212.2005066-1-kaleshsingh@google.com>
Subject: [PATCH v2 3/9] KVM: arm64: Add guard pages for KVM nVHE hypervisor stack
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh, Catalin Marinas,
 James Morse, Alexandru Elisei, Suzuki K Poulose, Ard Biesheuvel,
 Mark Rutland, Pasha Tatashin, Joey Gouly, Peter Collingbourne,
 Andrew Scull, Paolo Bonzini, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu

Map the stack pages in the flexible private VA range and allocate the
guard pages below the stack as unbacked VA space. The stack is aligned
to twice its size to aid overflow detection (implemented in a subsequent
patch in the series).
Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/kvm_asm.h |  1 +
 arch/arm64/kvm/arm.c             | 32 +++++++++++++++++++++++++++++---
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index d5b0386ef765..2e277f2ed671 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -169,6 +169,7 @@ struct kvm_nvhe_init_params {
 	unsigned long tcr_el2;
 	unsigned long tpidr_el2;
 	unsigned long stack_hyp_va;
+	unsigned long stack_pa;
 	phys_addr_t pgd_pa;
 	unsigned long hcr_el2;
 	unsigned long vttbr;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ecc5958e27fe..7e2e680c3ffb 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1541,7 +1541,6 @@ static void cpu_prepare_hyp_mode(int cpu)
 	tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET;
 	params->tcr_el2 = tcr;
 
-	params->stack_hyp_va = kern_hyp_va(per_cpu(kvm_arm_hyp_stack_page, cpu) + PAGE_SIZE);
 	params->pgd_pa = kvm_mmu_get_httbr();
 	if (is_protected_kvm_enabled())
 		params->hcr_el2 = HCR_HOST_NVHE_PROTECTED_FLAGS;
@@ -1990,14 +1989,41 @@ static int init_hyp_mode(void)
 	 * Map the Hyp stack pages
 	 */
 	for_each_possible_cpu(cpu) {
+		struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
 		char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
-		err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE,
-					  PAGE_HYP);
+		unsigned long stack_hyp_va, guard_hyp_va;
+
+		/*
+		 * Private mappings are allocated downwards from io_map_base
+		 * so allocate the stack first then the guard page.
+		 *
+		 * The stack is aligned to twice its size to facilitate overflow
+		 * detection.
+		 */
+		err = __create_hyp_private_mapping(__pa(stack_page), PAGE_SIZE,
+						   PAGE_SIZE * 2, &stack_hyp_va, PAGE_HYP);
 		if (err) {
 			kvm_err("Cannot map hyp stack\n");
 			goto out_err;
 		}
+
+		/* Allocate unbacked private VA range for stack guard page */
+		guard_hyp_va = hyp_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
+		if (IS_ERR((void *)guard_hyp_va)) {
+			err = PTR_ERR((void *)guard_hyp_va);
+			kvm_err("Cannot allocate hyp stack guard page\n");
+			goto out_err;
+		}
+
+		/*
+		 * Save the stack PA in nvhe_init_params. This will be needed
+		 * to recreate the stack mapping in protected nVHE mode.
+		 * __hyp_pa() won't do the right thing there, since the stack
+		 * has been mapped in the flexible private VA space.
+		 */
+		params->stack_pa = __pa(stack_page) + PAGE_SIZE;
+
+		params->stack_hyp_va = stack_hyp_va + PAGE_SIZE;
 	}
 
 	for_each_possible_cpu(cpu) {