From patchwork Thu Feb 10 22:41:42 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12742495
Date: Thu, 10 Feb 2022 14:41:42 -0800
In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com>
Message-Id: <20220210224220.4076151-2-kaleshsingh@google.com>
References: <20220210224220.4076151-1-kaleshsingh@google.com>
Subject: [PATCH 1/7] KVM: arm64: Map the stack pages in the 'private' range
From: Kalesh Singh
Cc:
will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh,
 Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Ard Biesheuvel, Mark Rutland, Pasha Tatashin, Joey Gouly,
 Peter Collingbourne, Andrew Walbran, Andrew Scull, Paolo Bonzini,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu

From: Quentin Perret

In preparation for introducing guard pages for the stacks, map them in
the 'private' range of the EL2 VA space in which the VA to PA relation
is flexible when running in protected mode.
Signed-off-by: Quentin Perret <qperret@google.com>
[Kalesh - Refactor, add comments, resolve conflicts, use
 __pkvm_create_private_mapping()]
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/kvm/hyp/nvhe/setup.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 27af337f9fea..99e178cf4249 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -105,11 +105,19 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 		if (ret)
 			return ret;
 
-		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
+		/* Map stack pages in the 'private' VA range */
+		end = (void *)__hyp_pa(per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va);
 		start = end - PAGE_SIZE;
-		ret = pkvm_create_mappings(start, end, PAGE_HYP);
-		if (ret)
-			return ret;
+		start = (void *)__pkvm_create_private_mapping((phys_addr_t)start,
+							      PAGE_SIZE, PAGE_HYP);
+		if (IS_ERR_OR_NULL(start))
+			return PTR_ERR(start);
+		end = start + PAGE_SIZE;
+		/*
+		 * Update stack_hyp_va to the end of the stack page's
+		 * allocated 'private' VA range.
+		 */
+		per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va = (unsigned long) end;
 	}
 
 	/*

From patchwork Thu Feb 10 22:41:43 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12742496
Date: Thu, 10 Feb 2022 14:41:43 -0800
In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com>
Message-Id: <20220210224220.4076151-3-kaleshsingh@google.com>
References: <20220210224220.4076151-1-kaleshsingh@google.com>
Subject: [PATCH 2/7] KVM: arm64: Factor out private range VA allocation
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh,
 Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Ard Biesheuvel, Mark Rutland, Pasha Tatashin, Joey Gouly,
 Peter Collingbourne, Andrew Walbran, Andrew Scull, Paolo Bonzini,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu

From: Quentin Perret

__pkvm_create_private_mapping() is currently responsible for allocating
VA space in the hypervisor's "private" range and creating stage-1
mappings. In order to allow reusing the VA space allocation logic from
other places, let's factor it out into a standalone function. This will
be used to allocate private VA ranges for hypervisor stack guard pages
in a subsequent patch in this series.
Signed-off-by: Quentin Perret <qperret@google.com>
[Kalesh - Resolve conflicts and make hyp_alloc_private_va_range
 available outside nvhe/mm.c, update commit message]
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/mm.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mm.c         | 28 +++++++++++++++++++---------
 2 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 2d08510c6cc1..f53fb0e406db 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -21,6 +21,7 @@ int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot);
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot);
+unsigned long hyp_alloc_private_va_range(size_t size);
 
 static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
 				     unsigned long *start, unsigned long *end)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 526a7d6fa86f..e196441e072f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -37,6 +37,22 @@ static int __pkvm_create_mappings(unsigned long start, unsigned long size,
 	return err;
 }
 
+unsigned long hyp_alloc_private_va_range(size_t size)
+{
+	unsigned long addr = __io_map_base;
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+	__io_map_base += PAGE_ALIGN(size);
+
+	/* Are we overflowing on the vmemmap ? */
+	if (__io_map_base > __hyp_vmemmap) {
+		__io_map_base = addr;
+		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	}
+
+	return addr;
+}
+
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot)
 {
@@ -45,16 +61,10 @@ unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 
 	hyp_spin_lock(&pkvm_pgd_lock);
 
-	size = PAGE_ALIGN(size + offset_in_page(phys));
-	addr = __io_map_base;
-	__io_map_base += size;
-
-	/* Are we overflowing on the vmemmap ? */
-	if (__io_map_base > __hyp_vmemmap) {
-		__io_map_base -= size;
-		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	size = size + offset_in_page(phys);
+	addr = hyp_alloc_private_va_range(size);
+	if (IS_ERR((void *)addr))
 		goto out;
-	}
 
 	err = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, size, phys, prot);
 	if (err) {

From patchwork Thu Feb 10 22:41:44 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12742497
Date: Thu, 10 Feb 2022 14:41:44 -0800
In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com>
Message-Id: <20220210224220.4076151-4-kaleshsingh@google.com>
References: <20220210224220.4076151-1-kaleshsingh@google.com>
Subject: [PATCH 3/7] arm64: asm: Introduce test_sp_overflow macro
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh,
 Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Ard Biesheuvel, Mark Rutland, Pasha Tatashin, Joey Gouly,
 Peter Collingbourne, Andrew Walbran, Andrew Scull,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu

From: Quentin Perret

The asm entry code in the kernel uses a trick to check if VMAP'd stacks
have overflowed by aligning them at THREAD_SIZE * 2 granularity and
checking the SP's THREAD_SHIFT bit.
Protected KVM will soon make use of a similar trick to detect stack
overflows, so factor out the asm code into a re-usable macro.

Signed-off-by: Quentin Perret <qperret@google.com>
[Kalesh - Resolve minor conflicts]
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/include/asm/assembler.h | 11 +++++++++++
 arch/arm64/kernel/entry.S          |  9 ++-------
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e8bd0af0141c..ad40eb0eee83 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -850,4 +850,15 @@ alternative_endif
 
 #endif /* GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT */
 
+/*
+ * Test whether the SP has overflowed, without corrupting a GPR.
+ */
+.macro test_sp_overflow shift, label
+	add	sp, sp, x0			// sp' = sp + x0
+	sub	x0, sp, x0			// x0' = sp' - x0 = (sp + x0) - x0 = sp
+	tbnz	x0, #\shift, \label
+	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
+	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
+.endm
+
 #endif /* __ASM_ASSEMBLER_H */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 772ec2ecf488..2632bc47b348 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -53,16 +53,11 @@ alternative_else_nop_endif
 	sub	sp, sp, #PT_REGS_SIZE
 #ifdef CONFIG_VMAP_STACK
 	/*
-	 * Test whether the SP has overflowed, without corrupting a GPR.
 	 * Task and IRQ stacks are aligned so that SP & (1 << THREAD_SHIFT)
 	 * should always be zero.
 	 */
-	add	sp, sp, x0			// sp' = sp + x0
-	sub	x0, sp, x0			// x0' = sp' - x0 = (sp + x0) - x0 = sp
-	tbnz	x0, #THREAD_SHIFT, 0f
-	sub	x0, sp, x0			// x0'' = sp' - x0' = (sp + x0) - sp = x0
-	sub	sp, sp, x0			// sp'' = sp' - x0 = (sp + x0) - x0 = sp
-	b	el\el\ht\()_\regsize\()_\label
+	test_sp_overflow THREAD_SHIFT, 0f
+	b	el\el\ht\()_\regsize\()_\label
 0:
 	/*

From patchwork Thu Feb 10 22:41:45 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12742498
Date: Thu, 10 Feb 2022 14:41:45 -0800
In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com>
Message-Id: <20220210224220.4076151-5-kaleshsingh@google.com>
References: <20220210224220.4076151-1-kaleshsingh@google.com>
Subject: [PATCH 4/7] KVM: arm64: Allocate guard pages near hyp stacks
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh,
 Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Ard Biesheuvel, Mark Rutland, Pasha Tatashin, Joey Gouly,
 Peter Collingbourne, Andrew Scull,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu

From: Quentin Perret

Allocate unbacked VA space underneath each stack page to ensure stack
overflows get trapped and don't corrupt memory silently.

The stack is PAGE_SIZE and is aligned to twice its size, meaning that
any valid stack address has its PAGE_SHIFT bit clear. This allows us to
easily check for overflow in the exception entry without corrupting any
GPRs.
Signed-off-by: Quentin Perret <qperret@google.com>
[Kalesh - Update commit text and comments, refactor, add overflow
 handling]
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm64/kvm/hyp/nvhe/host.S   | 16 ++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c  | 19 ++++++++++++++++++-
 arch/arm64/kvm/hyp/nvhe/switch.c |  5 +++++
 3 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 3d613e721a75..78e4b612ac06 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -153,6 +153,10 @@ SYM_FUNC_END(__host_hvc)
 .macro invalid_host_el2_vect
 	.align 7
+
+	/* Test stack overflow without corrupting GPRs */
+	test_sp_overflow PAGE_SHIFT, .L__hyp_sp_overflow\@
+
 	/* If a guest is loaded, panic out of it. */
 	stp	x0, x1, [sp, #-16]!
 	get_loaded_vcpu x0, x1
@@ -165,6 +169,18 @@ SYM_FUNC_END(__host_hvc)
 	 * been partially clobbered by __host_enter.
 	 */
 	b	hyp_panic
+
+.L__hyp_sp_overflow\@:
+	/*
+	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
+	 * This corrupts the stack but is ok, since we won't be attempting
+	 * any unwinding here.
+	 */
+	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
+	mov	sp, x0
+
+	bl	hyp_panic_bad_stack
+	ASM_BUG()
 .endm
 
 .macro invalid_host_el1_vect
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 99e178cf4249..114053dff228 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -105,7 +105,24 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 		if (ret)
 			return ret;
 
-		/* Map stack pages in the 'private' VA range */
+		/*
+		 * Allocate 'private' VA range for stack guard pages.
+		 *
+		 * The 'private' VA range grows upward and stacks downwards, so
+		 * allocate the guard page first. But make sure to align the
+		 * stack itself with PAGE_SIZE * 2 granularity to ease overflow
+		 * detection in the entry assembly code.
+		 */
+		do {
+			start = (void *)hyp_alloc_private_va_range(PAGE_SIZE);
+			if (IS_ERR(start))
+				return PTR_ERR(start);
+		} while (IS_ALIGNED((u64) start, PAGE_SIZE * 2));
+
+		/*
+		 * Map stack pages in the 'private' VA range above the
+		 * allocated guard pages.
+		 */
 		end = (void *)__hyp_pa(per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va);
 		start = end - PAGE_SIZE;
 		start = (void *)__pkvm_create_private_mapping((phys_addr_t)start,
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6410d21d8695..5a2e1ab79913 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -369,6 +369,11 @@ void __noreturn hyp_panic(void)
 	unreachable();
 }
 
+void __noreturn hyp_panic_bad_stack(void)
+{
+	hyp_panic();
+}
+
 asmlinkage void kvm_unexpected_el2_exception(void)
 {
 	return __kvm_unexpected_el2_exception();

From patchwork Thu Feb 10 22:41:46 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12742499
8+vs8RmEJoeUZG8be+4sxHgxb3BZrbBGt48lWiqehs9fk7Gaep+lwdHSWbL9OK5ZgpJb 7xIQ== X-Gm-Message-State: AOAM531k8pSAGddLju/qHYzAwDUw8YRckYCHf9eM5cZrDRBrfpUdYODD CbrpawzQgAYCLeiq6GeMiIlZyMQdXonGrPFrqQ== X-Google-Smtp-Source: ABdhPJxltTVLo3VOurL3QnroR1cp79bh23NqnvdKXTYN3qVmilP7Lx3u//hZdXpNdmMvGA/0NZZ08lArO17mKqkodg== X-Received: from kaleshsingh.mtv.corp.google.com ([2620:15c:211:200:8f02:232:ad86:2ab2]) (user=kaleshsingh job=sendgmr) by 2002:a81:3494:: with SMTP id b142mr9302016ywa.246.1644533161146; Thu, 10 Feb 2022 14:46:01 -0800 (PST) Date: Thu, 10 Feb 2022 14:41:46 -0800 In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com> Message-Id: <20220210224220.4076151-6-kaleshsingh@google.com> Mime-Version: 1.0 References: <20220210224220.4076151-1-kaleshsingh@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH 5/7] KVM: arm64: Add Hyp overflow stack From: Kalesh Singh Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com, surenb@google.com, kernel-team@android.com, Kalesh Singh , Catalin Marinas , James Morse , Alexandru Elisei , Suzuki K Poulose , Ard Biesheuvel , Mark Rutland , Pasha Tatashin , Joey Gouly , Peter Collingbourne , Andrew Scull , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220210_144602_782161_A3BD3406 X-CRM114-Status: GOOD ( 12.84 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Allocate and switch to 16-byte aligned secondary stack on overflow. This provides us stack space to better handle overflows; and is used in a subsequent patch to dump the hypervisor stacktrace. 
The overflow stack is only allocated if CONFIG_NVHE_EL2_DEBUG is enabled,
as the hypervisor stacktrace is a debug feature dependent on
CONFIG_NVHE_EL2_DEBUG.

Signed-off-by: Kalesh Singh
---
 arch/arm64/kvm/hyp/nvhe/host.S  | 5 +++++
 arch/arm64/kvm/hyp/nvhe/setup.c | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 78e4b612ac06..751a4b9e429f 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -171,6 +171,10 @@ SYM_FUNC_END(__host_hvc)
 	b	hyp_panic

.L__hyp_sp_overflow\@:
+#ifdef CONFIG_NVHE_EL2_DEBUG
+	/* Switch to the overflow stack */
+	adr_this_cpu sp, hyp_overflow_stack + PAGE_SIZE, x0
+#else
 	/*
 	 * Reset SP to the top of the stack, to allow handling the hyp_panic.
 	 * This corrupts the stack but is ok, since we won't be attempting
@@ -178,6 +182,7 @@ SYM_FUNC_END(__host_hvc)
 	 */
 	ldr_this_cpu x0, kvm_init_params + NVHE_INIT_STACK_HYP_VA, x1
 	mov	sp, x0
+#endif

 	bl	hyp_panic_bad_stack
 	ASM_BUG()

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 114053dff228..39937fa6a1b2 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -20,6 +20,11 @@
 unsigned long hyp_nr_cpus;

+#ifdef CONFIG_NVHE_EL2_DEBUG
+DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], hyp_overflow_stack)
+	__aligned(16);
+#endif
+
 #define hyp_percpu_size ((unsigned long)__per_cpu_end - \
 			 (unsigned long)__per_cpu_start)

From patchwork Thu Feb 10 22:41:47 2022
Date: Thu, 10 Feb 2022 14:41:47 -0800
In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com>
Message-Id: <20220210224220.4076151-7-kaleshsingh@google.com>
References: <20220210224220.4076151-1-kaleshsingh@google.com>
Subject: [PATCH 6/7] KVM: arm64: Unwind and dump nVHE HYP stacktrace
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh, Catalin Marinas,
 James Morse, Alexandru Elisei, Suzuki K Poulose, Ard Biesheuvel,
 Mark Rutland, Pasha Tatashin, Joey Gouly, Peter Collingbourne,
 Andrew Walbran, Andrew Scull, Paolo Bonzini,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu
Unwind the stack in EL1 when CONFIG_NVHE_EL2_DEBUG is enabled. This is
possible because CONFIG_NVHE_EL2_DEBUG disables the host stage 2
protection, which allows the host to access the hypervisor stack pages
in EL1.

Unwinding and dumping hyp call traces is gated on CONFIG_NVHE_EL2_DEBUG
to avoid potentially leaking information to the host.

A simple stack overflow test produces the following output:

[  580.376051][  T412] kvm: nVHE hyp panic at: ffffffc0116145c4!
[  580.378034][  T412] kvm [412]: nVHE HYP call trace (vmlinux addresses):
[  580.378591][  T412] kvm [412]: []
[  580.378993][  T412] kvm [412]: []
[  580.379386][  T412] kvm [412]: []	// Non-terminating recursive call
[  580.379772][  T412] kvm [412]: []
[  580.380158][  T412] kvm [412]: []
[  580.380544][  T412] kvm [412]: []
[  580.380928][  T412] kvm [412]: []
   .
   .
   .

Since nVHE hyp symbols are not included by kallsyms to avoid issues with
aliasing, we fall back to the vmlinux addresses. Symbolizing the
addresses is handled in the next patch in this series.
Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/kvm_asm.h |  17 ++
 arch/arm64/kvm/Makefile          |   1 +
 arch/arm64/kvm/arm.c             |   2 +-
 arch/arm64/kvm/handle_exit.c     |   3 +
 arch/arm64/kvm/hyp/nvhe/setup.c  |  25 +++
 arch/arm64/kvm/hyp/nvhe/switch.c |  17 ++
 arch/arm64/kvm/stacktrace.c      | 290 +++++++++++++++++++++++++++++++
 arch/arm64/kvm/stacktrace.h      |  17 ++
 8 files changed, 371 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/stacktrace.c
 create mode 100644 arch/arm64/kvm/stacktrace.h

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index d5b0386ef765..f2b4c2ae5905 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -175,6 +175,23 @@ struct kvm_nvhe_init_params {
 	unsigned long vtcr;
 };

+#ifdef CONFIG_NVHE_EL2_DEBUG
+/*
+ * Used by the host in EL1 to dump the nVHE hypervisor backtrace on
+ * hyp_panic. This is possible because CONFIG_NVHE_EL2_DEBUG disables
+ * the host stage 2 protection. See: __hyp_do_panic()
+ *
+ * @hyp_stack_base:          hyp VA of the hyp_stack base.
+ * @hyp_overflow_stack_base: hyp VA of the hyp_overflow_stack base.
+ * @start_fp:                hyp FP where the hyp backtrace should begin.
+ */
+struct kvm_nvhe_panic_info {
+	unsigned long hyp_stack_base;
+	unsigned long hyp_overflow_stack_base;
+	unsigned long start_fp;
+};
+#endif
+
 /* Translate a kernel address @ptr into its equivalent linear mapping */
 #define kvm_ksym_ref(ptr)						\
 	({								\

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 91861fd8b897..262b5c58cc62 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -23,6 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
 	 vgic/vgic-its.o vgic/vgic-debug.o

 kvm-$(CONFIG_HW_PERF_EVENTS)  += pmu-emul.o
+kvm-$(CONFIG_NVHE_EL2_DEBUG) += stacktrace.o

 always-y := hyp_constants.h hyp-constants.s

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ecc5958e27fe..f779436919ad 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -49,7 +49,7 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);

 DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);

-static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index e3140abd2e2e..b038c32a3236 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -23,6 +23,7 @@

 #define CREATE_TRACE_POINTS
 #include "trace_handle_exit.h"
+#include "stacktrace.h"

 typedef int (*exit_handle_fn)(struct kvm_vcpu *);

@@ -326,6 +327,8 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 		kvm_err("nVHE hyp panic at: %016llx!\n", elr_virt + hyp_offset);
 	}

+	hyp_dump_backtrace(hyp_offset);
+
 	/*
 	 * Hyp has panicked and we're going to handle that by panicking the
 	 * kernel.
 	 * The kernel offset will be revealed in the panic so we're

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 39937fa6a1b2..3d7720d25acb 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -23,6 +23,29 @@ unsigned long hyp_nr_cpus;
 #ifdef CONFIG_NVHE_EL2_DEBUG
 DEFINE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], hyp_overflow_stack)
 	__aligned(16);
+
+DEFINE_PER_CPU(struct kvm_nvhe_panic_info, kvm_panic_info);
+
+static void init_nvhe_panic_info(void)
+{
+	struct kvm_nvhe_panic_info *panic_info;
+	struct kvm_nvhe_init_params *params;
+	int cpu;
+
+	for (cpu = 0; cpu < hyp_nr_cpus; cpu++) {
+		panic_info = per_cpu_ptr(&kvm_panic_info, cpu);
+		params = per_cpu_ptr(&kvm_init_params, cpu);
+
+		panic_info->hyp_stack_base = (unsigned long)(params->stack_hyp_va - PAGE_SIZE);
+		panic_info->hyp_overflow_stack_base
+			= (unsigned long)per_cpu_ptr(hyp_overflow_stack, cpu);
+		panic_info->start_fp = 0;
+	}
+}
+#else
+static inline void init_nvhe_panic_info(void)
+{
+}
 #endif

 #define hyp_percpu_size ((unsigned long)__per_cpu_end - \
@@ -140,6 +163,8 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	 * allocated 'private' VA range.
 	 */
 	per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va = (unsigned long) end;
+
+	init_nvhe_panic_info();
 }

 /*

diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 5a2e1ab79913..8f8cd0c02e1a 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -34,6 +34,21 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);

+#ifdef CONFIG_NVHE_EL2_DEBUG
+DECLARE_PER_CPU(struct kvm_nvhe_panic_info, kvm_panic_info);
+
+static void update_nvhe_panic_info_fp(void)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr(&kvm_panic_info);
+
+	panic_info->start_fp = (unsigned long)__builtin_frame_address(0);
+}
+#else
+static inline void update_nvhe_panic_info_fp(void)
+{
+}
+#endif
+
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
@@ -355,6 +370,8 @@ void __noreturn hyp_panic(void)
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_vcpu *vcpu;

+	update_nvhe_panic_info_fp();
+
 	host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
 	vcpu = host_ctxt->__hyp_running_vcpu;

diff --git a/arch/arm64/kvm/stacktrace.c b/arch/arm64/kvm/stacktrace.c
new file mode 100644
index 000000000000..3990a616ab66
--- /dev/null
+++ b/arch/arm64/kvm/stacktrace.c
@@ -0,0 +1,290 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Stack unwinder for EL2 nVHE hypervisor.
+ *
+ * Code mostly copied from the arm64 kernel stack unwinder
+ * and adapted to the nVHE hypervisor.
+ *
+ * See: arch/arm64/kernel/stacktrace.c
+ *
+ * CONFIG_NVHE_EL2_DEBUG disables the host stage-2 protection
+ * allowing us to access the hypervisor stack pages and
+ * consequently unwind its stack from the host in EL1.
+ *
+ * See: __hyp_do_panic()
+ */
+
+#include
+#include
+#include
+#include "stacktrace.h"
+
+DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [PAGE_SIZE/sizeof(long)], hyp_overflow_stack);
+DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_panic_info, kvm_panic_info);
+
+enum hyp_stack_type {
+	HYP_STACK_TYPE_UNKNOWN,
+	HYP_STACK_TYPE_HYP,
+	HYP_STACK_TYPE_OVERFLOW,
+	__NR_HYP_STACK_TYPES
+};
+
+struct hyp_stack_info {
+	unsigned long low;
+	unsigned long high;
+	enum hyp_stack_type type;
+};
+
+/*
+ * A snapshot of a frame record or fp/lr register values, along with some
+ * accounting information necessary for robust unwinding.
+ *
+ * @fp:          The fp value in the frame record (or the real fp)
+ * @pc:          The pc value calculated from lr in the frame record.
+ *
+ * @stacks_done: Stacks which have been entirely unwound, for which it is no
+ *               longer valid to unwind to.
+ *
+ * @prev_fp:     The fp that pointed to this frame record, or a synthetic value
+ *               of 0. This is used to ensure that within a stack, each
+ *               subsequent frame record is at an increasing address.
+ * @prev_type:   The type of stack this frame record was on, or a synthetic
+ *               value of HYP_STACK_TYPE_UNKNOWN. This is used to detect a
+ *               transition from one stack to another.
+ */
+struct hyp_stackframe {
+	unsigned long fp;
+	unsigned long pc;
+	DECLARE_BITMAP(stacks_done, __NR_HYP_STACK_TYPES);
+	unsigned long prev_fp;
+	enum hyp_stack_type prev_type;
+};
+
+static inline bool __on_hyp_stack(unsigned long hyp_sp, unsigned long size,
+				  unsigned long low, unsigned long high,
+				  enum hyp_stack_type type,
+				  struct hyp_stack_info *info)
+{
+	if (!low)
+		return false;
+
+	if (hyp_sp < low || hyp_sp + size < hyp_sp || hyp_sp + size > high)
+		return false;
+
+	if (info) {
+		info->low = low;
+		info->high = high;
+		info->type = type;
+	}
+	return true;
+}
+
+static inline bool on_hyp_overflow_stack(unsigned long hyp_sp, unsigned long size,
+					 struct hyp_stack_info *info)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long low = (unsigned long)panic_info->hyp_overflow_stack_base;
+	unsigned long high = low + PAGE_SIZE;
+
+	return __on_hyp_stack(hyp_sp, size, low, high, HYP_STACK_TYPE_OVERFLOW, info);
+}
+
+static inline bool on_hyp_stack(unsigned long hyp_sp, unsigned long size,
+				struct hyp_stack_info *info)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long low = (unsigned long)panic_info->hyp_stack_base;
+	unsigned long high = low + PAGE_SIZE;
+
+	return __on_hyp_stack(hyp_sp, size, low, high, HYP_STACK_TYPE_HYP, info);
+}
+
+static inline bool on_hyp_accessible_stack(unsigned long hyp_sp, unsigned long size,
+					   struct hyp_stack_info *info)
+{
+	if (info)
+		info->type = HYP_STACK_TYPE_UNKNOWN;
+
+	if (on_hyp_stack(hyp_sp, size, info))
+		return true;
+	if (on_hyp_overflow_stack(hyp_sp, size, info))
+		return true;
+
+	return false;
+}
+
+static unsigned long __hyp_stack_kern_va(unsigned long hyp_va)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long hyp_base, kern_base, hyp_offset;
+
+	hyp_base = (unsigned long)panic_info->hyp_stack_base;
+	hyp_offset = hyp_va - hyp_base;
+
+	kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
+
+	return kern_base + hyp_offset;
+}
+
+static unsigned long __hyp_overflow_stack_kern_va(unsigned long hyp_va)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	unsigned long hyp_base, kern_base, hyp_offset;
+
+	hyp_base = (unsigned long)panic_info->hyp_overflow_stack_base;
+	hyp_offset = hyp_va - hyp_base;
+
+	kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(hyp_overflow_stack);
+
+	return kern_base + hyp_offset;
+}
+
+/*
+ * Convert hypervisor stack VA to a kernel VA.
+ *
+ * The hypervisor stack is mapped in the flexible 'private' VA range, to allow
+ * for guard pages below the stack. Consequently, the fixed offset address
+ * translation macros won't work here.
+ *
+ * The kernel VA is calculated as an offset from the kernel VA of the hypervisor
+ * stack base. See: __hyp_stack_kern_va(), __hyp_overflow_stack_kern_va()
+ */
+static unsigned long hyp_stack_kern_va(unsigned long hyp_va,
+				       enum hyp_stack_type stack_type)
+{
+	switch (stack_type) {
+	case HYP_STACK_TYPE_HYP:
+		return __hyp_stack_kern_va(hyp_va);
+	case HYP_STACK_TYPE_OVERFLOW:
+		return __hyp_overflow_stack_kern_va(hyp_va);
+	default:
+		return 0UL;
+	}
+}
+
+/*
+ * Unwind from one frame record (A) to the next frame record (B).
+ *
+ * We terminate early if the location of B indicates a malformed chain of frame
+ * records (e.g. a cycle), determined based on the location and fp value of A
+ * and the location (but not the fp value) of B.
+ */
+static int notrace hyp_unwind_frame(struct hyp_stackframe *frame)
+{
+	unsigned long fp = frame->fp, fp_kern_va;
+	struct hyp_stack_info info;
+
+	if (fp & 0x7)
+		return -EINVAL;
+
+	if (!on_hyp_accessible_stack(fp, 16, &info))
+		return -EINVAL;
+
+	if (test_bit(info.type, frame->stacks_done))
+		return -EINVAL;
+
+	/*
+	 * As stacks grow downward, any valid record on the same stack must be
+	 * at a strictly higher address than the prior record.
+	 *
+	 * Stacks can nest in the following order:
+	 *
+	 * HYP -> OVERFLOW
+	 *
+	 * ... but the nesting itself is strict. Once we transition from one
+	 * stack to another, it's never valid to unwind back to that first
+	 * stack.
+	 */
+	if (info.type == frame->prev_type) {
+		if (fp <= frame->prev_fp)
+			return -EINVAL;
+	} else {
+		set_bit(frame->prev_type, frame->stacks_done);
+	}
+
+	/* Translate the hyp stack address to a kernel address */
+	fp_kern_va = hyp_stack_kern_va(fp, info.type);
+	if (!fp_kern_va)
+		return -EINVAL;
+
+	/*
+	 * Record this frame record's values and location. The prev_fp and
+	 * prev_type are only meaningful to the next hyp_unwind_frame()
+	 * invocation.
+	 */
+	frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp_kern_va));
+	/* PC = LR - 4; All aarch64 instructions are 32-bits in size */
+	frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp_kern_va + 8)) - 4;
+	frame->prev_fp = fp;
+	frame->prev_type = info.type;
+
+	return 0;
+}
+
+/*
+ * AArch64 PCS assigns the frame pointer to x29.
+ *
+ * A simple function prologue looks like this:
+ *	sub	sp, sp, #0x10
+ *	stp	x29, x30, [sp]
+ *	mov	x29, sp
+ *
+ * A simple function epilogue looks like this:
+ *	mov	sp, x29
+ *	ldp	x29, x30, [sp]
+ *	add	sp, sp, #0x10
+ */
+static void hyp_start_backtrace(struct hyp_stackframe *frame, unsigned long fp)
+{
+	frame->fp = fp;
+
+	/*
+	 * Prime the first unwind.
+	 *
+	 * In hyp_unwind_frame() we'll check that the FP points to a valid
+	 * stack, which can't be HYP_STACK_TYPE_UNKNOWN, and the first unwind
+	 * will be treated as a transition to whichever stack that happens to
+	 * be. The prev_fp value won't be used, but we set it to 0 such that
+	 * it is definitely not an accessible stack address. The first frame
+	 * (hyp_panic()) is skipped, so we also set PC to 0.
+	 */
+	bitmap_zero(frame->stacks_done, __NR_HYP_STACK_TYPES);
+	frame->pc = frame->prev_fp = 0;
+	frame->prev_type = HYP_STACK_TYPE_UNKNOWN;
+}
+
+static void hyp_dump_backtrace_entry(unsigned long hyp_pc, unsigned long hyp_offset)
+{
+	unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+
+	hyp_pc &= va_mask;
+	hyp_pc += hyp_offset;
+
+	kvm_err(" [<%016llx>]\n", hyp_pc);
+}
+
+void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+	struct kvm_nvhe_panic_info *panic_info = this_cpu_ptr_nvhe_sym(kvm_panic_info);
+	struct hyp_stackframe frame;
+	int frame_nr = 0;
+	int skip = 1;	/* Skip the first frame: hyp_panic() */
+
+	kvm_err("nVHE HYP call trace (vmlinux addresses):\n");
+
+	hyp_start_backtrace(&frame, (unsigned long)panic_info->start_fp);
+
+	do {
+		if (skip) {
+			skip--;
+			continue;
+		}
+
+		hyp_dump_backtrace_entry(frame.pc, hyp_offset);
+
+		frame_nr++;
+	} while (!hyp_unwind_frame(&frame));
+
+	kvm_err("---- end of nVHE HYP call trace ----\n");
+}

diff --git a/arch/arm64/kvm/stacktrace.h b/arch/arm64/kvm/stacktrace.h
new file mode 100644
index 000000000000..40c397394b9b
--- /dev/null
+++ b/arch/arm64/kvm/stacktrace.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Stack unwinder for EL2 nVHE hypervisor.
+ */
+
+#ifndef __KVM_HYP_STACKTRACE_H
+#define __KVM_HYP_STACKTRACE_H
+
+#ifdef CONFIG_NVHE_EL2_DEBUG
+void hyp_dump_backtrace(unsigned long hyp_offset);
+#else
+static inline void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+}
+#endif /* CONFIG_NVHE_EL2_DEBUG */
+
+#endif /* __KVM_HYP_STACKTRACE_H */

From patchwork Thu Feb 10 22:41:48 2022
Date: Thu, 10 Feb 2022 14:41:48 -0800
In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com>
Message-Id: <20220210224220.4076151-8-kaleshsingh@google.com>
References: <20220210224220.4076151-1-kaleshsingh@google.com>
Subject: [PATCH 7/7] KVM: arm64: Symbolize the nVHE HYP backtrace
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh, Catalin Marinas,
 James Morse, Alexandru Elisei, Suzuki K Poulose, Ard Biesheuvel,
 Mark Rutland, Pasha Tatashin, Joey Gouly, Peter Collingbourne,
 Andrew Walbran, Andrew Scull, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu

Reintroduce the __kvm_nvhe_ symbols in kallsyms, ignoring the local
symbols in this namespace. The local symbols are not informative and can
cause aliasing issues when symbolizing the addresses.

With the necessary symbols now in kallsyms, we can symbolize nVHE
stacktrace addresses using the %pB print format specifier.

Some sample call traces:

-------
[  167.018598][  T407] kvm [407]: nVHE hyp panic at: [] __kvm_nvhe_overflow_stack+0x10/0x34!
[  167.020841][  T407] kvm [407]: nVHE HYP call trace:
[  167.021371][  T407] kvm [407]: [] __kvm_nvhe_hyp_panic_bad_stack+0xc/0x10
[  167.021972][  T407] kvm [407]: [] __kvm_nvhe___kvm_hyp_host_vector+0x248/0x794
[  167.022572][  T407] kvm [407]: [] __kvm_nvhe_overflow_stack+0x20/0x34
[  167.023135][  T407] kvm [407]: [] __kvm_nvhe_overflow_stack+0x20/0x34
[  167.023699][  T407] kvm [407]: [] __kvm_nvhe_overflow_stack+0x20/0x34
[  167.024261][  T407] kvm [407]: [] __kvm_nvhe_overflow_stack+0x20/0x34
   .
   .
   .
-------
[  166.161699][  T409] kvm [409]: Invalid host exception to nVHE hyp!
[  166.163789][  T409] kvm [409]: nVHE HYP call trace:
[  166.164709][  T409] kvm [409]: [] __kvm_nvhe_handle___kvm_vcpu_run+0x198/0x21c
[  166.165352][  T409] kvm [409]: [] __kvm_nvhe_handle_trap+0xa4/0x124
[  166.165911][  T409] kvm [409]: [] __kvm_nvhe___host_exit+0x60/0x64
[  166.166657][  T409] Kernel panic - not syncing: HYP panic:
   .
   .
   .
-------

Signed-off-by: Kalesh Singh
---
 arch/arm64/kvm/handle_exit.c | 11 +++-------
 arch/arm64/kvm/stacktrace.c  |  2 +-
 scripts/kallsyms.c           |  2 +-
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index b038c32a3236..d7f0f295aebf 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -296,13 +296,8 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 	u64 elr_in_kimg = __phys_to_kimg(elr_phys);
 	u64 hyp_offset = elr_in_kimg - kaslr_offset() - elr_virt;
 	u64 mode = spsr & PSR_MODE_MASK;
+	u64 panic_addr = elr_virt + hyp_offset;

-	/*
-	 * The nVHE hyp symbols are not included by kallsyms to avoid issues
-	 * with aliasing. That means that the symbols cannot be printed with the
-	 * "%pS" format specifier, so fall back to the vmlinux address if
-	 * there's no better option.
-	 */
 	if (mode != PSR_MODE_EL2t && mode != PSR_MODE_EL2h) {
 		kvm_err("Invalid host exception to nVHE hyp!\n");
 	} else if (ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
@@ -322,9 +317,9 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
 		if (file)
 			kvm_err("nVHE hyp BUG at: %s:%u!\n", file, line);
 		else
-			kvm_err("nVHE hyp BUG at: %016llx!\n", elr_virt + hyp_offset);
+			kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr, panic_addr);
 	} else {
-		kvm_err("nVHE hyp panic at: %016llx!\n", elr_virt + hyp_offset);
+		kvm_err("nVHE hyp panic at: [<%016llx>] %pB!\n", panic_addr, panic_addr);
 	}

 	hyp_dump_backtrace(hyp_offset);

diff --git a/arch/arm64/kvm/stacktrace.c b/arch/arm64/kvm/stacktrace.c
index 3990a616ab66..4d12ffee9cc6 100644
--- a/arch/arm64/kvm/stacktrace.c
+++ b/arch/arm64/kvm/stacktrace.c
@@ -261,7 +261,7 @@ static void hyp_dump_backtrace_entry(unsigned long hyp_pc, unsigned long hyp_off
 	hyp_pc &= va_mask;
 	hyp_pc += hyp_offset;

-	kvm_err(" [<%016llx>]\n", hyp_pc);
+	kvm_err("[<%016llx>] %pB\n", hyp_pc, hyp_pc);
 }

 void hyp_dump_backtrace(unsigned long hyp_offset)

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 54ad86d13784..19aba43d9da4 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -111,7 +111,7 @@ static bool is_ignored_symbol(const char *name, char type)
 		".LASANPC",		/* s390 kasan local symbols */
 		"__crc_",		/* modversions */
 		"__efistub_",		/* arm64 EFI stub namespace */
-		"__kvm_nvhe_",		/* arm64 non-VHE KVM namespace */
+		"__kvm_nvhe_$",		/* arm64 local symbols in non-VHE KVM namespace */
 		"__AArch64ADRPThunk_",	/* arm64 lld */
 		"__ARMV5PILongThunk_",	/* arm lld */
 		"__ARMV7PILongThunk_",