From patchwork Fri Nov 29 12:58:00 2024
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 13888651
Date: Fri, 29 Nov 2024 12:58:00 +0000
Message-ID: <20241129125800.992468-1-qperret@google.com>
Subject: [PATCH] KVM: arm64: Selftest for pKVM transitions
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, qperret@google.com

We have recently found a bug [1] in the pKVM memory ownership
transitions by code inspection, but it could have been caught with a
test.

Introduce a boot-time selftest exercising all the known pKVM memory
transitions and, importantly, checking that illegal transitions are
rejected. The new test is deliberately hidden behind a Kconfig option
separate from CONFIG_NVHE_EL2_DEBUG, as the latter has side effects on
the transition checks ([1] doesn't reproduce with EL2 debug enabled).

[1] https://lore.kernel.org/kvmarm/20241128154406.602875-1-qperret@google.com/

Suggested-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/Kconfig                        |  10 ++
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |   6 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 110 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c               |   2 +
 4 files changed, 128 insertions(+)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ead632ad01b4..038d7f52232c 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -46,6 +46,7 @@ menuconfig KVM
 config NVHE_EL2_DEBUG
         bool "Debug mode for non-VHE EL2 object"
         depends on KVM
+        select PKVM_SELFTESTS
         help
           Say Y here to enable the debug mode for the non-VHE KVM EL2 object.
           Failure reports will BUG() in the hypervisor. This is intended for
@@ -53,6 +54,15 @@ config NVHE_EL2_DEBUG
 
           If unsure, say N.
 
+config PKVM_SELFTESTS
+        bool "Protected KVM hypervisor selftests"
+        help
+          Say Y here to enable Protected KVM (pKVM) hypervisor selftests
+          during boot. Failure reports will panic the hypervisor. This is
+          intended for EL2 hypervisor development.
+
+          If unsure, say N.
+
 config PROTECTED_NVHE_STACKTRACE
         bool "Protected KVM hypervisor stacktraces"
         depends on NVHE_EL2_DEBUG
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 0972faccc2af..a9b2677227cc 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -90,4 +90,10 @@ static __always_inline void __load_host_stage2(void)
         else
                 write_sysreg(0, vttbr_el2);
 }
+
+#ifdef CONFIG_PKVM_SELFTESTS
+void pkvm_ownership_selftest(void);
+#else
+static inline void pkvm_ownership_selftest(void) { }
+#endif
 #endif /* __KVM_NVHE_MEM_PROTECT__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index e75374d682f4..6a01ffe3d117 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1306,3 +1306,113 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 
         return ret;
 }
+
+#ifdef CONFIG_PKVM_SELFTESTS
+struct pkvm_expected_state {
+        enum pkvm_page_state host;
+        enum pkvm_page_state hyp;
+};
+
+static struct pkvm_expected_state selftest_state;
+static struct hyp_page *selftest_page;
+
+static void assert_page_state(void)
+{
+        void *virt = hyp_page_to_virt(selftest_page);
+        u64 size = PAGE_SIZE << selftest_page->order;
+        u64 phys = hyp_virt_to_phys(virt);
+
+        host_lock_component();
+        WARN_ON(__host_check_page_state_range(phys, size, selftest_state.host));
+        host_unlock_component();
+
+        hyp_lock_component();
+        WARN_ON(__hyp_check_page_state_range((u64)virt, size, selftest_state.hyp));
+        hyp_unlock_component();
+}
+
+#define assert_transition_res(res, fn, ...)                     \
+        do {                                                    \
+                WARN_ON(fn(__VA_ARGS__) != res);                \
+                assert_page_state();                            \
+        } while (0)
+
+void pkvm_ownership_selftest(void)
+{
+        void *virt = hyp_alloc_pages(&host_s2_pool, 0);
+        u64 phys, size, pfn;
+
+        WARN_ON(!virt);
+        selftest_page = hyp_virt_to_page(virt);
+        selftest_page->refcount = 0;
+
+        size = PAGE_SIZE << selftest_page->order;
+        phys = hyp_virt_to_phys(virt);
+        pfn = hyp_phys_to_pfn(phys);
+
+        selftest_state.host = PKVM_NOPAGE;
+        selftest_state.hyp = PKVM_PAGE_OWNED;
+        assert_page_state();
+        assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1);
+        assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);
+
+        selftest_state.host = PKVM_PAGE_OWNED;
+        selftest_state.hyp = PKVM_NOPAGE;
+        assert_transition_res(0, __pkvm_hyp_donate_host, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
+        assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);
+
+        selftest_state.host = PKVM_PAGE_SHARED_OWNED;
+        selftest_state.hyp = PKVM_PAGE_SHARED_BORROWED;
+        assert_transition_res(0, __pkvm_host_share_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
+
+        assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size);
+        WARN_ON(!hyp_page_count(virt));
+        assert_transition_res(-EBUSY, __pkvm_host_unshare_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
+
+        hyp_unpin_shared_mem(virt, virt + size);
+        assert_page_state();
+        WARN_ON(hyp_page_count(virt));
+        assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
+
+        selftest_state.host = PKVM_PAGE_OWNED;
+        selftest_state.hyp = PKVM_NOPAGE;
+        assert_transition_res(0, __pkvm_host_unshare_hyp, pfn);
+
+        selftest_state.host = PKVM_PAGE_SHARED_OWNED;
+        selftest_state.hyp = PKVM_NOPAGE;
+        assert_transition_res(0, __pkvm_host_share_ffa, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1);
+        assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn);
+        assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
+        assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);
+
+        selftest_state.host = PKVM_PAGE_OWNED;
+        selftest_state.hyp = PKVM_NOPAGE;
+        assert_transition_res(0, __pkvm_host_unshare_ffa, pfn, 1);
+
+        selftest_state.host = PKVM_NOPAGE;
+        selftest_state.hyp = PKVM_PAGE_OWNED;
+        assert_transition_res(0, __pkvm_host_donate_hyp, pfn, 1);
+
+        selftest_page->refcount = 1;
+        hyp_put_page(&host_s2_pool, virt);
+}
+#endif
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index cbdd18cd3f98..d154e80fe6b9 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -306,6 +306,8 @@ void __noreturn __pkvm_init_finalise(void)
                 goto out;
 
         pkvm_hyp_vm_table_init(vm_table_base);
+
+        pkvm_ownership_selftest();
 out:
         /*
          * We tail-called to here from handle___pkvm_init() and will not return,
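
Note (not part of the patch): the standalone, userspace-only sketch below
illustrates the testing pattern used in pkvm_ownership_selftest() above:
keep a record of the expected state, drive a transition, then assert both
the return code and the resulting state. The toy_state enum and the
toy_share_hyp()/toy_unshare_hyp() helpers are made up for illustration and
model only the host<->hyp share/unshare pair.

/*
 * Illustrative sketch of the "expected state + assert after every
 * transition" pattern, compilable with: gcc -Wall toy_transitions.c
 */
#include <assert.h>
#include <errno.h>
#include <stdio.h>

enum toy_state { TOY_OWNED, TOY_SHARED_OWNED };

static enum toy_state host_state = TOY_OWNED;

/* Toy stand-ins for __pkvm_host_share_hyp()/__pkvm_host_unshare_hyp(). */
static int toy_share_hyp(void)
{
        if (host_state != TOY_OWNED)
                return -EPERM;          /* illegal transition is rejected */
        host_state = TOY_SHARED_OWNED;
        return 0;
}

static int toy_unshare_hyp(void)
{
        if (host_state != TOY_SHARED_OWNED)
                return -EPERM;
        host_state = TOY_OWNED;
        return 0;
}

/* Same shape as assert_transition_res(): check ret, then re-check state. */
#define assert_transition_res(res, fn, expected)                \
        do {                                                    \
                assert(fn() == (res));                          \
                assert(host_state == (expected));               \
        } while (0)

int main(void)
{
        assert_transition_res(0,      toy_share_hyp,   TOY_SHARED_OWNED);
        assert_transition_res(-EPERM, toy_share_hyp,   TOY_SHARED_OWNED);
        assert_transition_res(0,      toy_unshare_hyp, TOY_OWNED);
        assert_transition_res(-EPERM, toy_unshare_hyp, TOY_OWNED);
        printf("toy transition checks passed\n");
        return 0;
}

The real selftest differs in that it checks the recorded page state on both
the host and hyp sides under the respective locks, rather than a single
global variable, and reports failures with WARN_ON() at EL2 instead of
asserting in userspace.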