From patchwork Wed Oct 13 15:58:16 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556223
Date: Wed, 13 Oct 2021 16:58:16 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-2-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 01/16] KVM: arm64: Introduce do_share() helper for memory sharing between components
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

In preparation for extending memory sharing to include the guest as well
as the hypervisor and the host, introduce a high-level do_share() helper
which allows memory to be shared between these components without
duplication of validity checks.
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |   5 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 315 ++++++++++++++++++
 2 files changed, 320 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index b58c910babaf..56445586c755 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,6 +24,11 @@ enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
 	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
 	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
+	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
+					  KVM_PGTABLE_PROT_SW1,
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE,
 };
 
 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index bacd493a4eac..53e503501044 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -443,3 +443,318 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	ret = host_stage2_idmap(addr);
 	BUG_ON(ret && ret != -EAGAIN);
 }
+
+/* This corresponds to locking order */
+enum pkvm_component_id {
+	PKVM_ID_HOST,
+	PKVM_ID_HYP,
+};
+
+struct pkvm_mem_transition {
+	u64				nr_pages;
+
+	struct {
+		enum pkvm_component_id	id;
+		u64			addr;
+
+		union {
+			struct {
+				u64	completer_addr;
+			} host;
+		};
+	} initiator;
+
+	struct {
+		enum pkvm_component_id	id;
+	} completer;
+};
+
+struct pkvm_mem_share {
+	struct pkvm_mem_transition	tx;
+	enum kvm_pgtable_prot		prot;
+};
+
+struct pkvm_page_req {
+	struct {
+		enum pkvm_page_state	state;
+		u64			addr;
+	} initiator;
+
+	struct {
+		u64			addr;
+	} completer;
+
+	phys_addr_t			phys;
+};
+
+struct pkvm_page_share_ack {
+	struct {
+		enum pkvm_page_state	state;
+		phys_addr_t		phys;
+		enum kvm_pgtable_prot	prot;
+	} completer;
+};
+
+static void host_lock_component(void)
+{
+	hyp_spin_lock(&host_kvm.lock);
+}
+
+static void host_unlock_component(void)
+{
+	hyp_spin_unlock(&host_kvm.lock);
+}
+
+static void hyp_lock_component(void)
+{
+	hyp_spin_lock(&pkvm_pgd_lock);
+}
+
+static void hyp_unlock_component(void)
+{
+	hyp_spin_unlock(&pkvm_pgd_lock);
+}
+
+static int host_request_share(struct pkvm_page_req *req,
+			      struct pkvm_mem_transition *tx,
+			      u64 idx)
+{
+	u64 offset = idx * PAGE_SIZE;
+	enum kvm_pgtable_prot prot;
+	u64 host_addr;
+	kvm_pte_t pte;
+	int err;
+
+	hyp_assert_lock_held(&host_kvm.lock);
+
+	host_addr = tx->initiator.addr + offset;
+	err = kvm_pgtable_get_leaf(&host_kvm.pgt, host_addr, &pte, NULL);
+	if (err)
+		return err;
+
+	if (!kvm_pte_valid(pte) && pte)
+		return -EPERM;
+
+	prot = kvm_pgtable_stage2_pte_prot(pte);
+	*req = (struct pkvm_page_req) {
+		.initiator	= {
+			.state	= pkvm_getstate(prot),
+			.addr	= host_addr,
+		},
+		.completer	= {
+			.addr	= tx->initiator.host.completer_addr + offset,
+		},
+		.phys		= host_addr,
+	};
+
+	return 0;
+}
+
+/*
+ * Populate the page-sharing request (@req) based on the share transition
+ * information from the initiator and its current page state.
+ */
+static int request_share(struct pkvm_page_req *req,
+			 struct pkvm_mem_share *share,
+			 u64 idx)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		return host_request_share(req, tx, idx);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hyp_ack_share(struct pkvm_page_share_ack *ack,
+			 struct pkvm_page_req *req,
+			 enum kvm_pgtable_prot perms)
+{
+	enum pkvm_page_state state = PKVM_NOPAGE;
+	enum kvm_pgtable_prot prot = 0;
+	phys_addr_t phys = 0;
+	kvm_pte_t pte;
+	u64 hyp_addr;
+	int err;
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+
+	if (perms != PAGE_HYP)
+		return -EPERM;
+
+	hyp_addr = req->completer.addr;
+	err = kvm_pgtable_get_leaf(&pkvm_pgtable, hyp_addr, &pte, NULL);
+	if (err)
+		return err;
+
+	if (kvm_pte_valid(pte)) {
+		state	= pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
+		phys	= kvm_pte_to_phys(pte);
+		prot	= kvm_pgtable_hyp_pte_prot(pte) & KVM_PGTABLE_PROT_RWX;
+	}
+
+	*ack = (struct pkvm_page_share_ack) {
+		.completer	= {
+			.state	= state,
+			.phys	= phys,
+			.prot	= prot,
+		},
+	};
+
+	return 0;
+}
+
+/*
+ * Populate the page-sharing acknowledgment (@ack) based on the sharing request
+ * from the initiator and the current page state in the completer.
+ */
+static int ack_share(struct pkvm_page_share_ack *ack,
+		     struct pkvm_page_req *req,
+		     struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_ack_share(ack, req, share->prot);
+	default:
+		return -EINVAL;
+	}
+}
+
+/*
+ * Check that the page states in the initiator and the completer are compatible
+ * for the requested page-sharing operation to go ahead.
+ */
+static int check_share(struct pkvm_page_req *req,
+		       struct pkvm_page_share_ack *ack,
+		       struct pkvm_mem_share *share)
+{
+	if (!addr_is_memory(req->phys))
+		return -EINVAL;
+
+	if (req->initiator.state == PKVM_PAGE_OWNED &&
+	    ack->completer.state == PKVM_NOPAGE) {
+		return 0;
+	}
+
+	if (req->initiator.state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+
+	if (ack->completer.state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	if (ack->completer.phys != req->phys)
+		return -EPERM;
+
+	if (ack->completer.prot != share->prot)
+		return -EPERM;
+
+	return 0;
+}
+
+static int host_initiate_share(struct pkvm_page_req *req)
+{
+	enum kvm_pgtable_prot prot;
+
+	prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
+	return host_stage2_idmap_locked(req->initiator.addr, PAGE_SIZE, prot);
+}
+
+/* Update the initiator's page-table for the page-sharing request */
+static int initiate_share(struct pkvm_page_req *req,
+			  struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		return host_initiate_share(req);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hyp_complete_share(struct pkvm_page_req *req,
+			      enum kvm_pgtable_prot perms)
+{
+	void *start = (void *)req->completer.addr, *end = start + PAGE_SIZE;
+	enum kvm_pgtable_prot prot;
+
+	prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED);
+	return pkvm_create_mappings_locked(start, end, prot);
+}
+
+/* Update the completer's page-table for the page-sharing request */
+static int complete_share(struct pkvm_page_req *req,
+			  struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_complete_share(req, share->prot);
+	default:
+		return -EINVAL;
+	}
+}
+
+/*
+ * do_share():
+ *
+ * The page owner grants access to another component with a given set
+ * of permissions.
+ *
+ * Initiator: OWNED	=> SHARED_OWNED
+ * Completer: NOPAGE	=> SHARED_BORROWED
+ *
+ * Note that we permit the same share operation to be repeated from the
+ * host to the hypervisor, as this removes the need for excessive
+ * book-keeping of shared KVM data structures at EL1.
+ */
+static int do_share(struct pkvm_mem_share *share)
+{
+	struct pkvm_page_req req;
+	int ret = 0;
+	u64 idx;
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		struct pkvm_page_share_ack ack;
+
+		ret = request_share(&req, share, idx);
+		if (ret)
+			goto out;
+
+		ret = ack_share(&ack, &req, share);
+		if (ret)
+			goto out;
+
+		ret = check_share(&req, &ack, share);
+		if (ret)
+			goto out;
+	}
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		ret = request_share(&req, share, idx);
+		if (ret)
+			break;
+
+		/* Allow double-sharing by skipping over the page */
+		if (req.initiator.state == PKVM_PAGE_SHARED_OWNED)
+			continue;
+
+		ret = initiate_share(&req, share);
+		if (ret)
+			break;
+
+		ret = complete_share(&req, share);
+		if (ret)
+			break;
+	}
+
+	WARN_ON(ret);
+out:
+	return ret;
+}

From patchwork Wed Oct 13 15:58:17 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556221
Date: Wed, 13 Oct 2021 16:58:17 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-3-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 02/16] KVM: arm64: Implement __pkvm_host_share_hyp() using do_share()
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com
From: Will Deacon

__pkvm_host_share_hyp() shares memory between the host and the
hypervisor, so implement it as an invocation of the new do_share()
mechanism.

Note that the new semantics are slightly stricter than before, as we now
validate the physical address when double-sharing a page. However, this
makes no functional difference as long as no other transitions are
supported and the host can only share pages by pfn.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 121 +++++++-------------------
 1 file changed, 33 insertions(+), 88 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 53e503501044..6983b83f799f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -342,94 +342,6 @@ static int host_stage2_idmap(u64 addr)
 	return ret;
 }
 
-static inline bool check_prot(enum kvm_pgtable_prot prot,
-			      enum kvm_pgtable_prot required,
-			      enum kvm_pgtable_prot denied)
-{
-	return (prot & (required | denied)) == required;
-}
-
-int __pkvm_host_share_hyp(u64 pfn)
-{
-	phys_addr_t addr = hyp_pfn_to_phys(pfn);
-	enum kvm_pgtable_prot prot, cur;
-	void *virt = __hyp_va(addr);
-	enum pkvm_page_state state;
-	kvm_pte_t pte;
-	int ret;
-
-	if (!addr_is_memory(addr))
-		return -EINVAL;
-
-	hyp_spin_lock(&host_kvm.lock);
-	hyp_spin_lock(&pkvm_pgd_lock);
-
-	ret = kvm_pgtable_get_leaf(&host_kvm.pgt, addr, &pte, NULL);
-	if (ret)
-		goto unlock;
-	if (!pte)
-		goto map_shared;
-
-	/*
-	 * Check attributes in the host stage-2 PTE. We need the page to be:
-	 *  - mapped RWX as we're sharing memory;
-	 *  - not borrowed, as that implies absence of ownership.
-	 * Otherwise, we can't let it got through
-	 */
-	cur = kvm_pgtable_stage2_pte_prot(pte);
-	prot = pkvm_mkstate(0, PKVM_PAGE_SHARED_BORROWED);
-	if (!check_prot(cur, PKVM_HOST_MEM_PROT, prot)) {
-		ret = -EPERM;
-		goto unlock;
-	}
-
-	state = pkvm_getstate(cur);
-	if (state == PKVM_PAGE_OWNED)
-		goto map_shared;
-
-	/*
-	 * Tolerate double-sharing the same page, but this requires
-	 * cross-checking the hypervisor stage-1.
-	 */
-	if (state != PKVM_PAGE_SHARED_OWNED) {
-		ret = -EPERM;
-		goto unlock;
-	}
-
-	ret = kvm_pgtable_get_leaf(&pkvm_pgtable, (u64)virt, &pte, NULL);
-	if (ret)
-		goto unlock;
-
-	/*
-	 * If the page has been shared with the hypervisor, it must be
-	 * already mapped as SHARED_BORROWED in its stage-1.
-	 */
-	cur = kvm_pgtable_hyp_pte_prot(pte);
-	prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_BORROWED);
-	if (!check_prot(cur, prot, ~prot))
-		ret = -EPERM;
-	goto unlock;
-
-map_shared:
-	/*
-	 * If the page is not yet shared, adjust mappings in both page-tables
-	 * while both locks are held.
-	 */
-	prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_BORROWED);
-	ret = pkvm_create_mappings_locked(virt, virt + PAGE_SIZE, prot);
-	BUG_ON(ret);
-
-	prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
-	ret = host_stage2_idmap_locked(addr, PAGE_SIZE, prot);
-	BUG_ON(ret);
-
-unlock:
-	hyp_spin_unlock(&pkvm_pgd_lock);
-	hyp_spin_unlock(&host_kvm.lock);
-
-	return ret;
-}
-
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_vcpu_fault_info fault;
@@ -758,3 +670,36 @@ static int do_share(struct pkvm_mem_share *share)
 out:
 	return ret;
 }
+
+int __pkvm_host_share_hyp(u64 pfn)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 hyp_addr = (u64)__hyp_va(host_addr);
+	struct pkvm_mem_share share = {
+		.tx	= {
+			.nr_pages	= 1,
+			.initiator	= {
+				.id	= PKVM_ID_HOST,
+				.addr	= host_addr,
+				.host	= {
+					.completer_addr = hyp_addr,
+				},
+			},
+			.completer	= {
+				.id	= PKVM_ID_HYP,
+			},
+		},
+		.prot	= PAGE_HYP,
+	};
+
+	host_lock_component();
+	hyp_lock_component();
+
+	ret = do_share(&share);
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Wed Oct 13 15:58:18 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556231
Date: Wed, 13 Oct 2021 16:58:18 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-4-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 03/16] KVM: arm64: Avoid remapping the SVE state in the hyp stage-1
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

We currently map the SVE state into the hypervisor stage-1 on VCPU_RUN,
when the vCPU thread's PID has changed. However, this only needs to be
done during the first VCPU_RUN as the SVE state doesn't depend on
thread-specific data, so move the create_hyp_mappings() call to
kvm_vcpu_first_run_init().

Suggested-by: Marc Zyngier
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/arm.c    | 12 ++++++++++++
 arch/arm64/kvm/fpsimd.c | 11 -----------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fe102cd2e518..c33d8c073820 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -618,6 +618,18 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	if (ret)
 		return ret;
 
+	if (vcpu->arch.sve_state) {
+		void *sve_end;
+
+		sve_end = vcpu->arch.sve_state + vcpu_sve_state_size(vcpu);
+
+		ret = create_hyp_mappings(vcpu->arch.sve_state, sve_end,
+					  PAGE_HYP);
+		if (ret)
+			return ret;
+	}
+
+
 	ret = kvm_arm_pmu_v3_enable(vcpu);
 
 	return ret;
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 5621020b28de..62c0d78da7be 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -43,17 +43,6 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 	if (ret)
 		goto error;
 
-	if (vcpu->arch.sve_state) {
-		void *sve_end;
-
-		sve_end = vcpu->arch.sve_state + vcpu_sve_state_size(vcpu);
-
-		ret = create_hyp_mappings(vcpu->arch.sve_state, sve_end,
-					  PAGE_HYP);
-		if (ret)
-			goto error;
-	}
-
 	vcpu->arch.host_thread_info = kern_hyp_va(ti);
 	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
 error:

From patchwork Wed Oct 13 15:58:19 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556225
Date: Wed, 13 Oct 2021 16:58:19 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-5-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 04/16] KVM: arm64: Introduce kvm_share_hyp()
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

The create_hyp_mappings() function can currently be called at any point
in time. However, its behaviour in protected mode changes widely
depending on when it is being called. Prior to KVM init, it is used to
create the temporary page-table used to bring-up the hypervisor, and
later on it is transparently turned into a 'share' hypercall when the
kernel has lost control over the hypervisor stage-1.

In order to prepare the ground for also unsharing pages with the
hypervisor during guest teardown, introduce a kvm_share_hyp() function
to make it clear in which places a share hypercall should be expected,
as we will soon need a matching unshare hypercall in all those places.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kvm/arm.c             |  7 +++----
 arch/arm64/kvm/fpsimd.c          |  4 ++--
 arch/arm64/kvm/mmu.c             | 19 +++++++++++++------
 4 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 02d378887743..185d0f62b724 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -150,6 +150,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 #include
 #include
 
+int kvm_share_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c33d8c073820..f2e74635332b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -146,7 +146,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		return ret;
 
-	ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
+	ret = kvm_share_hyp(kvm, kvm + 1);
 	if (ret)
 		goto out_free_stage2_pgd;
 
@@ -341,7 +341,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	if (err)
 		return err;
 
-	return create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
+	return kvm_share_hyp(vcpu, vcpu + 1);
 }
 
 void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
@@ -623,8 +623,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 
 		sve_end = vcpu->arch.sve_state + vcpu_sve_state_size(vcpu);
 
-		ret = create_hyp_mappings(vcpu->arch.sve_state, sve_end,
-					  PAGE_HYP);
+		ret = kvm_share_hyp(vcpu->arch.sve_state, sve_end);
 		if (ret)
 			return ret;
 	}
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 62c0d78da7be..2fe1128d9f3d 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -35,11 +35,11 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 	 * Make sure the host task thread flags and fpsimd state are
 	 * visible to hyp:
 	 */
-	ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
+	ret = kvm_share_hyp(ti, ti + 1);
 	if (ret)
 		goto error;
 
-	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
+	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
 	if (ret)
 		goto error;
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1a94a7ca48f2..f80673e863ac 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -296,6 +296,17 @@ static int pkvm_share_hyp(phys_addr_t start, phys_addr_t end)
 	return 0;
 }
 
+int kvm_share_hyp(void *from, void *to)
+{
+	if (is_kernel_in_hyp_mode())
+		return 0;
+
+	if (kvm_host_owns_hyp_mappings())
+		return create_hyp_mappings(from, to, PAGE_HYP);
+
+	return pkvm_share_hyp(kvm_kaddr_to_phys(from), kvm_kaddr_to_phys(to));
+}
+
 /**
  * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
  * @from:	The virtual kernel start address of the range
@@ -316,12 +327,8 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 	if (is_kernel_in_hyp_mode())
 		return 0;
 
-	if (!kvm_host_owns_hyp_mappings()) {
-		if (WARN_ON(prot != PAGE_HYP))
-			return -EPERM;
-		return pkvm_share_hyp(kvm_kaddr_to_phys(from),
-				      kvm_kaddr_to_phys(to));
-	}
+	if (WARN_ON(!kvm_host_owns_hyp_mappings()))
+		return -EPERM;
 
 	start = start & PAGE_MASK;
 	end = PAGE_ALIGN(end);

From patchwork Wed Oct 13 15:58:20 2021
Date: Wed, 13 Oct 2021 16:58:20 +0100
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: [PATCH 05/16] KVM: arm64: Accept page ranges in pkvm share hypercall
Message-Id: <20211013155831.943476-6-qperret@google.com>
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
The recently reworked do_share() infrastructure for the nVHE protected
mode allows the state of a range of pages to be transitioned
'atomically'. This is preferable to single-page sharing when e.g.
mapping guest vCPUs in the hypervisor stage-1, as the permission checks
and page-table modifications for the entire range are done in a single
critical section. This means there is no need for the host to handle
e.g. only half of a vCPU being successfully shared with the hypervisor.

So, make use of that feature in the __pkvm_host_share_hyp() hypercall
by allowing the caller to specify a pfn range.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |  3 ++-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  4 +--
 arch/arm64/kvm/mmu.c                          | 25 +++++++------------
 4 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 56445586c755..9c02abe92e0a 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -54,7 +54,7 @@ extern struct host_kvm host_kvm;
 extern const u8 pkvm_hyp_id;
 
 int __pkvm_prot_finalize(void);
-int __pkvm_host_share_hyp(u64 pfn);
+int __pkvm_host_share_hyp(u64 pfn, u64 nr_pages);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2da6aa8da868..f78bec2b9dd4 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -143,8 +143,9 @@ static void handle___pkvm_cpu_set_vector(struct kvm_cpu_context *host_ctxt)
 static void handle___pkvm_host_share_hyp(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(u64, pfn, host_ctxt, 1);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 2);
 
-	cpu_reg(host_ctxt, 1) = __pkvm_host_share_hyp(pfn);
+	cpu_reg(host_ctxt, 1) = __pkvm_host_share_hyp(pfn, nr_pages);
 }
 
 static void handle___pkvm_create_private_mapping(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 6983b83f799f..909e60f71b06 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -671,14 +671,14 @@ static int do_share(struct pkvm_mem_share *share)
 	return ret;
 }
 
-int __pkvm_host_share_hyp(u64 pfn)
+int __pkvm_host_share_hyp(u64 pfn, u64 nr_pages)
 {
 	int ret;
 	u64 host_addr = hyp_pfn_to_phys(pfn);
 	u64 hyp_addr = (u64)__hyp_va(host_addr);
 	struct pkvm_mem_share share = {
 		.tx	= {
-			.nr_pages	= 1,
+			.nr_pages	= nr_pages,
 			.initiator	= {
 				.id	= PKVM_ID_HOST,
 				.addr	= host_addr,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f80673e863ac..bc9865a8c988 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -281,30 +281,23 @@ static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
 	}
 }
 
-static int pkvm_share_hyp(phys_addr_t start, phys_addr_t end)
-{
-	phys_addr_t addr;
-	int ret;
-
-	for (addr = ALIGN_DOWN(start, PAGE_SIZE); addr < end; addr += PAGE_SIZE) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp,
-					__phys_to_pfn(addr));
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
 int kvm_share_hyp(void *from, void *to)
 {
+	phys_addr_t start, end;
+	u64 nr_pages;
+
 	if (is_kernel_in_hyp_mode())
 		return 0;
 
 	if (kvm_host_owns_hyp_mappings())
 		return create_hyp_mappings(from, to, PAGE_HYP);
 
-	return pkvm_share_hyp(kvm_kaddr_to_phys(from), kvm_kaddr_to_phys(to));
+	start = ALIGN_DOWN(kvm_kaddr_to_phys(from), PAGE_SIZE);
+	end = PAGE_ALIGN(kvm_kaddr_to_phys(to));
+	nr_pages = (end - start) >> PAGE_SHIFT;
+
+	return kvm_call_hyp_nvhe(__pkvm_host_share_hyp, __phys_to_pfn(start),
+				 nr_pages);
 }
 
 /**

From patchwork Wed Oct 13 15:58:21 2021
Date: Wed, 13 Oct 2021 16:58:21 +0100
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: [PATCH 06/16] KVM: arm64: Provide {get,put}_page() stubs for early hyp allocator
Message-Id: <20211013155831.943476-7-qperret@google.com>
In-Reply-To: <20211013155831.943476-1-qperret@google.com>

In nVHE protected mode, the EL2 code uses a temporary allocator during
boot while re-creating its stage-1 page-table. Unfortunately, the
hyp_vmemmap is not ready to use at this stage, so refcounting pages is
not possible. That is not currently a problem because hyp stage-1
mappings are never removed, which implies refcounting of page-table
pages is unnecessary.

In preparation for allowing hypervisor stage-1 mappings to be removed,
provide stub implementations for {get,put}_page() in the early
allocator.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/early_alloc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/early_alloc.c b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
index 1306c430ab87..00de04153cc6 100644
--- a/arch/arm64/kvm/hyp/nvhe/early_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
@@ -43,6 +43,9 @@ void *hyp_early_alloc_page(void *arg)
 	return hyp_early_alloc_contig(1);
 }
 
+static void hyp_early_alloc_get_page(void *addr) { }
+static void hyp_early_alloc_put_page(void *addr) { }
+
 void hyp_early_alloc_init(void *virt, unsigned long size)
 {
 	base = cur = (unsigned long)virt;
@@ -51,4 +54,6 @@ void hyp_early_alloc_init(void *virt, unsigned long size)
 	hyp_early_alloc_mm_ops.zalloc_page = hyp_early_alloc_page;
 	hyp_early_alloc_mm_ops.phys_to_virt = hyp_phys_to_virt;
 	hyp_early_alloc_mm_ops.virt_to_phys = hyp_virt_to_phys;
+	hyp_early_alloc_mm_ops.get_page = hyp_early_alloc_get_page;
+	hyp_early_alloc_mm_ops.put_page = hyp_early_alloc_put_page;
 }

From patchwork Wed Oct 13 15:58:22 2021
Date: Wed, 13 Oct 2021 16:58:22 +0100
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: [PATCH 07/16] KVM: arm64: Refcount hyp stage-1 pgtable pages
Message-Id: <20211013155831.943476-8-qperret@google.com>
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
To prepare the ground for allowing hyp stage-1 mappings to be removed
at run-time, update the KVM page-table code to maintain a correct
refcount using the ->{get,put}_page() function callbacks.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/pgtable.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index f8ceebe4982e..768a58835153 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -408,8 +408,10 @@ static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 		return false;
 
 	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
-	if (hyp_pte_needs_update(old, new))
+	if (hyp_pte_needs_update(old, new)) {
 		smp_store_release(ptep, new);
+		data->mm_ops->get_page(ptep);
+	}
 
 	data->phys += granule;
 	return true;
@@ -433,6 +435,7 @@ static int hyp_map_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return -ENOMEM;
 
 	kvm_set_table_pte(ptep, childp, mm_ops);
+	mm_ops->get_page(ptep);
 
 	return 0;
 }
@@ -482,8 +485,16 @@ static int hyp_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 			   enum kvm_pgtable_walk_flags flag, void * const arg)
 {
 	struct kvm_pgtable_mm_ops *mm_ops = arg;
+	kvm_pte_t pte = *ptep;
+
+	if (!kvm_pte_valid(pte))
+		return 0;
+
+	mm_ops->put_page(ptep);
+
+	if (kvm_pte_table(pte, level))
+		mm_ops->put_page(kvm_pte_follow(pte, mm_ops));
 
-	mm_ops->put_page((void *)kvm_pte_follow(*ptep, mm_ops));
 	return 0;
 }
 
@@ -491,7 +502,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 {
 	struct kvm_pgtable_walker walker = {
 		.cb	= hyp_free_walker,
-		.flags	= KVM_PGTABLE_WALK_TABLE_POST,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 		.arg	= pgt->mm_ops,
 	};

From patchwork Wed Oct 13 15:58:23 2021
X-Patchwork-Submitter: Quentin Perret X-Patchwork-Id: 12556263 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6A66C433F5 for ; Wed, 13 Oct 2021 16:04:43 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 850EF61168 for ; Wed, 13 Oct 2021 16:04:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 850EF61168 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=u5Pp6fFhoHTJZ6kszO24vvxLUFJDkld/DQXNarO3zNU=; b=rOyUehqIlrDfiYQgdzC+Mv9E0d m625JMsAhqUwZkVoqebJirgTYvrU4xmIJZERmmpsfd64H8Cs0dxOq1O1ukiz5VSPRvKntua0MRa2N KsNXzYxaa8SDEvp18Y2HhuC0RIpNecI+eJ+7SSl6+RCVvwtwKN8tqyuqzBdNi7TiP3ERZYrK1RAYP IkCfIiMVtkWToEg5NoetBLaEXMu2D65CS+p6RPZCPhcCu2BCpSfqW9t1tQQT4QwzD+nt+be6YWbDk 8h6cqs2gCbPhbVyMlPeaMK6VMfanOKmTe+/pmfrS1JVq7xAg4NBnvDmsF6RLU0ZFf6a/RqUxD+W+l fwqnL3BQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1maghk-00HUio-R2; Wed, 13 Oct 2021 16:02:09 +0000 Received: from mail-wr1-x44a.google.com ([2a00:1450:4864:20::44a]) by bombadil.infradead.org with 
esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1magec-00HTNa-Sw for linux-arm-kernel@lists.infradead.org; Wed, 13 Oct 2021 15:58:56 +0000 Received: by mail-wr1-x44a.google.com with SMTP id 41-20020adf802c000000b00161123698e0so2337530wrk.12 for ; Wed, 13 Oct 2021 08:58:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=AEhoESi+JSkIpPsgLf7KV5M6/nV8qQxTUg3eRSjbv/s=; b=NP+XPmDRgtbMk4UIWYD0pQZRkJlQstYb8HVcLmqCidCTYbYQHgdI3Cqdyg2BMtg0E7 ZxXptk9cPKAKddslAHjToBYwGC6dKSDDr9i8XsvIRpIAqZgNZspy3rXJluomIH6ikRPq GMD7xgTJYC+p2iKq4nOwt1vtc3aTDy8Q23TqJt8WOdZ/2LNtMpxarqRE3/Usq/0MBwRm F24k7V9pBdz5RsbhTu2S+G4qNUwnwip22wK2jS4gWU/A4VAEAeayNExRA5QBw9pLIcnf 3MZvLAInURcb+xfCjWAMcRLKWeqYTvvi+usmLYswXOEB2/t+ULFEJXEMzeRdLXCgdOA+ H7aQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=AEhoESi+JSkIpPsgLf7KV5M6/nV8qQxTUg3eRSjbv/s=; b=Qpa9tNJblF2TsFB/k618iUPrQYB/U9OABpK8Fc/wKfKSz0IsWz+2tz2Dwq5O3iSSl1 VgbEr/6hHbn8ydW+fTDBDSjkRuKIrgazUCCUqjkQGmgcssPK+4eEQNLDe/t6Al5JIRKz 7Vz2xjw3152udQSQV08EjrglhnjQZ+Whi/rkNt5LbPjnBI+fm4OWTjekeDtkxrcOxNaC 14DBn8FDVfTBkXOMuujHWJcwEVPeigFBNPGm+rPJG9kCMRQJRkLkO+fxYMHBPptt7njI 25ttSBf7Lw02o3oib7dt7E6orb81Yk/SWJCgw1oUqHcr5OgA/pvzjfSlV0rYBIMezzPa QAIQ== X-Gm-Message-State: AOAM530+B0U7id8NcbmtIzwxDXh+SbvQUYXfRvjcqkcxqod9RTpPCxNk a4NmtIifS1vMAVLgcCQkkwwwW5Gnb1Zt X-Google-Smtp-Source: ABdhPJypaaFzh/P/CSod5M2Uoyrzi+tFUaTYoQ7iBeHY7wDaUM4H3CebTBes5FVoowLKxoASMH9NbGCZR+8e X-Received: from luke.lon.corp.google.com ([2a00:79e0:d:210:65b5:73d3:1558:b9ae]) (user=qperret job=sendgmr) by 2002:a7b:c76e:: with SMTP id x14mr13634245wmk.47.1634140732732; Wed, 13 Oct 2021 08:58:52 -0700 (PDT) Date: Wed, 13 Oct 2021 16:58:23 +0100 In-Reply-To: <20211013155831.943476-1-qperret@google.com> Message-Id: 
<20211013155831.943476-9-qperret@google.com> Mime-Version: 1.0 References: <20211013155831.943476-1-qperret@google.com> X-Mailer: git-send-email 2.33.0.882.g93a45727a2-goog Subject: [PATCH 08/16] KVM: arm64: Fixup hyp stage-1 refcount From: Quentin Perret To: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Catalin Marinas , Will Deacon , Fuad Tabba , David Brazdil Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211013_085854_986354_59E88A8E X-CRM114-Status: GOOD ( 15.11 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org In nVHE-protected mode, the hyp stage-1 page-table refcount is broken due to the lack of refcount support in the early allocator. Fix-up the refcount in the finalize walker, once the 'hyp_vmemmap' is up and running. Signed-off-by: Quentin Perret --- arch/arm64/kvm/hyp/nvhe/setup.c | 31 +++++++++++++++++++++---------- 1 file changed, 21 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c index 57c27846320f..ad89801dfed7 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -166,12 +166,22 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level, { enum kvm_pgtable_prot prot; enum pkvm_page_state state; + struct kvm_pgtable_mm_ops *mm_ops = arg; kvm_pte_t pte = *ptep; phys_addr_t phys; if (!kvm_pte_valid(pte)) return 0; + /* + * Fix-up the refcount for the page-table pages as the early allocator + * was unable to access the hyp_vmemmap and so the buddy allocator has + * initialised the refcount to '1'. 
+	 */
+	mm_ops->get_page(ptep);
+
+	if (flag != KVM_PGTABLE_WALK_LEAF)
+		return 0;
+
 	if (level != (KVM_PGTABLE_MAX_LEVELS - 1))
 		return -EINVAL;
 
@@ -204,7 +214,8 @@ static int finalize_host_mappings(void)
 {
 	struct kvm_pgtable_walker walker = {
 		.cb	= finalize_host_mappings_walker,
-		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+		.arg	= pkvm_pgtable.mm_ops,
 	};
 
 	return kvm_pgtable_walk(&pkvm_pgtable, 0, BIT(pkvm_pgtable.ia_bits), &walker);
@@ -229,19 +240,19 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = finalize_host_mappings();
-	if (ret)
-		goto out;
-
 	pkvm_pgtable_mm_ops = (struct kvm_pgtable_mm_ops) {
-		.zalloc_page = hyp_zalloc_hyp_page,
-		.phys_to_virt = hyp_phys_to_virt,
-		.virt_to_phys = hyp_virt_to_phys,
-		.get_page = hpool_get_page,
-		.put_page = hpool_put_page,
+		.zalloc_page	= hyp_zalloc_hyp_page,
+		.phys_to_virt	= hyp_phys_to_virt,
+		.virt_to_phys	= hyp_virt_to_phys,
+		.get_page	= hpool_get_page,
+		.put_page	= hpool_put_page,
 	};
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
 
+	ret = finalize_host_mappings();
+	if (ret)
+		goto out;
+
 out:
	/*
	 * We tail-called to here from handle___pkvm_init() and will not return,

From patchwork Wed Oct 13 15:58:24 2021
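The fixup in [PATCH 08/16] works because the walker visits every valid PTE (leaf and table entries alike, thanks to KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST) and takes one reference on the page-table page holding that PTE. The toy model below sketches that invariant; all names (`pt_page`, `fixup_refcounts`) are hypothetical stand-ins, not kernel APIs, and it assumes the buddy allocator starts every page at refcount 1:

```c
#include <assert.h>
#include <stddef.h>

#define PTES_PER_PAGE 4

/* Hypothetical model of a page-table page with per-page metadata. */
struct pt_page {
	unsigned short refcount;              /* buddy allocator inits to 1 */
	struct pt_page *table[PTES_PER_PAGE]; /* non-NULL = valid table PTE */
	int leaf[PTES_PER_PAGE];              /* non-zero = valid leaf PTE  */
};

/* Mirror of mm_ops->get_page(): one reference per valid PTE. */
static void get_page(struct pt_page *p)
{
	p->refcount++;
}

/*
 * Fixup pass: for every valid entry, take a reference on the page that
 * holds the entry, so each table page ends up with refcount
 * 1 (initial) + number of valid PTEs it contains.
 */
static void fixup_refcounts(struct pt_page *p)
{
	for (int i = 0; i < PTES_PER_PAGE; i++) {
		if (p->table[i]) {
			get_page(p);	/* valid table entry */
			fixup_refcounts(p->table[i]);
		} else if (p->leaf[i]) {
			get_page(p);	/* valid leaf entry */
		}
	}
}
```

With this invariant in place, a later unmap can free a table page as soon as its refcount drops back to the single reference held by its parent entry.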
Date: Wed, 13 Oct 2021 16:58:24 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-10-qperret@google.com>
Subject: [PATCH 09/16] KVM: arm64: Hook up ->page_count() for hypervisor stage-1 page-table
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
From: Will Deacon

kvm_pgtable_hyp_unmap() relies on the ->page_count() callback being
provided by the page-table's memory-management operations. Wire up this
callback for the hypervisor stage-1 page-table.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/setup.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index ad89801dfed7..98b39facae04 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -246,6 +246,7 @@ void __noreturn __pkvm_init_finalise(void)
 		.virt_to_phys	= hyp_virt_to_phys,
 		.get_page	= hpool_get_page,
 		.put_page	= hpool_put_page,
+		.page_count	= hyp_page_count,
 	};
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;

From patchwork Wed Oct 13 15:58:25 2021
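The `hyp_page_count` callback wired up in [PATCH 09/16] ultimately just reads the refcount out of the vmemmap entry for the page containing the given address. A minimal sketch of that lookup, using hypothetical stand-ins (`vmemmap`, `base`, fixed 4KiB pages) rather than the real hyp_vmemmap machinery:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define NR_PAGES   8

/* Per-page metadata, shaped like struct hyp_page. */
struct hyp_page {
	unsigned short refcount;
	unsigned short order;
};

static struct hyp_page vmemmap[NR_PAGES]; /* stand-in for hyp_vmemmap */
static uintptr_t base;                    /* start of the modelled region */

/*
 * Mirror of what a ->page_count() callback does: turn an address into a
 * page-frame index and read the refcount from the vmemmap. No memory is
 * dereferenced through 'addr'; it is pure index arithmetic.
 */
static int page_count(void *addr)
{
	uintptr_t pfn = ((uintptr_t)addr - base) >> PAGE_SHIFT;
	return vmemmap[pfn].refcount;
}
```

This is why the callback could not exist before the vmemmap was up: without a `struct hyp_page` per page, there is nowhere to read the count from.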
Date: Wed, 13 Oct 2021 16:58:25 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-11-qperret@google.com>
Subject: [PATCH 10/16] KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
From: Will Deacon

Implement kvm_pgtable_hyp_unmap(), which can be used to remove
hypervisor stage-1 mappings at EL2.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 21 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 63 ++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 027783829584..9d076f36401d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -251,6 +251,27 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 			enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_pgtable_hyp_unmap() - Remove a mapping from a hypervisor stage-1 page-table.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_hyp_init().
+ * @addr:	Virtual address from which to remove the mapping.
+ * @size:	Size of the mapping.
+ *
+ * The offset of @addr within a page is ignored, @size is rounded-up to
+ * the next page boundary and @phys is rounded-down to the previous page
+ * boundary.
+ *
+ * TLB invalidation is performed for each page-table entry cleared during the
+ * unmapping operation and the reference count for the page-table page
+ * containing the cleared entry is decremented, with unreferenced pages being
+ * freed. The unmapping operation will stop early if it encounters either an
+ * invalid page-table entry or a valid block mapping which maps beyond the range
+ * being unmapped.
+ *
+ * Return: Number of bytes unmapped, which may be 0.
+ */
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
 /**
  * kvm_get_vtcr() - Helper to construct VTCR_EL2
  * @mmfr0:	Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 768a58835153..6ad4cb2d6947 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -463,6 +463,69 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	return ret;
 }
 
+struct hyp_unmap_data {
+	u64				unmapped;
+	struct kvm_pgtable_mm_ops	*mm_ops;
+};
+
+static int hyp_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+			    enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	kvm_pte_t pte = *ptep, *childp = NULL;
+	u64 granule = kvm_granule_size(level);
+	struct hyp_unmap_data *data = arg;
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+
+	if (!kvm_pte_valid(pte))
+		return -EINVAL;
+
+	if (kvm_pte_table(pte, level)) {
+		childp = kvm_pte_follow(pte, mm_ops);
+
+		if (mm_ops->page_count(childp) != 1)
+			return 0;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vae2is, __TLBI_VADDR(addr, 0), level);
+	} else {
+		if (end - addr < granule)
+			return -EINVAL;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
+		data->unmapped += granule;
+	}
+
+	dsb(ish);
+	isb();
+	mm_ops->put_page(ptep);
+
+	if (childp)
+		mm_ops->put_page(childp);
+
+	return 0;
+}
+
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct hyp_unmap_data unmap_data = {
+		.mm_ops	= pgt->mm_ops,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb	= hyp_unmap_walker,
+		.arg	= &unmap_data,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+	};
+
+	if (!pgt->mm_ops->page_count)
+		return 0;
+
+	kvm_pgtable_walk(pgt, addr, size, &walker);
+	return unmap_data.unmapped;
+}
+
 int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
 			 struct kvm_pgtable_mm_ops *mm_ops)
 {

From patchwork Wed Oct 13 15:58:26 2021
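In the unmap walker from [PATCH 10/16], a table page is torn down on the TABLE_POST visit only when its refcount has dropped to 1, i.e. the parent's table entry is the sole remaining reference. A simplified model of that decision (`try_free_table` and `struct page` here are hypothetical; the real code also clears the PTE and performs the DSB/TLBI/ISB sequence):

```c
#include <assert.h>

struct page {
	int refcount;
	int freed;
};

static int page_count(struct page *p) { return p->refcount; }

static void put_page(struct page *p)
{
	if (--p->refcount == 0)
		p->freed = 1;	/* handed back to the hyp buddy allocator */
}

/*
 * Models the TABLE_POST visit of hyp_unmap_walker(): returns 1 if the
 * table entry was cleared and the child page released, 0 if the child
 * still holds live entries and must be kept.
 */
static int try_free_table(struct page *parent, struct page *childp)
{
	if (page_count(childp) != 1)
		return 0;	/* still referenced by its own PTEs: keep */

	/* kvm_clear_pte() + TLB invalidation would go here */
	put_page(parent);	/* drop the ref held by the cleared entry */
	put_page(childp);	/* last ref on the child: page is freed */
	return 1;
}
```

Because leaf unmaps drop the containing page's refcount first, by the time the post-order visit reaches a fully-emptied table its count is back to 1 and it can be freed bottom-up.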
Date: Wed, 13 Oct 2021 16:58:26 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-12-qperret@google.com>
Subject: [PATCH 11/16] KVM: arm64: Back hyp_vmemmap for all of memory
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com

The EL2 vmemmap in nVHE Protected mode is currently very sparse: only
memory pages owned by the hypervisor itself have a matching struct
hyp_page. But since the size of these structs has been reduced
significantly, it appears that we can now afford to back the vmemmap for
all of memory. This will greatly simplify memory tracking, as the
hypervisor will have a place to store metadata (e.g. refcounts) that
wouldn't otherwise fit in the 4 SW bits we have in the host stage-2
page-table.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mm.h | 29 ++++++++++++++++++--------
 arch/arm64/kvm/hyp/nvhe/mm.c         | 31 ++++++++++++++++++++++++----
 arch/arm64/kvm/hyp/nvhe/page_alloc.c |  4 +---
 arch/arm64/kvm/hyp/nvhe/setup.c      |  7 +++----
 arch/arm64/kvm/hyp/reserved_mem.c    | 17 ++-------------
 5 files changed, 53 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index c9a8f535212e..f5e8582252c3 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -20,23 +20,34 @@ extern u64 __io_map_base;
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
+int hyp_back_vmemmap(phys_addr_t back);
 int pkvm_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
 int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot);
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot);
 
-static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
-				     unsigned long *start, unsigned long *end)
+static inline unsigned long hyp_vmemmap_memblock_size(struct memblock_region *reg)
 {
-	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct hyp_page *p = hyp_phys_to_page(phys);
+	unsigned long nr_pages = reg->size >> PAGE_SHIFT;
+	unsigned long start, end;
 
-	*start = (unsigned long)p;
-	*end = *start + nr_pages * sizeof(struct hyp_page);
-	*start = ALIGN_DOWN(*start, PAGE_SIZE);
-	*end = ALIGN(*end, PAGE_SIZE);
+	start = hyp_phys_to_pfn(reg->base) * sizeof(struct hyp_page);
+	end = start + nr_pages * sizeof(struct hyp_page);
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+	end = ALIGN(end, PAGE_SIZE);
+
+	return end - start;
+}
+
+static inline unsigned long hyp_vmemmap_pages(void)
+{
+	unsigned long res = 0, i;
+
+	for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++)
+		res += hyp_vmemmap_memblock_size(&kvm_nvhe_sym(hyp_memory)[i]);
+
+	return res >> PAGE_SHIFT;
 }
 
 static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 2fabeceb889a..65b948cbc0f5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -103,13 +103,36 @@ int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 	return ret;
 }
 
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back)
+int hyp_back_vmemmap(phys_addr_t back)
 {
-	unsigned long start, end;
+	unsigned long i, start, size, end = 0;
+	int ret;
 
-	hyp_vmemmap_range(phys, size, &start, &end);
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		start = hyp_memory[i].base;
+		start = ALIGN_DOWN((u64)hyp_phys_to_page(start), PAGE_SIZE);
+		/*
+		 * The beginning of the hyp_vmemmap region for the current
+		 * memblock may already be backed by the page backing the end
+		 * of the previous region, so avoid mapping it twice.
+		 */
+		start = max(start, end);
+
+		end = hyp_memory[i].base + hyp_memory[i].size;
+		end = PAGE_ALIGN((u64)hyp_phys_to_page(end));
+		if (start >= end)
+			continue;
+
+		size = end - start;
+		ret = __pkvm_create_mappings(start, size, back, PAGE_HYP);
+		if (ret)
+			return ret;
+
+		memset(hyp_phys_to_virt(back), 0, size);
+		back += size;
+	}
 
-	return __pkvm_create_mappings(start, end - start, back, PAGE_HYP);
+	return 0;
 }
 
 static void *__hyp_bp_vect_base;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 41fc25bdfb34..38accc2e23e3 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -234,10 +234,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
-	for (i = 0; i < nr_pages; i++) {
-		p[i].order = 0;
+	for (i = 0; i < nr_pages; i++)
 		hyp_set_page_refcounted(&p[i]);
-	}
 
 	/* Attach the unused pages to the buddy tree */
 	for (i = reserved_pages; i < nr_pages; i++)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 98b39facae04..98441e4039b9 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -29,12 +29,11 @@ static struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
-	unsigned long vstart, vend, nr_pages;
+	unsigned long nr_pages;
 
 	hyp_early_alloc_init(virt, size);
 
-	hyp_vmemmap_range(__hyp_pa(virt), size, &vstart, &vend);
-	nr_pages = (vend - vstart) >> PAGE_SHIFT;
+	nr_pages = hyp_vmemmap_pages();
 	vmemmap_base = hyp_early_alloc_contig(nr_pages);
 	if (!vmemmap_base)
 		return -ENOMEM;
@@ -76,7 +75,7 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 	if (ret)
 		return ret;
 
-	ret = hyp_back_vmemmap(phys, size, hyp_virt_to_phys(vmemmap_base));
+	ret = hyp_back_vmemmap(hyp_virt_to_phys(vmemmap_base));
 	if (ret)
 		return ret;
 
diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
index 578670e3f608..81db85bfdbad 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -54,7 +54,7 @@ static int __init register_memblock_regions(void)
 
 void __init kvm_hyp_reserve(void)
 {
-	u64 nr_pages, prev, hyp_mem_pages = 0;
+	u64 hyp_mem_pages = 0;
 	int ret;
 
 	if (!is_hyp_mode_available() || is_kernel_in_hyp_mode())
@@ -72,20 +72,7 @@ void __init kvm_hyp_reserve(void)
 
 	hyp_mem_pages += hyp_s1_pgtable_pages();
 	hyp_mem_pages += host_s2_pgtable_pages();
-
-	/*
-	 * The hyp_vmemmap needs to be backed by pages, but these pages
-	 * themselves need to be present in the vmemmap, so compute the number
-	 * of pages needed by looking for a fixed point.
-	 */
-	nr_pages = 0;
-	do {
-		prev = nr_pages;
-		nr_pages = hyp_mem_pages + prev;
-		nr_pages = DIV_ROUND_UP(nr_pages * sizeof(struct hyp_page), PAGE_SIZE);
-		nr_pages += __hyp_pgtable_max_pages(nr_pages);
-	} while (nr_pages != prev);
-	hyp_mem_pages += nr_pages;
+	hyp_mem_pages += hyp_vmemmap_pages();
 
 	/*
 	 * Try to allocate a PMD-aligned region to reduce TLB pressure once

From patchwork Wed Oct 13 15:58:27 2021
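The new hyp_vmemmap_pages() sizing in [PATCH 11/16] replaces the old fixed-point loop with straightforward per-memblock arithmetic: one struct hyp_page per page frame, with the range padded out to page boundaries at both ends. A standalone sketch of that computation, assuming a 4-byte `struct hyp_page` and 4KiB pages (both assumptions, chosen here for concreteness):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE	4096UL
#define PAGE_SHIFT	12
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

/* Assumed size of the shrunk struct hyp_page (refcount + order). */
#define HYP_PAGE_SZ	4UL

/*
 * Mirror of hyp_vmemmap_memblock_size(): bytes of vmemmap needed to
 * cover one memblock. The vmemmap offset of a page is simply
 * pfn * sizeof(struct hyp_page), rounded out to page boundaries so the
 * region can be mapped with whole pages.
 */
static unsigned long vmemmap_size(uint64_t base, uint64_t size)
{
	unsigned long nr_pages = size >> PAGE_SHIFT;
	unsigned long start = (base >> PAGE_SHIFT) * HYP_PAGE_SZ;
	unsigned long end = start + nr_pages * HYP_PAGE_SZ;

	start = ALIGN_DOWN(start, PAGE_SIZE);
	end = ALIGN(end, PAGE_SIZE);
	return end - start;
}
```

With a 4-byte metadata struct the overhead is about 0.1% of memory: a 1GiB memblock needs exactly 1MiB of vmemmap, and even a single page still costs one full vmemmap page because of the alignment.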
Date: Wed, 13 Oct 2021 16:58:27 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-13-qperret@google.com>
Subject: [PATCH 12/16] KVM: arm64: Move hyp refcount helpers to header files
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com
We will soon need to touch the hyp_page refcount from outside page_alloc.c in nVHE protected mode, so move the relevant helpers into a header file.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h | 18 ++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 18 ------------------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 592b7edb3edb..e77783be0f3f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -12,6 +12,24 @@ struct hyp_page {
 	unsigned short order;
 };
 
+static inline void hyp_page_ref_inc(struct hyp_page *p)
+{
+	BUG_ON(p->refcount == USHRT_MAX);
+	p->refcount++;
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+	p->refcount--;
+	return (p->refcount == 0);
+}
+
+static inline void hyp_set_page_refcounted(struct hyp_page *p)
+{
+	BUG_ON(p->refcount);
+	p->refcount = 1;
+}
+
 extern u64 __hyp_vmemmap;
 #define hyp_vmemmap ((struct hyp_page *)__hyp_vmemmap)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 38accc2e23e3..0d977169ed08 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -144,24 +144,6 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 	return p;
 }
 
-static inline void hyp_page_ref_inc(struct hyp_page *p)
-{
-	BUG_ON(p->refcount == USHRT_MAX);
-	p->refcount++;
-}
-
-static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
-{
-	p->refcount--;
-	return (p->refcount == 0);
-}
-
-static inline void hyp_set_page_refcounted(struct hyp_page *p)
-{
-	BUG_ON(p->refcount);
-	p->refcount = 1;
-}
-
 static void __hyp_put_page(struct hyp_pool *pool, struct
hyp_page *p)
{
	if (hyp_page_ref_dec_and_test(p))

From patchwork Wed Oct 13 15:58:28 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556273
Date: Wed, 13 Oct 2021 16:58:28 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-14-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 13/16] KVM: arm64: Move double-sharing logic into hyp-specific function
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

Strictly speaking, double-sharing a page is an invalid transition and should be rejected. However, we allow it in order to simplify the book-keeping when KVM metadata (such as vcpu structures) co-exists in the same page. Given that double-sharing is only required for pages shared with the hypervisor by the host, move the handling into a hyp-specific function which checks incoming shares, thereby preventing double-sharing outside of this particular transition.
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 57 +++++++++++++++++++--------
 1 file changed, 41 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 909e60f71b06..3378117d010c 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -536,6 +536,33 @@ static int ack_share(struct pkvm_page_share_ack *ack,
 	}
 }
 
+static int hyp_check_incoming_share(struct pkvm_page_req *req,
+				    struct pkvm_page_share_ack *ack,
+				    enum pkvm_component_id initiator,
+				    enum kvm_pgtable_prot prot)
+{
+	/*
+	 * We allow the host to share the same page twice, but that means we
+	 * have to check that the states really do match exactly.
+	 */
+	if (initiator != PKVM_ID_HOST)
+		return -EPERM;
+
+	if (req->initiator.state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+
+	if (ack->completer.state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	if (ack->completer.phys != req->phys)
+		return -EPERM;
+
+	if (ack->completer.prot != prot)
+		return -EPERM;
+
+	return 0;
+}
+
 /*
  * Check that the page states in the initiator and the completer are compatible
  * for the requested page-sharing operation to go ahead.
@@ -544,6 +571,8 @@ static int check_share(struct pkvm_page_req *req,
 		       struct pkvm_page_share_ack *ack,
 		       struct pkvm_mem_share *share)
 {
+	struct pkvm_mem_transition *tx = &share->tx;
+
 	if (!addr_is_memory(req->phys))
 		return -EINVAL;
 
@@ -552,25 +581,22 @@ static int check_share(struct pkvm_page_req *req,
 		return 0;
 	}
 
-	if (req->initiator.state != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
-
-	if (ack->completer.state != PKVM_PAGE_SHARED_BORROWED)
-		return -EPERM;
-
-	if (ack->completer.phys != req->phys)
-		return -EPERM;
-
-	if (ack->completer.prot != share->prot)
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_check_incoming_share(req, ack, tx->initiator.id,
+						share->prot);
+	default:
 		return -EPERM;
-
-	return 0;
+	}
 }
 
 static int host_initiate_share(struct pkvm_page_req *req)
 {
 	enum kvm_pgtable_prot prot;
 
+	if (req->initiator.state == PKVM_PAGE_SHARED_OWNED)
+		return 0;
+
 	prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
 	return host_stage2_idmap_locked(req->initiator.addr, PAGE_SIZE, prot);
 }
@@ -595,6 +621,9 @@ static int hyp_complete_share(struct pkvm_page_req *req,
 	void *start = (void *)req->completer.addr, *end = start + PAGE_SIZE;
 	enum kvm_pgtable_prot prot;
 
+	if (req->initiator.state == PKVM_PAGE_SHARED_OWNED)
+		return 0;
+
 	prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED);
 	return pkvm_create_mappings_locked(start, end, prot);
 }
@@ -653,10 +682,6 @@ static int do_share(struct pkvm_mem_share *share)
 		if (ret)
 			break;
 
-		/* Allow double-sharing by skipping over the page */
-		if (req.initiator.state == PKVM_PAGE_SHARED_OWNED)
-			continue;
-
 		ret = initiate_share(&req, share);
 		if (ret)
 			break;

From patchwork Wed Oct 13 15:58:29 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556281
Date: Wed, 13 Oct 2021 16:58:29 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-15-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 14/16] KVM: arm64: Refcount shared
pages at EL2
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

We currently allow double-sharing of pages from the host to the hypervisor, but don't track how many times each page is shared. In order to prepare the introduction of an unshare operation in the hypervisor, refcount the physical pages which the host shares more than once.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 3378117d010c..cad76bc68e53 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -560,6 +560,9 @@ static int hyp_check_incoming_share(struct pkvm_page_req *req,
 	if (ack->completer.prot != prot)
 		return -EPERM;
 
+	if (WARN_ON(!hyp_phys_to_page(req->phys)->refcount))
+		return -EINVAL;
+
 	return 0;
 }
 
@@ -619,13 +622,22 @@ static int hyp_complete_share(struct pkvm_page_req *req,
 			      enum kvm_pgtable_prot perms)
 {
 	void *start = (void *)req->completer.addr, *end = start + PAGE_SIZE;
+	struct hyp_page *page = hyp_phys_to_page(req->phys);
 	enum kvm_pgtable_prot prot;
+	int ret = 0;
 
-	if (req->initiator.state == PKVM_PAGE_SHARED_OWNED)
+	if (req->initiator.state == PKVM_PAGE_SHARED_OWNED) {
+		hyp_page_ref_inc(page);
 		return 0;
+	}
 
 	prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED);
-	return pkvm_create_mappings_locked(start, end, prot);
+	ret = pkvm_create_mappings_locked(start, end, prot);
+
+	if (!ret)
+		hyp_set_page_refcounted(page);
+
+	return ret;
 }
 
 /* Update the completer's page-table for the page-sharing request */

From patchwork Wed Oct 13 15:58:30 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556283
Date: Wed, 13 Oct 2021 16:58:30 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-16-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 15/16] KVM: arm64: pkvm: Introduce an unshare hypercall
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com
Introduce an unshare hypercall which can be used to unmap things from the hypervisor stage-1 in nVHE protected mode. This will be useful to update the EL2 ownership state of pages during guest teardown, and avoids keeping dangling mappings to unreferenced portions of memory.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |   1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |   1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |   9 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 159 ++++++++++++++++++
 4 files changed, 170 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index e86045ac43ba..98ba11b7b192 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -64,6 +64,7 @@
 #define __KVM_HOST_SMCCC_FUNC___pkvm_cpu_set_vector	18
 #define __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize	19
 #define __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc		20
+#define __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp	21
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 9c02abe92e0a..88e1607a94fe 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -55,6 +55,7 @@ extern const u8 pkvm_hyp_id;
 
 int __pkvm_prot_finalize(void);
 int __pkvm_host_share_hyp(u64 pfn, u64 nr_pages);
+int __pkvm_host_unshare_hyp(u64 pfn, u64 nr_pages);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 			     enum kvm_pgtable_prot prot);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f78bec2b9dd4..7070ed9ead9b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -148,6 +148,14 @@ static void handle___pkvm_host_share_hyp(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = __pkvm_host_share_hyp(pfn, nr_pages);
 }
 
+static void handle___pkvm_host_unshare_hyp(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, pfn, host_ctxt, 1);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 2);
+
+	cpu_reg(host_ctxt, 1) = __pkvm_host_unshare_hyp(pfn, nr_pages);
+}
+
 static void handle___pkvm_create_private_mapping(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(phys_addr_t, phys, host_ctxt, 1);
@@ -184,6 +192,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_init),
 	HANDLE_FUNC(__pkvm_cpu_set_vector),
 	HANDLE_FUNC(__pkvm_host_share_hyp),
+	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_create_private_mapping),
 	HANDLE_FUNC(__pkvm_prot_finalize),
 };
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index cad76bc68e53..3b724ab62e9f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -740,3 +740,162 @@ int __pkvm_host_share_hyp(u64 pfn, u64 nr_pages)
 
 	return ret;
 }
+
+static int host_initiate_unshare(struct pkvm_page_req *req)
+{
+	struct hyp_page *page = hyp_phys_to_page(req->phys);
+	enum kvm_pgtable_prot prot;
+
+	if (page->refcount > 1)
+		return 0;
+
+	prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_OWNED);
+	return host_stage2_idmap_locked(req->initiator.addr, PAGE_SIZE, prot);
+}
+
+static int initiate_unshare(struct pkvm_page_req *req,
+			    struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		return host_initiate_unshare(req);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hyp_complete_unshare(struct pkvm_page_req *req)
+{
+	struct hyp_page *page = hyp_phys_to_page(req->phys);
+	void *addr
= (void *)req->completer.addr;
+	int ret = 0;
+
+	if (hyp_page_ref_dec_and_test(page)) {
+		ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, (u64)addr, PAGE_SIZE);
+		ret = (ret == PAGE_SIZE) ? 0 : -EINVAL;
+	}
+
+	return ret;
+}
+
+static int complete_unshare(struct pkvm_page_req *req,
+			    struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_complete_unshare(req);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int check_unshare(struct pkvm_page_req *req,
+			 struct pkvm_page_share_ack *ack,
+			 struct pkvm_mem_share *share)
+{
+	struct pkvm_mem_transition *tx = &share->tx;
+
+	if (!addr_is_memory(req->phys))
+		return -EINVAL;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		return hyp_check_incoming_share(req, ack, tx->initiator.id,
+						share->prot);
+	default:
+		return -EPERM;
+	}
+}
+
+/*
+ * do_unshare():
+ *
+ * The page owner revokes access from another component for a range of
+ * pages which were previously shared using do_share().
+ *
+ * Initiator: SHARED_OWNED	=> OWNED
+ * Completer: SHARED_BORROWED	=> NOPAGE
+ */
+static int do_unshare(struct pkvm_mem_share *share)
+{
+	struct pkvm_page_req req;
+	int ret = 0;
+	u64 idx;
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		struct pkvm_page_share_ack ack;
+
+		/*
+		 * Use the request_share() and ack_share() from the normal share
+		 * path as they implement all the checks we need here. But
+		 * check_unshare() needs to differ -- PKVM_PAGE_OWNED is illegal
+		 * for the initiator.
+		 */
+		ret = request_share(&req, share, idx);
+		if (ret)
+			goto out;
+
+		ret = ack_share(&ack, &req, share);
+		if (ret)
+			goto out;
+
+		ret = check_unshare(&req, &ack, share);
+		if (ret)
+			goto out;
+	}
+
+	for (idx = 0; idx < share->tx.nr_pages; ++idx) {
+		ret = request_share(&req, share, idx);
+		if (ret)
+			break;
+
+		ret = initiate_unshare(&req, share);
+		if (ret)
+			break;
+
+		ret = complete_unshare(&req, share);
+		if (ret)
+			break;
+	}
+
+	WARN_ON(ret);
+out:
+	return ret;
+}
+
+int __pkvm_host_unshare_hyp(u64 pfn, u64 nr_pages)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 hyp_addr = (u64)__hyp_va(host_addr);
+	struct pkvm_mem_share share = {
+		.tx	= {
+			.nr_pages	= nr_pages,
+			.initiator	= {
+				.id	= PKVM_ID_HOST,
+				.addr	= host_addr,
+				.host	= {
+					.completer_addr = hyp_addr,
+				},
+			},
+			.completer	= {
+				.id	= PKVM_ID_HYP,
+			},
+		},
+		.prot	= PAGE_HYP,
+	};
+
+	host_lock_component();
+	hyp_lock_component();
+
+	ret = do_unshare(&share);
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Wed Oct 13 15:58:31 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556285
Date: Wed, 13 Oct 2021 16:58:31 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-17-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 16/16] KVM: arm64: pkvm: Unshare guest structs during teardown
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com
Make use of the newly introduced unshare hypercall during guest teardown
to unmap guest-related data structures from the hyp stage-1.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/include/asm/kvm_mmu.h  |  1 +
 arch/arm64/kvm/arm.c              |  2 ++
 arch/arm64/kvm/fpsimd.c           | 10 ++++++++--
 arch/arm64/kvm/mmu.c              | 16 ++++++++++++++++
 arch/arm64/kvm/reset.c            | 13 ++++++++++++-
 6 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f8be56d5342b..8b61cdcd1b29 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -322,6 +322,8 @@ struct kvm_vcpu_arch {
 	struct thread_info *host_thread_info;	/* hyp VA */
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
+	struct thread_info *kern_thread_info;
+	struct user_fpsimd_state *kern_fpsimd_state;
 
 	struct {
 		/* {Break,watch}point registers */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 185d0f62b724..81839e9a8a24 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -151,6 +151,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 #include <asm/stage2_pgtable.h>
 
 int kvm_share_hyp(void *from, void *to);
+void kvm_unshare_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index f2e74635332b..f11c51db6fe6 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -188,6 +188,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		}
 	}
 	atomic_set(&kvm->online_vcpus, 0);
+
+	kvm_unshare_hyp(kvm, kvm + 1);
 }
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 2fe1128d9f3d..67059daf4d26 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -28,23 +28,29 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 {
 	int ret;
 
-	struct thread_info *ti = &current->thread_info;
-	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
+	struct thread_info *ti = vcpu->arch.kern_thread_info;
+	struct user_fpsimd_state *fpsimd = vcpu->arch.kern_fpsimd_state;
 
 	/*
 	 * Make sure the host task thread flags and fpsimd state are
 	 * visible to hyp:
 	 */
+	kvm_unshare_hyp(ti, ti + 1);
+	ti = &current->thread_info;
 	ret = kvm_share_hyp(ti, ti + 1);
 	if (ret)
 		goto error;
 
+	kvm_unshare_hyp(fpsimd, fpsimd + 1);
+	fpsimd = &current->thread.uw.fpsimd_state;
 	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
 	if (ret)
 		goto error;
 
 	vcpu->arch.host_thread_info = kern_hyp_va(ti);
 	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
+	vcpu->arch.kern_thread_info = ti;
+	vcpu->arch.kern_fpsimd_state = fpsimd;
 
 error:
 	return ret;
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bc9865a8c988..f01b0e49e262 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -300,6 +300,22 @@ int kvm_share_hyp(void *from, void *to)
 				 nr_pages);
 }
 
+void kvm_unshare_hyp(void *from, void *to)
+{
+	phys_addr_t start, end;
+	u64 nr_pages;
+
+	if (is_kernel_in_hyp_mode() || kvm_host_owns_hyp_mappings() || !from)
+		return;
+
+	start = ALIGN_DOWN(kvm_kaddr_to_phys(from), PAGE_SIZE);
+	end = PAGE_ALIGN(kvm_kaddr_to_phys(to));
+	nr_pages = (end - start) >> PAGE_SHIFT;
+
+	WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, __phys_to_pfn(start),
+				  nr_pages));
+}
+
 /**
  * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
  * @from:	The virtual kernel start address of the range
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 5ce36b0a3343..e3e9c9e1f1c8 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -141,7 +141,18 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
 
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kfree(vcpu->arch.sve_state);
+	struct user_fpsimd_state *fpsimd = vcpu->arch.kern_fpsimd_state;
+	struct thread_info *ti = vcpu->arch.kern_thread_info;
+	void *sve_state = vcpu->arch.sve_state;
+
+	kvm_unshare_hyp(vcpu, vcpu + 1);
+	if (ti)
+		kvm_unshare_hyp(ti, ti + 1);
+	if (fpsimd)
+		kvm_unshare_hyp(fpsimd, fpsimd + 1);
+	if (sve_state && vcpu->arch.has_run_once)
+		kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
+	kfree(sve_state);
 }
 
 static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)