From patchwork Thu Feb 10 22:41:43 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12742496
Date: Thu, 10 Feb 2022 14:41:43 -0800
In-Reply-To: <20220210224220.4076151-1-kaleshsingh@google.com>
Message-Id: <20220210224220.4076151-3-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220210224220.4076151-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog
Subject: [PATCH 2/7] KVM: arm64: Factor out private range VA allocation
From: Kalesh Singh
Cc:
will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh,
 Catalin Marinas, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Ard Biesheuvel, Mark Rutland, Pasha Tatashin, Joey Gouly,
 Peter Collingbourne, Andrew Walbran, Andrew Scull, Paolo Bonzini,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvmarm@lists.cs.columbia.edu

From: Quentin Perret

__pkvm_create_private_mapping() is currently responsible for allocating
VA space in the hypervisor's "private" range and creating stage-1
mappings. In order to allow reusing the VA space allocation logic from
other places, let's factor it out into a standalone function. This will
be used to allocate private VA ranges for hypervisor stack guard pages
in a subsequent patch in this series.
Signed-off-by: Quentin Perret
[Kalesh - Resolve conflicts and make hyp_alloc_private_va_range
 available outside nvhe/mm.c, update commit message]
Signed-off-by: Kalesh Singh
---
 arch/arm64/kvm/hyp/include/nvhe/mm.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mm.c         | 28 +++++++++++++++++++---------
 2 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 2d08510c6cc1..f53fb0e406db 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -21,6 +21,7 @@ int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot);
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot);
+unsigned long hyp_alloc_private_va_range(size_t size);
 
 static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
 				     unsigned long *start, unsigned long *end)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 526a7d6fa86f..e196441e072f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -37,6 +37,22 @@ static int __pkvm_create_mappings(unsigned long start, unsigned long size,
 	return err;
 }
 
+unsigned long hyp_alloc_private_va_range(size_t size)
+{
+	unsigned long addr = __io_map_base;
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+	__io_map_base += PAGE_ALIGN(size);
+
+	/* Are we overflowing on the vmemmap ? */
+	if (__io_map_base > __hyp_vmemmap) {
+		__io_map_base = addr;
+		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	}
+
+	return addr;
+}
+
 unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 					    enum kvm_pgtable_prot prot)
 {
@@ -45,16 +61,10 @@ unsigned long __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
 
 	hyp_spin_lock(&pkvm_pgd_lock);
 
-	size = PAGE_ALIGN(size + offset_in_page(phys));
-	addr = __io_map_base;
-	__io_map_base += size;
-
-	/* Are we overflowing on the vmemmap ? */
-	if (__io_map_base > __hyp_vmemmap) {
-		__io_map_base -= size;
-		addr = (unsigned long)ERR_PTR(-ENOMEM);
+	size = size + offset_in_page(phys);
+	addr = hyp_alloc_private_va_range(size);
+	if (IS_ERR((void *)addr))
 		goto out;
-	}
 
 	err = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, size, phys, prot);
 	if (err) {
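For readers following along outside the kernel tree, the allocation pattern the new helper introduces (bump a shared base pointer by the page-aligned size, then roll it back if the bump would cross into the vmemmap region) can be sketched as a small self-contained C program. The globals, values, and the 0-on-failure convention below are mock stand-ins for illustration only; the real hypervisor code uses __io_map_base and __hyp_vmemmap under pkvm_pgd_lock and returns an ERR_PTR-encoded -ENOMEM:

```c
#include <stddef.h>

/* Mock stand-ins for the hypervisor globals and page-size macros. */
#define PAGE_SIZE	4096UL
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

static unsigned long io_map_base = 0x100000UL;	/* hypothetical range base  */
static unsigned long hyp_vmemmap = 0x103000UL;	/* hypothetical range limit */

/*
 * Bump-allocate a private VA range. Returns the allocated base address,
 * or 0 if the range is exhausted (the kernel version returns ERR_PTR(-ENOMEM)).
 * The real helper also asserts that pkvm_pgd_lock is held by the caller.
 */
static unsigned long alloc_private_va_range(size_t size)
{
	unsigned long addr = io_map_base;

	io_map_base += PAGE_ALIGN(size);

	/* Would the bump overflow into the vmemmap? Roll back and fail. */
	if (io_map_base > hyp_vmemmap) {
		io_map_base = addr;
		return 0;
	}
	return addr;
}
```

Note the rollback uses `io_map_base = addr` rather than subtracting the size again, mirroring the patch's switch from `__io_map_base -= size;` to restoring the saved start address, which is equivalent here because the base is only advanced once between the save and the check.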