From patchwork Thu Nov 10 19:02:35 2022
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 13039184
From: Will Deacon
To: kvmarm@lists.linux.dev
Cc: Will Deacon, Sean Christopherson, Vincent Donnefort, Alexandru Elisei,
    Catalin Marinas, Philippe Mathieu-Daudé, James Morse, Chao Peng,
    Quentin Perret, Suzuki K Poulose, Mark Rutland, Fuad Tabba,
    Oliver Upton, Marc Zyngier, kernel-team@android.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v6 02/26] KVM: arm64: Allow attaching of non-coalescable pages to a hyp pool
Date: Thu, 10 Nov 2022 19:02:35 +0000
Message-Id: <20221110190259.26861-3-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20221110190259.26861-1-will@kernel.org>
References: <20221110190259.26861-1-will@kernel.org>
X-Mailing-List: kvm@vger.kernel.org

From: Quentin Perret

All the contiguous pages used to initialize a 'struct hyp_pool' are
considered coalescable, which means that the hyp page allocator will
actively try to merge them with their buddies on the hyp_put_page()
path. However, using hyp_put_page() on a page that is not part of the
initial memory range given to a hyp_pool is currently unsupported.

In order to allow dynamically extending hyp pools at run-time, add a
check to __hyp_attach_page() to allow inserting 'external' pages into
the free-list of order 0. This will be necessary to allow lazy donation
of pages from the host to the hypervisor when allocating guest stage-2
page-table pages at EL2.
Tested-by: Vincent Donnefort
Signed-off-by: Quentin Perret
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 1ded09fc9b10..dad88e203598 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -93,11 +93,16 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
+	phys_addr_t phys = hyp_page_to_phys(p);
 	unsigned short order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
 
+	/* Skip coalescing for 'external' pages being freed into the pool. */
+	if (phys < pool->range_start || phys >= pool->range_end)
+		goto insert;
+
 	/*
 	 * Only the first struct hyp_page of a high-order page (otherwise known
 	 * as the 'head') should have p->order set. The non-head pages should
@@ -116,6 +121,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 		p = min(p, buddy);
 	}
 
+insert:
 	/* Mark the new head, and insert it */
 	p->order = order;
 	page_add_to_list(p, &pool->free_area[order]);
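
For readers less familiar with the hyp buddy allocator, the stand-alone sketch
below models only the new range check in isolation. It is not the in-tree code:
demo_pool, demo_page_may_coalesce and the example addresses are made up for
illustration, mirroring just the "phys < range_start || phys >= range_end"
test that __hyp_attach_page() now performs before attempting to merge buddies.

/*
 * Illustrative model only (hypothetical names, not the KVM/arm64 API):
 * pages inside the pool's initial PA range may coalesce with buddies,
 * while 'external' pages bypass coalescing and would be inserted at
 * order 0.
 */
#include <stdbool.h>
#include <stdio.h>

struct demo_pool {
	unsigned long range_start;	/* PA range handed to the pool at init */
	unsigned long range_end;
};

/* Decide whether a page being freed may take part in buddy coalescing. */
static bool demo_page_may_coalesce(const struct demo_pool *pool,
				   unsigned long phys)
{
	return phys >= pool->range_start && phys < pool->range_end;
}

int main(void)
{
	struct demo_pool pool = {
		.range_start	= 0x80000000UL,
		.range_end	= 0x80400000UL,
	};

	/* A page from the initial range: eligible for merging with buddies. */
	printf("0x80001000 coalescable: %d\n",
	       demo_page_may_coalesce(&pool, 0x80001000UL));

	/* An 'external' page donated later: goes straight to the order-0 list. */
	printf("0x90000000 coalescable: %d\n",
	       demo_page_may_coalesce(&pool, 0x90000000UL));

	return 0;
}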