From patchwork Wed Oct 30 13:49:05 2024
X-Patchwork-Submitter: Patrick Roy
X-Patchwork-Id: 13856652
From: Patrick Roy
Subject: [RFC PATCH v3 1/6] arch: introduce set_direct_map_valid_noflush()
Date: Wed, 30 Oct 2024 13:49:05 +0000
Message-ID: <20241030134912.515725-2-roypat@amazon.co.uk>
In-Reply-To: <20241030134912.515725-1-roypat@amazon.co.uk>
References: <20241030134912.515725-1-roypat@amazon.co.uk>

From: Mike Rapoport (Microsoft)

Add an API that allows updating the direct/linear map for a set of
physically contiguous pages. It will be used in the following patches.
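
For illustration only (not part of this patch): a caller that wants to
temporarily drop the direct map entries for a physically contiguous range of
pages and restore them later would use the new helper roughly as sketched
below. The wrapper names are made up, and, as the _noflush suffix indicates,
flushing the TLB is left to the caller.

  /* Minimal sketch, assuming a kernel with this patch applied. */
  #include <linux/mm.h>
  #include <linux/set_memory.h>
  #include <asm/tlbflush.h>

  /* Drop direct map entries for @nr contiguous pages starting at @page. */
  static int example_unmap_from_direct_map(struct page *page, unsigned nr)
  {
          unsigned long addr = (unsigned long)page_address(page);
          int r = set_direct_map_valid_noflush(page, nr, false);

          if (!r)
                  flush_tlb_kernel_range(addr, addr + nr * PAGE_SIZE);
          return r;
  }

  /* Restore the default (present) direct map entries for the same range. */
  static int example_remap_into_direct_map(struct page *page, unsigned nr)
  {
          return set_direct_map_valid_noflush(page, nr, true);
  }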
Signed-off-by: Mike Rapoport (Microsoft)
Signed-off-by: Patrick Roy
---
 arch/arm64/include/asm/set_memory.h     |  1 +
 arch/arm64/mm/pageattr.c                | 10 ++++++++++
 arch/loongarch/include/asm/set_memory.h |  1 +
 arch/loongarch/mm/pageattr.c            | 21 +++++++++++++++++++++
 arch/riscv/include/asm/set_memory.h     |  1 +
 arch/riscv/mm/pageattr.c                | 15 +++++++++++++++
 arch/s390/include/asm/set_memory.h      |  1 +
 arch/s390/mm/pageattr.c                 | 11 +++++++++++
 arch/x86/include/asm/set_memory.h       |  1 +
 arch/x86/mm/pat/set_memory.c            |  8 ++++++++
 include/linux/set_memory.h              |  6 ++++++
 11 files changed, 76 insertions(+)

base-commit: 5cb1659f412041e4780f2e8ee49b2e03728a2ba6

diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h
index 917761feeffdd..98088c043606a 100644
--- a/arch/arm64/include/asm/set_memory.h
+++ b/arch/arm64/include/asm/set_memory.h
@@ -13,6 +13,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
 
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
 bool kernel_page_present(struct page *page);
 
 #endif /* _ASM_ARM64_SET_MEMORY_H */
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 0e270a1c51e64..01225900293ac 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -192,6 +192,16 @@ int set_direct_map_default_noflush(struct page *page)
 				   PAGE_SIZE, change_page_range, &data);
 }
 
+int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+{
+	unsigned long addr = (unsigned long)page_address(page);
+
+	if (!can_set_direct_map())
+		return 0;
+
+	return set_memory_valid(addr, nr, valid);
+}
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
diff --git a/arch/loongarch/include/asm/set_memory.h b/arch/loongarch/include/asm/set_memory.h
index d70505b6676cb..55dfaefd02c8a 100644
--- a/arch/loongarch/include/asm/set_memory.h
+++ b/arch/loongarch/include/asm/set_memory.h
@@ -17,5 +17,6 @@ int set_memory_rw(unsigned long addr, int numpages);
 bool kernel_page_present(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
 int set_direct_map_invalid_noflush(struct page *page);
+int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
 
 #endif /* _ASM_LOONGARCH_SET_MEMORY_H */
diff --git a/arch/loongarch/mm/pageattr.c b/arch/loongarch/mm/pageattr.c
index ffd8d76021d47..f14b40c968b48 100644
--- a/arch/loongarch/mm/pageattr.c
+++ b/arch/loongarch/mm/pageattr.c
@@ -216,3 +216,24 @@ int set_direct_map_invalid_noflush(struct page *page)
 	return __set_memory(addr, 1, __pgprot(0),
 			    __pgprot(_PAGE_PRESENT | _PAGE_VALID));
 }
+
+int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+{
+	unsigned long addr = (unsigned long)page_address(page);
+	pgprot_t set, clear;
+
+	if (addr < vm_map_base)
+		return 0;
+
+	if (valid) {
+		set = PAGE_KERNEL;
+		clear = __pgprot(0);
+	} else {
+		set = __pgprot(0);
+		clear = __pgprot(_PAGE_PRESENT | _PAGE_VALID);
+	}
+
+	return __set_memory(addr, nr, set, clear);
+}
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index ab92fc84e1fc9..ea263d3683ef6 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -42,6 +42,7 @@ static inline int set_kernel_memory(char *startp, char *endp,
 
 int set_direct_map_invalid_noflush(struct
page *page); int set_direct_map_default_noflush(struct page *page); +int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid); bool kernel_page_present(struct page *page); #endif /* __ASSEMBLY__ */ diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c index 271d01a5ba4da..d815448758a19 100644 --- a/arch/riscv/mm/pageattr.c +++ b/arch/riscv/mm/pageattr.c @@ -386,6 +386,21 @@ int set_direct_map_default_noflush(struct page *page) PAGE_KERNEL, __pgprot(_PAGE_EXEC)); } +int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid) +{ + pgprot_t set, clear; + + if (valid) { + set = PAGE_KERNEL; + clear = __pgprot(_PAGE_EXEC); + } else { + set = __pgprot(0); + clear = __pgprot(_PAGE_PRESENT); + } + + return __set_memory((unsigned long)page_address(page), nr, set, clear); +} + #ifdef CONFIG_DEBUG_PAGEALLOC static int debug_pagealloc_set_page(pte_t *pte, unsigned long addr, void *data) { diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h index 06fbabe2f66c9..240bcfbdcdcec 100644 --- a/arch/s390/include/asm/set_memory.h +++ b/arch/s390/include/asm/set_memory.h @@ -62,5 +62,6 @@ __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K) int set_direct_map_invalid_noflush(struct page *page); int set_direct_map_default_noflush(struct page *page); +int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid); #endif diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c index 5f805ad42d4c3..4c7ee74aa130d 100644 --- a/arch/s390/mm/pageattr.c +++ b/arch/s390/mm/pageattr.c @@ -406,6 +406,17 @@ int set_direct_map_default_noflush(struct page *page) return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF); } +int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid) +{ + unsigned long flags; + + if (valid) + flags = SET_MEMORY_DEF; + else + flags = SET_MEMORY_INV; + + return __set_memory((unsigned long)page_to_virt(page), nr, flags); +} #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE) static void ipte_range(pte_t *pte, unsigned long address, int nr) diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h index 4b2abce2e3e7d..cc62ef70ccc0a 100644 --- a/arch/x86/include/asm/set_memory.h +++ b/arch/x86/include/asm/set_memory.h @@ -89,6 +89,7 @@ int set_pages_rw(struct page *page, int numpages); int set_direct_map_invalid_noflush(struct page *page); int set_direct_map_default_noflush(struct page *page); +int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid); bool kernel_page_present(struct page *page); extern int kernel_set_to_readonly; diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 44f7b2ea6a073..069e421c22474 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2444,6 +2444,14 @@ int set_direct_map_default_noflush(struct page *page) return __set_pages_p(page, 1); } +int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid) +{ + if (valid) + return __set_pages_p(page, nr); + + return __set_pages_np(page, nr); +} + #ifdef CONFIG_DEBUG_PAGEALLOC void __kernel_map_pages(struct page *page, int numpages, int enable) { diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h index e7aec20fb44f1..3030d9245f5ac 100644 --- a/include/linux/set_memory.h +++ b/include/linux/set_memory.h @@ -34,6 +34,12 @@ static inline int set_direct_map_default_noflush(struct page *page) return 0; } +static inline int set_direct_map_valid_noflush(struct page 
*page,
+					       unsigned nr, bool valid)
+{
+	return 0;
+}
+
 static inline bool kernel_page_present(struct page *page)
 {
 	return true;

From patchwork Wed Oct 30 13:49:06 2024
X-Patchwork-Submitter: Patrick Roy
X-Patchwork-Id: 13856653
From: Patrick Roy
Subject: [RFC PATCH v3 2/6] kvm: gmem: add flag to remove memory from kernel direct map
Date: Wed, 30 Oct 2024 13:49:06 +0000
Message-ID: <20241030134912.515725-3-roypat@amazon.co.uk>
In-Reply-To: <20241030134912.515725-1-roypat@amazon.co.uk>
References: <20241030134912.515725-1-roypat@amazon.co.uk>

Add a new flag, KVM_GMEM_NO_DIRECT_MAP, to KVM_CREATE_GUEST_MEMFD, which
causes KVM to remove the folios backing this guest_memfd from the direct map
after preparation/population. The flag is only exposed on architectures that
can manipulate the direct map (the notable exception being arm64 when the
direct map is not set up at 4K granularity); otherwise EOPNOTSUPP is
returned.

This patch also implements infrastructure for tracking (temporary)
reinsertion of memory ranges into the direct map (more accurately: it allows
recording that specific memory ranges deviate from the default direct map
setup. Currently the default setup is always "direct map entries removed",
but it is trivial to extend this with some "default_state_for_vm_type"
mechanism to cover the pKVM use case of memory starting off with direct map
entries present). An xarray tracks this at page granularity, to be
compatible with future hugepage use cases that might require subranges of
hugetlb folios to have direct map entries restored. This xarray holds
entries for each page whose direct map state deviates from the default, and
holes for all pages whose direct map state matches the default, the idea
being that these "deviations" will be rare.
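
For illustration only (not part of this patch): from userspace, opting into
this behaviour would look roughly like the sketch below. The helper name is
made up; KVM_CREATE_GUEST_MEMFD and struct kvm_create_guest_memfd already
exist, and KVM_GMEM_NO_DIRECT_MAP is the flag added by this patch.

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Create a guest_memfd whose folios are removed from the direct map. */
  static int example_create_gmem_no_direct_map(int vm_fd, uint64_t size)
  {
          struct kvm_create_guest_memfd args = {
                  .size  = size,                  /* must be page-aligned */
                  .flags = KVM_GMEM_NO_DIRECT_MAP,
          };

          /*
           * Returns a new guest_memfd file descriptor on success; fails
           * with EOPNOTSUPP if the host cannot manipulate the direct map.
           */
          return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
  }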
kvm_gmem_folio_configure_direct_map applies the configuration stored in the
xarray to a given folio, and is called for each new gmem folio after
preparation/population.

Storing direct map state in the gmem inode has two advantages:

1) We can track direct map state at page granularity even for huge folios
   (see also Ackerley's series on hugetlbfs support in guest_memfd [1]).
2) We can pre-configure the direct map state of not-yet-faulted-in folios.
   This would, for example, be needed if a VMM receives a virtio buffer that
   the guest has asked it to fill. In this case, the pages backing the guest
   physical address range of the buffer might not be faulted in yet; they
   would only be faulted in when the VMM tries to write to them, and at that
   point we would need to ensure direct map entries are present.

Note that this patch does not include operations for manipulating the direct
map state xarray, or for changing the direct map state of already existing
folios. These routines are sketched out in the following patch, although
they are not needed in this initial patch series.

When a gmem folio is freed, it is reinserted into the direct map (and,
failing this, marked as HWPOISON to avoid any other part of the kernel
accidentally touching folios without complete direct map entries). The
direct map configuration stored in the xarray is _not_ reset when the folio
is freed (although this could be implemented by storing the reference to the
xarray in the folio's private data instead of only the inode).

[1]: https://lore.kernel.org/kvm/cover.1726009989.git.ackerleytng@google.com/

Signed-off-by: Patrick Roy
---
 include/uapi/linux/kvm.h |   2 +
 virt/kvm/guest_memfd.c   | 150 +++++++++++++++++++++++++++++++++++----
 2 files changed, 137 insertions(+), 15 deletions(-)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc0551453..81b0f4a236b8c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1564,6 +1564,8 @@ struct kvm_create_guest_memfd {
 	__u64 reserved[6];
 };
 
+#define KVM_GMEM_NO_DIRECT_MAP (1ULL << 0)
+
 #define KVM_PRE_FAULT_MEMORY	_IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
 
 struct kvm_pre_fault_memory {
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 47a9f68f7b247..50ffc2ad73eda 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include
 
 #include "kvm_mm.h"
 
@@ -13,6 +14,88 @@ struct kvm_gmem {
 	struct list_head entry;
 };
 
+struct kvm_gmem_inode_private {
+	unsigned long long flags;
+
+	/*
+	 * direct map configuration of the gmem instance this private data
+	 * is associated with. present indices indicate a desired direct map
+	 * configuration deviating from default_direct_map_state (e.g. if
+	 * default_direct_map_state is false/not present, then the xarray
+	 * contains all indices for which direct map entries are restored).
+	 */
+	struct xarray direct_map_state;
+	bool default_direct_map_state;
+};
+
+static bool kvm_gmem_test_no_direct_map(struct kvm_gmem_inode_private *gmem_priv)
+{
+	return ((unsigned long)gmem_priv->flags & KVM_GMEM_NO_DIRECT_MAP) != 0;
+}
+
+/*
+ * Configure the direct map present/not present state of @folio based on
+ * the xarray stored in the associated inode's private data.
+ *
+ * Assumes the folio lock is held.
+ */ +static int kvm_gmem_folio_configure_direct_map(struct folio *folio) +{ + struct inode *inode = folio_inode(folio); + struct kvm_gmem_inode_private *gmem_priv = inode->i_private; + bool default_state = gmem_priv->default_direct_map_state; + + pgoff_t start = folio_index(folio); + pgoff_t last = start + folio_nr_pages(folio) - 1; + + struct xarray *xa = &gmem_priv->direct_map_state; + unsigned long index; + void *entry; + + pgoff_t range_start = start; + unsigned long npages = 1; + int r = 0; + + if (!kvm_gmem_test_no_direct_map(gmem_priv)) + goto out; + + r = set_direct_map_valid_noflush(folio_page(folio, 0), folio_nr_pages(folio), + default_state); + if (r) + goto out; + + if (!xa_find_after(xa, &range_start, last, XA_PRESENT)) + goto out_flush; + + xa_for_each_range(xa, index, entry, range_start, last) { + ++npages; + + if (index == range_start + npages) + continue; + + r = set_direct_map_valid_noflush(folio_file_page(folio, range_start), npages - 1, + !default_state); + if (r) + goto out_flush; + + range_start = index; + npages = 1; + } + + r = set_direct_map_valid_noflush(folio_file_page(folio, range_start), npages, + !default_state); + +out_flush: + /* + * Use PG_private to track that this folio has had potentially some of + * its direct map entries modified, so that we can restore them in free_folio. + */ + folio_set_private(folio); + flush_tlb_kernel_range(start, start + folio_size(folio)); +out: + return r; +} + /** * folio_file_pfn - like folio_file_page, but return a pfn. * @folio: The folio which contains this index. @@ -42,9 +125,19 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo return 0; } -static inline void kvm_gmem_mark_prepared(struct folio *folio) + +static inline int kvm_gmem_finalize_folio(struct folio *folio) { + int r = kvm_gmem_folio_configure_direct_map(folio); + + /* + * Parts of the direct map might have been punched out, mark this folio + * as prepared even in the error case to avoid touching parts without + * direct map entries in a potential re-preparation. + */ folio_mark_uptodate(folio); + + return r; } /* @@ -82,11 +175,10 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot, index = ALIGN_DOWN(index, 1 << folio_order(folio)); r = __kvm_gmem_prepare_folio(kvm, slot, index, folio); if (!r) - kvm_gmem_mark_prepared(folio); + r = kvm_gmem_finalize_folio(folio); return r; } - /* * Returns a locked folio on success. The caller is responsible for * setting the up-to-date flag before the memory is mapped into the guest. 
@@ -249,6 +341,7 @@ static long kvm_gmem_fallocate(struct file *file, int mode, loff_t offset, static int kvm_gmem_release(struct inode *inode, struct file *file) { struct kvm_gmem *gmem = file->private_data; + struct kvm_gmem_inode_private *gmem_priv; struct kvm_memory_slot *slot; struct kvm *kvm = gmem->kvm; unsigned long index; @@ -279,13 +372,17 @@ static int kvm_gmem_release(struct inode *inode, struct file *file) list_del(&gmem->entry); + gmem_priv = inode->i_private; + filemap_invalidate_unlock(inode->i_mapping); mutex_unlock(&kvm->slots_lock); - xa_destroy(&gmem->bindings); kfree(gmem); + xa_destroy(&gmem_priv->direct_map_state); + kfree(gmem_priv); + kvm_put_kvm(kvm); return 0; @@ -357,24 +454,37 @@ static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *fol return MF_DELAYED; } -#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE static void kvm_gmem_free_folio(struct folio *folio) { +#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE struct page *page = folio_page(folio, 0); kvm_pfn_t pfn = page_to_pfn(page); int order = folio_order(folio); kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order)); -} #endif + if (folio_test_private(folio)) { + unsigned long start = (unsigned long)folio_address(folio); + + int r = set_direct_map_valid_noflush(folio_page(folio, 0), folio_nr_pages(folio), + true); + /* + * There might be holes left in the folio, better make sure + * nothing tries to touch it again. + */ + if (r) + folio_set_hwpoison(folio); + + flush_tlb_kernel_range(start, start + folio_size(folio)); + } +} + static const struct address_space_operations kvm_gmem_aops = { .dirty_folio = noop_dirty_folio, .migrate_folio = kvm_gmem_migrate_folio, .error_remove_folio = kvm_gmem_error_folio, -#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE .free_folio = kvm_gmem_free_folio, -#endif }; static int kvm_gmem_getattr(struct mnt_idmap *idmap, const struct path *path, @@ -401,6 +511,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) { const char *anon_name = "[kvm-gmem]"; struct kvm_gmem *gmem; + struct kvm_gmem_inode_private *gmem_priv; struct inode *inode; struct file *file; int fd, err; @@ -409,11 +520,14 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) if (fd < 0) return fd; + err = -ENOMEM; gmem = kzalloc(sizeof(*gmem), GFP_KERNEL); - if (!gmem) { - err = -ENOMEM; + if (!gmem) + goto err_fd; + + gmem_priv = kzalloc(sizeof(*gmem_priv), GFP_KERNEL); + if (!gmem_priv) goto err_fd; - } file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem, O_RDWR, NULL); @@ -427,7 +541,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) inode = file->f_inode; WARN_ON(file->f_mapping != inode->i_mapping); - inode->i_private = (void *)(unsigned long)flags; + inode->i_private = gmem_priv; inode->i_op = &kvm_gmem_iops; inode->i_mapping->a_ops = &kvm_gmem_aops; inode->i_mode |= S_IFREG; @@ -442,6 +556,9 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) xa_init(&gmem->bindings); list_add(&gmem->entry, &inode->i_mapping->i_private_list); + xa_init(&gmem_priv->direct_map_state); + gmem_priv->flags = flags; + fd_install(fd, file); return fd; @@ -456,11 +573,14 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args) { loff_t size = args->size; u64 flags = args->flags; - u64 valid_flags = 0; + u64 valid_flags = KVM_GMEM_NO_DIRECT_MAP; if (flags & ~valid_flags) return -EINVAL; + if ((flags & KVM_GMEM_NO_DIRECT_MAP) && !can_set_direct_map()) + return -EOPNOTSUPP; + if (size <= 0 || 
!PAGE_ALIGNED(size))
 		return -EINVAL;
 
@@ -679,7 +799,6 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 			break;
 		}
 
-		folio_unlock(folio);
 		WARN_ON(!IS_ALIGNED(gfn, 1 << max_order) ||
 			(npages - i) < (1 << max_order));
 
@@ -695,7 +814,8 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 		p = src ? src + i * PAGE_SIZE : NULL;
 		ret = post_populate(kvm, gfn, pfn, p, max_order, opaque);
 		if (!ret)
-			kvm_gmem_mark_prepared(folio);
+			ret = kvm_gmem_finalize_folio(folio);
+		folio_unlock(folio);
 
 put_folio_and_exit:
 		folio_put(folio);

From patchwork Wed Oct 30 13:49:07 2024
X-Patchwork-Submitter: Patrick Roy
X-Patchwork-Id: 13856584
From: Patrick Roy
Subject: [RFC PATCH v3 3/6] kvm: gmem: implement direct map manipulation routines
Date: Wed, 30 Oct 2024 13:49:07 +0000
Message-ID: <20241030134912.515725-4-roypat@amazon.co.uk>
In-Reply-To: <20241030134912.515725-1-roypat@amazon.co.uk>
References: <20241030134912.515725-1-roypat@amazon.co.uk>

Implement (as yet unused) routines for manipulating guest_memfd direct map
state. This is largely for illustration purposes.

kvm_gmem_set_direct_map allows manipulating arbitrary pgoff_t ranges, even
if the covered memory has not yet been faulted in (in which case the
requested direct map state is recorded in the xarray and will be applied by
kvm_gmem_folio_configure_direct_map after the folio is faulted in and
prepared/populated). This can be used to realize private/shared conversions
on not-yet-faulted-in memory, as discussed in the guest_memfd upstream call
[1].

kvm_gmem_folio_set_direct_map allows manipulating the direct map entries for
a gmem folio that the caller already holds a reference to (whereas
kvm_gmem_set_direct_map needs to look up all folios intersecting the given
pgoff range in the filemap first).

The xa lock serializes calls to kvm_gmem_folio_set_direct_map and
kvm_gmem_set_direct_map, while the read side
(kvm_gmem_folio_configure_direct_map) is protected by RCU. This is
sufficient to ensure consistency between the xarray and the folio's actual
direct map state, as kvm_gmem_folio_configure_direct_map is called only for
freshly allocated folios, and before the folio lock is dropped for the first
time, meaning kvm_gmem_folio_configure_direct_map always does its
set_direct_map calls before either of kvm_gmem_[folio_]set_direct_map gets a
chance. Even if a concurrent call to kvm_gmem_[folio_]set_direct_map
happens, this ensures a sort of "eventual consistency" between the xarray
and the actual direct map configuration by the time
kvm_gmem_[folio_]set_direct_map exits.
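
For illustration only (not part of this patch): a future in-KVM caller that
wants to temporarily restore host access to a range of a guest_memfd (e.g.
for a virtio buffer) and later make it inaccessible again could use these
routines roughly as sketched below. The wrapper names are made up; the
kvm_gmem_set_direct_map signature is the one introduced here, and since the
routine is static the wrappers would have to live in virt/kvm/guest_memfd.c.

  /* Minimal sketch, assuming this patch is applied. */
  static int example_gmem_share_range(struct inode *inode,
                                      pgoff_t start, pgoff_t end)
  {
          /* Restore direct map entries so the host kernel can access the range. */
          return kvm_gmem_set_direct_map(inode, start, end, true);
  }

  static int example_gmem_unshare_range(struct inode *inode,
                                        pgoff_t start, pgoff_t end)
  {
          /* Remove the direct map entries again once the host is done. */
          return kvm_gmem_set_direct_map(inode, start, end, false);
  }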
[1]: https://lore.kernel.org/kvm/4b49248b-1cf1-44dc-9b50-ee551e1671ac@redhat.com/ Signed-off-by: Patrick Roy --- virt/kvm/guest_memfd.c | 125 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 125 insertions(+) diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 50ffc2ad73eda..54387828dcc6a 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -96,6 +96,131 @@ static int kvm_gmem_folio_configure_direct_map(struct folio *folio) return r; } +/* + * Updates the range [@start, @end] in @gmem_priv's direct map state xarray to be @state, + * e.g. erasing entries in this range if @state is the default state, and creating + * entries otherwise. + * + * Assumes the xa_lock is held. + */ +static int __kvm_gmem_update_xarray(struct kvm_gmem_inode_private *gmem_priv, pgoff_t start, + pgoff_t end, bool state) +{ + struct xarray *xa = &gmem_priv->direct_map_state; + int r = 0; + + /* + * Cannot use xa_store_range, as multi-indexes cannot easily + * be partially updated. + */ + for (pgoff_t index = start; index < end; ++index) { + if (state == gmem_priv->default_direct_map_state) + __xa_erase(xa, index); + else + /* don't care _what_ we store in the xarray, only care about presence */ + __xa_store(xa, index, gmem_priv, GFP_KERNEL); + + r = xa_err(xa); + if (r) + goto out; + } + +out: + return r; +} + +static int __kvm_gmem_folio_set_direct_map(struct folio *folio, pgoff_t start, pgoff_t end, + bool state) +{ + unsigned long npages = end - start + 1; + struct page *first_page = folio_file_page(folio, start); + + int r = set_direct_map_valid_noflush(first_page, npages, state); + + flush_tlb_kernel_range((unsigned long)page_address(first_page), + (unsigned long)page_address(first_page) + + npages * PAGE_SIZE); + return r; +} + +/* + * Updates the direct map status for the given range from @start to @end (inclusive), returning + * -EINVAL if this range is not completely contained within @folio. Also updates the + * xarray stored in the private data of the inode @folio is attached to. + * + * Takes and drops the folio lock. + */ +static __always_unused int kvm_gmem_folio_set_direct_map(struct folio *folio, pgoff_t start, + pgoff_t end, bool state) +{ + struct inode *inode = folio_inode(folio); + struct kvm_gmem_inode_private *gmem_priv = inode->i_private; + int r = -EINVAL; + + if (!folio_contains(folio, start) || !folio_contains(folio, end)) + goto out; + + xa_lock(&gmem_priv->direct_map_state); + r = __kvm_gmem_update_xarray(gmem_priv, start, end, state); + if (r) + goto unlock_xa; + + folio_lock(folio); + r = __kvm_gmem_folio_set_direct_map(folio, start, end, state); + folio_unlock(folio); + +unlock_xa: + xa_unlock(&gmem_priv->direct_map_state); +out: + return r; +} + +/* + * Updates the direct map status for the given range from @start to @end (inclusive) + * of @inode. Folios in this range have their direct map entries reconfigured, + * and the xarray in the @inode's private data is updated. 
+ */
+static __always_unused int kvm_gmem_set_direct_map(struct inode *inode, pgoff_t start,
+						    pgoff_t end, bool state)
+{
+	struct kvm_gmem_inode_private *gmem_priv = inode->i_private;
+	struct folio_batch fbatch;
+	pgoff_t index = start;
+	unsigned int count, i;
+	int r = 0;
+
+	xa_lock(&gmem_priv->direct_map_state);
+
+	r = __kvm_gmem_update_xarray(gmem_priv, start, end, state);
+	if (r)
+		goto out;
+
+	folio_batch_init(&fbatch);
+	while (!filemap_get_folios(inode->i_mapping, &index, end, &fbatch) && !r) {
+		count = folio_batch_count(&fbatch);
+		for (i = 0; i < count; i++) {
+			struct folio *folio = fbatch.folios[i];
+			pgoff_t folio_start = max(folio_index(folio), start);
+			pgoff_t folio_end =
+				min(folio_index(folio) + folio_nr_pages(folio),
+				    end);
+
+			folio_lock(folio);
+			r = __kvm_gmem_folio_set_direct_map(folio, folio_start,
+							    folio_end, state);
+			folio_unlock(folio);
+
+			if (r)
+				break;
+		}
+		folio_batch_release(&fbatch);
+	}
+
+	xa_unlock(&gmem_priv->direct_map_state);
+out:
+	return r;
+}
+
 /**
  * folio_file_pfn - like folio_file_page, but return a pfn.
  * @folio: The folio which contains this index.

From patchwork Wed Oct 30 13:49:08 2024
X-Patchwork-Submitter: Patrick Roy
X-Patchwork-Id: 13856654
From: Patrick Roy
Subject: [RFC PATCH v3 4/6] kvm: gmem: add trace point for direct map state changes
Date: Wed, 30 Oct 2024 13:49:08 +0000
Message-ID: <20241030134912.515725-5-roypat@amazon.co.uk>
In-Reply-To: <20241030134912.515725-1-roypat@amazon.co.uk>
References: <20241030134912.515725-1-roypat@amazon.co.uk>

Add tracepoints to kvm_gmem_set_direct_map and
kvm_gmem_folio_set_direct_map. These operations can cause folios to be
inserted into or removed from the direct map.

We want to be able to make sure that only those gmem folios that we expect
KVM to access are ever reinserted into the direct map, and that all folios
that are temporarily reinserted are also removed again at a later point.
Processing ftrace output is one way to verify this, for example with a
small reader like the sketch below.
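
For illustration only (not part of this patch): a minimal userspace sketch
for watching the new event, assuming tracefs is mounted at
/sys/kernel/tracing and that the event has been enabled beforehand via
events/kvm/kvm_gmem_direct_map_state_change/enable.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          /* trace_pipe blocks until new trace entries are available. */
          FILE *f = fopen("/sys/kernel/tracing/trace_pipe", "r");
          char line[1024];

          if (!f)
                  return 1;

          /* Print only the guest_memfd direct map state change events. */
          while (fgets(line, sizeof(line), f))
                  if (strstr(line, "kvm_gmem_direct_map_state_change"))
                          fputs(line, stdout);

          fclose(f);
          return 0;
  }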
Signed-off-by: Patrick Roy
---
 include/trace/events/kvm.h | 22 ++++++++++++++++++++++
 virt/kvm/guest_memfd.c     |  5 +++++
 2 files changed, 27 insertions(+)

diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm.h
index 74e40d5d4af42..f3d852c18fa08 100644
--- a/include/trace/events/kvm.h
+++ b/include/trace/events/kvm.h
@@ -489,6 +489,28 @@ TRACE_EVENT(kvm_test_age_hva,
 	TP_printk("mmu notifier test age hva: %#016lx", __entry->hva)
 );
 
+#ifdef CONFIG_KVM_PRIVATE_MEM
+TRACE_EVENT(kvm_gmem_direct_map_state_change,
+	TP_PROTO(pgoff_t start, pgoff_t end, bool state),
+	TP_ARGS(start, end, state),
+
+	TP_STRUCT__entry(
+		__field(pgoff_t, start)
+		__field(pgoff_t, end)
+		__field(bool, state)
+	),
+
+	TP_fast_assign(
+		__entry->start = start;
+		__entry->end = end;
+		__entry->state = state;
+	),
+
+	TP_printk("changed direct map state of guest_memfd range %lu to %lu to %s",
+		  __entry->start, __entry->end, __entry->state ? "present" : "not present")
+);
+#endif
+
 #endif /* _TRACE_KVM_MAIN_H */
 
 /* This part must be outside protection */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 54387828dcc6a..a0b3b9cacd361 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -7,6 +7,7 @@
 #include
 
 #include "kvm_mm.h"
+#include "trace/events/kvm.h"
 
 struct kvm_gmem {
 	struct kvm *kvm;
@@ -169,6 +170,8 @@ static __always_unused int kvm_gmem_folio_set_direct_map(struct folio *folio, pg
 	r = __kvm_gmem_folio_set_direct_map(folio, start, end, state);
 	folio_unlock(folio);
 
+	trace_kvm_gmem_direct_map_state_change(start, end, state);
+
 unlock_xa:
 	xa_unlock(&gmem_priv->direct_map_state);
 out:
@@ -216,6 +219,8 @@ static __always_unused int kvm_gmem_set_direct_map(struct inode *inode, pgoff_t
 		folio_batch_release(&fbatch);
 	}
 
+	trace_kvm_gmem_direct_map_state_change(start, end, state);
+
 	xa_unlock(&gmem_priv->direct_map_state);
 out:
 	return r;

From patchwork Wed Oct 30 13:49:09 2024
X-Patchwork-Submitter: Patrick Roy
X-Patchwork-Id: 13856582
From: Patrick Roy
Subject: [RFC PATCH v3 5/6] kvm: document KVM_GMEM_NO_DIRECT_MAP flag
Date: Wed, 30 Oct 2024 13:49:09 +0000
Message-ID: <20241030134912.515725-6-roypat@amazon.co.uk>
In-Reply-To: <20241030134912.515725-1-roypat@amazon.co.uk>
References: <20241030134912.515725-1-roypat@amazon.co.uk>

Signed-off-by: Patrick Roy
---
 Documentation/virt/kvm/api.rst | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index edc070c6e19b2..c8e21c523411c 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6382,6 +6382,20 @@ a single guest_memfd file, but the bound ranges must not overlap).
 
 See KVM_SET_USER_MEMORY_REGION2 for additional details.
 
+The following flags are defined:
+
+KVM_GMEM_NO_DIRECT_MAP
+  Ensure memory backing this guest_memfd inode is unmapped from the kernel's
+  address space.
+
+Errors:
+
+  ========== ===============================================================
+  EOPNOTSUPP `KVM_GMEM_NO_DIRECT_MAP` was set in `flags`, but the host does
+             not support direct map manipulations.
+  ========== ===============================================================
+
+
 4.143 KVM_PRE_FAULT_MEMORY
 ---------------------------

From patchwork Wed Oct 30 13:49:10 2024
X-Patchwork-Submitter: Patrick Roy
X-Patchwork-Id: 13856583
From: Patrick Roy
Subject: [RFC PATCH v3 6/6] kvm: selftests: run gmem tests with KVM_GMEM_NO_DIRECT_MAP set
Date: Wed, 30 Oct 2024 13:49:10 +0000
Message-ID: <20241030134912.515725-7-roypat@amazon.co.uk>
In-Reply-To: <20241030134912.515725-1-roypat@amazon.co.uk>
References: <20241030134912.515725-1-roypat@amazon.co.uk>
Run the guest_memfd selftests with KVM_GMEM_NO_DIRECT_MAP set as well.

Also adjust test_create_guest_memfd_invalid, as BIT(0) is now a valid value
for flags (note that this also fixes an issue where the loop in
test_create_guest_memfd_invalid was a no-op. I posted that fix as a separate
patch last week [1]).

[1]: https://lore.kernel.org/kvm/20241024095956.3668818-1-roypat@amazon.co.uk/

Signed-off-by: Patrick Roy
---
 tools/testing/selftests/kvm/guest_memfd_test.c           | 2 +-
 .../selftests/kvm/x86_64/private_mem_conversions_test.c  | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ba0c8e9960358..d04f7ff3dfb15 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -134,7 +134,7 @@ static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
 			    size);
 	}
 
-	for (flag = 0; flag; flag <<= 1) {
+	for (flag = BIT(1); flag; flag <<= 1) {
 		fd = __vm_create_guest_memfd(vm, page_size, flag);
 		TEST_ASSERT(fd == -1 && errno == EINVAL,
 			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 82a8d88b5338e..dfc78781e93b8 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -367,7 +367,7 @@ static void *__test_mem_conversions(void *__vcpu)
 }
 
 static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t nr_vcpus,
-				 uint32_t nr_memslots)
+				 uint32_t nr_memslots, uint64_t gmem_flags)
 {
 	/*
 	 * Allocate enough memory so that each vCPU's chunk of memory can be
@@ -394,7 +394,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 
 	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
 
-	memfd = vm_create_guest_memfd(vm, memfd_size, 0);
+	memfd = vm_create_guest_memfd(vm, memfd_size, gmem_flags);
 
 	for (i = 0; i < nr_memslots; i++)
 		vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
@@ -477,7 +477,8 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	test_mem_conversions(src_type, nr_vcpus, nr_memslots);
+	test_mem_conversions(src_type, nr_vcpus, nr_memslots, 0);
+	test_mem_conversions(src_type, nr_vcpus, nr_memslots, KVM_GMEM_NO_DIRECT_MAP);
 
 	return 0;
 }