From patchwork Tue Aug 13 13:41:56 2024
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 13762066
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org, Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 1/3] mini-os: mm: introduce generic page table walk function
Date: Tue, 13 Aug 2024 15:41:56 +0200
Message-ID: <20240813134158.580-2-jgross@suse.com>
In-Reply-To: <20240813134158.580-1-jgross@suse.com>
References: <20240813134158.580-1-jgross@suse.com>

In x86 mm code there are multiple instances of page table walks for
different purposes. Introduce a generic page table walker able to cover
the current use cases. It will be used for other cases in the future, too.

The page table walker needs some per-level data, so add a table for that
data. Merge it with the already existing pt_prot[] array.

Rewrite get_pgt() to use the new walker.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
V2:
- add idx_from_va_lvl() helper (Samuel Thibault)
---
 arch/x86/mm.c | 157 +++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 118 insertions(+), 39 deletions(-)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 7ddf16e4..9849b985 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -125,20 +125,30 @@ void arch_mm_preinit(void *p)
 }
 #endif
 
+static const struct {
+    unsigned int shift;
+    unsigned int entries;
+    pgentry_t prot;
+} ptdata[PAGETABLE_LEVELS + 1] = {
+    { 0, 0, 0 },
+    { L1_PAGETABLE_SHIFT, L1_PAGETABLE_ENTRIES, L1_PROT },
+    { L2_PAGETABLE_SHIFT, L2_PAGETABLE_ENTRIES, L2_PROT },
+    { L3_PAGETABLE_SHIFT, L3_PAGETABLE_ENTRIES, L3_PROT },
+#if defined(__x86_64__)
+    { L4_PAGETABLE_SHIFT, L4_PAGETABLE_ENTRIES, L4_PROT },
+#endif
+};
+
+static inline unsigned int idx_from_va_lvl(unsigned long va, unsigned int lvl)
+{
+    return (va >> ptdata[lvl].shift) & (ptdata[lvl].entries - 1);
+}
+
 /*
  * Make pt_pfn a new 'level' page table frame and hook it into the page
  * table at offset in previous level MFN (pref_l_mfn). pt_pfn is a guest
  * PFN.
  */
-static pgentry_t pt_prot[PAGETABLE_LEVELS] = {
-    L1_PROT,
-    L2_PROT,
-    L3_PROT,
-#if defined(__x86_64__)
-    L4_PROT,
-#endif
-};
-
 static void new_pt_frame(unsigned long *pt_pfn, unsigned long prev_l_mfn,
                          unsigned long offset, unsigned long level)
 {
@@ -170,7 +180,7 @@ static void new_pt_frame(unsigned long *pt_pfn, unsigned long prev_l_mfn,
     mmu_updates[0].ptr = (tab[l2_table_offset(pt_page)] & PAGE_MASK) +
                          sizeof(pgentry_t) * l1_table_offset(pt_page);
     mmu_updates[0].val = (pgentry_t)pfn_to_mfn(*pt_pfn) << PAGE_SHIFT |
-                         (pt_prot[level - 1] & ~_PAGE_RW);
+                         (ptdata[level].prot & ~_PAGE_RW);
 
     if ( (rc = HYPERVISOR_mmu_update(mmu_updates, 1, NULL, DOMID_SELF)) < 0 )
     {
@@ -183,7 +193,7 @@ static void new_pt_frame(unsigned long *pt_pfn, unsigned long prev_l_mfn,
     mmu_updates[0].ptr = ((pgentry_t)prev_l_mfn << PAGE_SHIFT) +
                          sizeof(pgentry_t) * offset;
     mmu_updates[0].val = (pgentry_t)pfn_to_mfn(*pt_pfn) << PAGE_SHIFT |
-                         pt_prot[level];
+                         ptdata[level + 1].prot;
 
     if ( (rc = HYPERVISOR_mmu_update(mmu_updates, 1, NULL, DOMID_SELF)) < 0 )
     {
@@ -192,7 +202,7 @@ static void new_pt_frame(unsigned long *pt_pfn, unsigned long prev_l_mfn,
     }
 #else
     tab = mfn_to_virt(prev_l_mfn);
-    tab[offset] = (*pt_pfn << PAGE_SHIFT) | pt_prot[level];
+    tab[offset] = (*pt_pfn << PAGE_SHIFT) | ptdata[level + 1].prot;
 #endif
 
     *pt_pfn += 1;
@@ -202,6 +212,82 @@ static void new_pt_frame(unsigned long *pt_pfn, unsigned long prev_l_mfn,
 static mmu_update_t mmu_updates[L1_PAGETABLE_ENTRIES + 1];
 #endif
 
+/*
+ * Walk recursively through all PTEs calling a specified function. The function
+ * is allowed to change the PTE, the walker will follow the new value.
+ * The walk will cover the virtual address range [from_va .. to_va].
+ * The supplied function will be called with the following parameters:
+ * va: base virtual address of the area covered by the current PTE
+ * lvl: page table level of the PTE (1 = lowest level, PAGETABLE_LEVELS =
+ *      PTE in page table addressed by %cr3)
+ * is_leaf: true if PTE doesn't address another page table (it is either at
+ *          level 1, or invalid, or has its PSE bit set)
+ * pte: address of the PTE
+ * par: parameter, passed to walk_pt() by caller
+ * Return value of func() being non-zero will terminate walk_pt(), walk_pt()
+ * will return that value in this case, zero else.
+ */
+static int walk_pt(unsigned long from_va, unsigned long to_va,
+                   int (func)(unsigned long va, unsigned int lvl,
+                              bool is_leaf, pgentry_t *pte, void *par),
+                   void *par)
+{
+    unsigned int lvl = PAGETABLE_LEVELS;
+    unsigned int ptindex[PAGETABLE_LEVELS + 1];
+    unsigned long va = round_pgdown(from_va);
+    unsigned long va_lvl;
+    pgentry_t *tab[PAGETABLE_LEVELS + 1];
+    pgentry_t *pte;
+    bool is_leaf;
+    int ret;
+
+    /* Start at top level page table. */
+    tab[lvl] = pt_base;
+    ptindex[lvl] = idx_from_va_lvl(va, lvl);
+
+    while ( va < (to_va | (PAGE_SIZE - 1)) )
+    {
+        pte = tab[lvl] + ptindex[lvl];
+        is_leaf = (lvl == L1_FRAME) || (*pte & _PAGE_PSE) ||
+                  !(*pte & _PAGE_PRESENT);
+        va_lvl = va & ~((1UL << ptdata[lvl].shift) - 1);
+        ret = func(va_lvl, lvl, is_leaf, pte, par);
+        if ( ret )
+            return ret;
+
+        /* PTE might have been modified by func(), reevaluate leaf state. */
+        is_leaf = (lvl == L1_FRAME) || (*pte & _PAGE_PSE) ||
+                  !(*pte & _PAGE_PRESENT);
+
+        if ( is_leaf )
+        {
+            /* Reached a leaf PTE. Advance to next page. */
+            va += 1UL << ptdata[lvl].shift;
+            ptindex[lvl]++;
+
+            /* Check for the need to traverse up again. */
+            while ( ptindex[lvl] == ptdata[lvl].entries )
+            {
+                /* End of virtual address space? */
+                if ( lvl == PAGETABLE_LEVELS )
+                    return 0;
+                /* Reached end of current page table, one level up. */
+                lvl++;
+                ptindex[lvl]++;
+            }
+        }
+        else
+        {
+            /* Not a leaf, walk one level down. */
+            lvl--;
+            tab[lvl] = mfn_to_virt(pte_to_mfn(*pte));
+            ptindex[lvl] = idx_from_va_lvl(va, lvl);
+        }
+    }
+
+    return 0;
+}
+
 /*
  * Build the initial pagetable.
  */
@@ -407,36 +493,29 @@ static void set_readonly(void *text, void *etext)
 /*
  * get the PTE for virtual address va if it exists. Otherwise NULL.
  */
-static pgentry_t *get_pgt(unsigned long va)
+static int get_pgt_func(unsigned long va, unsigned int lvl, bool is_leaf,
+                        pgentry_t *pte, void *par)
 {
-    unsigned long mfn;
-    pgentry_t *tab;
-    unsigned offset;
+    pgentry_t **result;
 
-    tab = pt_base;
-    mfn = virt_to_mfn(pt_base);
+    if ( !(*pte & _PAGE_PRESENT) && lvl > L1_FRAME )
+        return -1;
 
-#if defined(__x86_64__)
-    offset = l4_table_offset(va);
-    if ( !(tab[offset] & _PAGE_PRESENT) )
-        return NULL;
-    mfn = pte_to_mfn(tab[offset]);
-    tab = mfn_to_virt(mfn);
-#endif
-    offset = l3_table_offset(va);
-    if ( !(tab[offset] & _PAGE_PRESENT) )
-        return NULL;
-    mfn = pte_to_mfn(tab[offset]);
-    tab = mfn_to_virt(mfn);
-    offset = l2_table_offset(va);
-    if ( !(tab[offset] & _PAGE_PRESENT) )
-        return NULL;
-    if ( tab[offset] & _PAGE_PSE )
-        return &tab[offset];
-    mfn = pte_to_mfn(tab[offset]);
-    tab = mfn_to_virt(mfn);
-    offset = l1_table_offset(va);
-    return &tab[offset];
+    if ( lvl > L1_FRAME && !(*pte & _PAGE_PSE) )
+        return 0;
+
+    result = par;
+    *result = pte;
+
+    return 0;
+}
+
+static pgentry_t *get_pgt(unsigned long va)
+{
+    pgentry_t *tab = NULL;
+
+    walk_pt(va, va, get_pgt_func, &tab);
+    return tab;
 }
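
As an illustration of the walk_pt() callback contract described in the patch
above (not part of the patch itself): a hypothetical callback that counts the
present leaf mappings in a virtual address range. The names count_present_func
and count_present are invented for this sketch; the types and flags
(pgentry_t, _PAGE_PRESENT, walk_pt()) are assumed to be those visible in
arch/x86/mm.c.

static int count_present_func(unsigned long va, unsigned int lvl,
                              bool is_leaf, pgentry_t *pte, void *par)
{
    unsigned long *count = par;

    /* Only leaf PTEs map memory directly; intermediate tables are skipped. */
    if ( is_leaf && (*pte & _PAGE_PRESENT) )
        (*count)++;

    return 0;    /* returning non-zero would abort the walk */
}

static unsigned long count_present(unsigned long from_va, unsigned long to_va)
{
    unsigned long count = 0;

    walk_pt(from_va, to_va, count_present_func, &count);
    return count;
}

A non-zero return value from the callback stops the walk and is propagated by
walk_pt(); get_pgt_func() above relies on exactly that for its "not mapped"
case.
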
From patchwork Tue Aug 13 13:41:57 2024
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 13762067
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org, Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 2/3] mini-os: mm: switch need_pgt() to use walk_pt()
Date: Tue, 13 Aug 2024 15:41:57 +0200
Message-ID: <20240813134158.580-3-jgross@suse.com>
In-Reply-To: <20240813134158.580-1-jgross@suse.com>
References: <20240813134158.580-1-jgross@suse.com>

Instead of open coding a page table walk, use walk_pt() in need_pgt().

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
V2:
- add comment and ASSERT() (Samuel Thibault)
---
 arch/x86/mm.c | 72 +++++++++++++++++++++------------------------------
 1 file changed, 30 insertions(+), 42 deletions(-)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 9849b985..84a6d7f0 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -523,57 +523,45 @@ static pgentry_t *get_pgt(unsigned long va)
  * return a valid PTE for a given virtual address. If PTE does not exist,
  * allocate page-table pages.
  */
-pgentry_t *need_pgt(unsigned long va)
+static int need_pgt_func(unsigned long va, unsigned int lvl, bool is_leaf,
+                         pgentry_t *pte, void *par)
 {
+    pgentry_t **result = par;
     unsigned long pt_mfn;
-    pgentry_t *tab;
     unsigned long pt_pfn;
-    unsigned offset;
+    unsigned int idx;
 
-    tab = pt_base;
-    pt_mfn = virt_to_mfn(pt_base);
+    if ( !is_leaf )
+        return 0;
 
-#if defined(__x86_64__)
-    offset = l4_table_offset(va);
-    if ( !(tab[offset] & _PAGE_PRESENT) )
-    {
-        pt_pfn = virt_to_pfn(alloc_page());
-        if ( !pt_pfn )
-            return NULL;
-        new_pt_frame(&pt_pfn, pt_mfn, offset, L3_FRAME);
-    }
-    ASSERT(tab[offset] & _PAGE_PRESENT);
-    pt_mfn = pte_to_mfn(tab[offset]);
-    tab = mfn_to_virt(pt_mfn);
-#endif
-    offset = l3_table_offset(va);
-    if ( !(tab[offset] & _PAGE_PRESENT) )
-    {
-        pt_pfn = virt_to_pfn(alloc_page());
-        if ( !pt_pfn )
-            return NULL;
-        new_pt_frame(&pt_pfn, pt_mfn, offset, L2_FRAME);
-    }
-    ASSERT(tab[offset] & _PAGE_PRESENT);
-    pt_mfn = pte_to_mfn(tab[offset]);
-    tab = mfn_to_virt(pt_mfn);
-    offset = l2_table_offset(va);
-    if ( !(tab[offset] & _PAGE_PRESENT) )
+    if ( lvl == L1_FRAME || (*pte & _PAGE_PRESENT) )
     {
-        pt_pfn = virt_to_pfn(alloc_page());
-        if ( !pt_pfn )
-            return NULL;
-        new_pt_frame(&pt_pfn, pt_mfn, offset, L1_FRAME);
+        /*
+         * The PTE is not addressing a page table (is_leaf is true). If we are
+         * either at the lowest level or we have a valid large page, we don't
+         * need to allocate a page table.
+         */
+        ASSERT(lvl == L1_FRAME || (*pte & _PAGE_PSE));
+        *result = pte;
+        return 1;
     }
-    ASSERT(tab[offset] & _PAGE_PRESENT);
-    if ( tab[offset] & _PAGE_PSE )
-        return &tab[offset];
-    pt_mfn = pte_to_mfn(tab[offset]);
-    tab = mfn_to_virt(pt_mfn);
+    pt_mfn = virt_to_mfn(pte);
+    pt_pfn = virt_to_pfn(alloc_page());
+    if ( !pt_pfn )
+        return -1;
+    idx = idx_from_va_lvl(va, lvl);
+    new_pt_frame(&pt_pfn, pt_mfn, idx, lvl - 1);
 
-    offset = l1_table_offset(va);
-    return &tab[offset];
+    return 0;
+}
+
+pgentry_t *need_pgt(unsigned long va)
+{
+    pgentry_t *tab = NULL;
+
+    walk_pt(va, va, need_pgt_func, &tab);
+    return tab;
 }
 EXPORT_SYMBOL(need_pgt);
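
To illustrate how the two lookup helpers differ for callers (this snippet is
not part of the patch): get_pgt() only reads the existing page tables and
returns NULL when nothing is mapped, while need_pgt() allocates any missing
page-table pages and returns NULL only on allocation failure. The wrapper
name pte_for_mapping is hypothetical; get_pgt(), need_pgt() and do_exit()
are the Mini-OS functions visible above.

static pgentry_t *pte_for_mapping(unsigned long va)
{
    pgentry_t *pte = get_pgt(va);     /* cheap lookup, may return NULL */

    if ( pte == NULL )
    {
        pte = need_pgt(va);           /* allocates missing page tables */
        if ( pte == NULL )
            do_exit();                /* out of memory */
    }

    return pte;
}
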
From patchwork Tue Aug 13 13:41:58 2024
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 13762069
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org, Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 3/3] mini-os: mm: convert set_readonly() to use walk_pt()
Date: Tue, 13 Aug 2024 15:41:58 +0200
Message-ID: <20240813134158.580-4-jgross@suse.com>
In-Reply-To: <20240813134158.580-1-jgross@suse.com>
References: <20240813134158.580-1-jgross@suse.com>

Instead of having another copy of a page table walk in set_readonly(),
just use walk_pt().

As it will be needed later anyway, split out the TLB flushing into a
dedicated function.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
V2:
- clear count after doing an mmu_update call (Samuel Thibault)
- do final mmu_update call from set_readonly() if needed (Samuel Thibault)
---
 arch/x86/mm.c | 124 +++++++++++++++++++++++---------------------------
 1 file changed, 56 insertions(+), 68 deletions(-)

diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 84a6d7f0..85827d93 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -402,92 +402,80 @@ static void build_pagetable(unsigned long *start_pfn, unsigned long *max_pfn)
 /*
  * Mark portion of the address space read only.
  */
 extern struct shared_info shared_info;
-static void set_readonly(void *text, void *etext)
-{
-    unsigned long start_address =
-        ((unsigned long) text + PAGE_SIZE - 1) & PAGE_MASK;
-    unsigned long end_address = (unsigned long) etext;
-    pgentry_t *tab = pt_base, page;
-    unsigned long mfn = pfn_to_mfn(virt_to_pfn(pt_base));
-    unsigned long offset;
-    unsigned long page_size = PAGE_SIZE;
+
+struct set_readonly_par {
+    unsigned long etext;
 #ifdef CONFIG_PARAVIRT
-    int count = 0;
-    int rc;
+    unsigned int count;
 #endif
+};
 
-    printk("setting %p-%p readonly\n", text, etext);
+static int set_readonly_func(unsigned long va, unsigned int lvl, bool is_leaf,
+                             pgentry_t *pte, void *par)
+{
+    struct set_readonly_par *ro = par;
 
-    while ( start_address + page_size <= end_address )
-    {
-        tab = pt_base;
-        mfn = pfn_to_mfn(virt_to_pfn(pt_base));
+    if ( !is_leaf )
+        return 0;
 
-#if defined(__x86_64__)
-        offset = l4_table_offset(start_address);
-        page = tab[offset];
-        mfn = pte_to_mfn(page);
-        tab = to_virt(mfn_to_pfn(mfn) << PAGE_SHIFT);
-#endif
-        offset = l3_table_offset(start_address);
-        page = tab[offset];
-        mfn = pte_to_mfn(page);
-        tab = to_virt(mfn_to_pfn(mfn) << PAGE_SHIFT);
-        offset = l2_table_offset(start_address);
-        if ( !(tab[offset] & _PAGE_PSE) )
-        {
-            page = tab[offset];
-            mfn = pte_to_mfn(page);
-            tab = to_virt(mfn_to_pfn(mfn) << PAGE_SHIFT);
+    if ( va + (1UL << ptdata[lvl].shift) > ro->etext )
+        return 1;
 
-            offset = l1_table_offset(start_address);
-        }
+    if ( va == (unsigned long)&shared_info )
+    {
+        printk("skipped %lx\n", va);
+        return 0;
+    }
 
-        if ( start_address != (unsigned long)&shared_info )
-        {
 #ifdef CONFIG_PARAVIRT
-            mmu_updates[count].ptr =
-                ((pgentry_t)mfn << PAGE_SHIFT) + sizeof(pgentry_t) * offset;
-            mmu_updates[count].val = tab[offset] & ~_PAGE_RW;
-            count++;
+    mmu_updates[ro->count].ptr = virt_to_mach(pte);
+    mmu_updates[ro->count].val = *pte & ~_PAGE_RW;
+    ro->count++;
+
+    if ( ro->count == L1_PAGETABLE_ENTRIES )
+    {
+        if ( HYPERVISOR_mmu_update(mmu_updates, ro->count, NULL,
+                                   DOMID_SELF) < 0 )
+            BUG();
+        ro->count = 0;
+    }
 #else
-            tab[offset] &= ~_PAGE_RW;
+    *pte &= ~_PAGE_RW;
 #endif
-        }
-        else
-            printk("skipped %lx\n", start_address);
-        start_address += page_size;
+
+    return 0;
+}
 
 #ifdef CONFIG_PARAVIRT
-        if ( count == L1_PAGETABLE_ENTRIES ||
-             start_address + page_size > end_address )
-        {
-            rc = HYPERVISOR_mmu_update(mmu_updates, count, NULL, DOMID_SELF);
-            if ( rc < 0 )
-            {
-                printk("ERROR: set_readonly(): PTE could not be updated\n");
-                do_exit();
-            }
-            count = 0;
-        }
+static void tlb_flush(void)
+{
+    mmuext_op_t op = { .cmd = MMUEXT_TLB_FLUSH_ALL };
+    int count;
+
+    HYPERVISOR_mmuext_op(&op, 1, &count, DOMID_SELF);
+}
 #else
-        if ( start_address == (1UL << L2_PAGETABLE_SHIFT) )
-            page_size = 1UL << L2_PAGETABLE_SHIFT;
+static void tlb_flush(void)
+{
+    write_cr3((unsigned long)pt_base);
+}
 #endif
-    }
+
+static void set_readonly(void *text, void *etext)
+{
+    struct set_readonly_par setro = { .etext = (unsigned long)etext };
+    unsigned long start_address = PAGE_ALIGN((unsigned long)text);
+
+    printk("setting %p-%p readonly\n", text, etext);
+    walk_pt(start_address, setro.etext, set_readonly_func, &setro);
 
 #ifdef CONFIG_PARAVIRT
-    {
-        mmuext_op_t op = {
-            .cmd = MMUEXT_TLB_FLUSH_ALL,
-        };
-        int count;
-        HYPERVISOR_mmuext_op(&op, 1, &count, DOMID_SELF);
-    }
-#else
-    write_cr3((unsigned long)pt_base);
+    if ( setro.count &&
+         HYPERVISOR_mmu_update(mmu_updates, setro.count, NULL,
+                               DOMID_SELF) < 0 )
+        BUG();
 #endif
+
+    tlb_flush();
 }
 
 /*
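
The PARAVIRT path above batches PTE updates into mmu_updates[], submits a full
batch from inside the callback, and lets set_readonly() submit whatever is
left over at the end. A generic, hypothetical sketch of that
batch-and-flush-remainder pattern follows (plain C, not Mini-OS code;
submit_batch() stands in for HYPERVISOR_mmu_update(), and all names are
invented for illustration).

#include <stddef.h>

#define BATCH_SIZE 512

struct update {
    void *ptr;
    unsigned long val;
};

static struct update batch[BATCH_SIZE];
static size_t batch_count;

/* Placeholder for the real submission primitive (e.g. a hypercall). */
extern int submit_batch(const struct update *upd, size_t n);

static void queue_update(void *ptr, unsigned long val)
{
    batch[batch_count].ptr = ptr;
    batch[batch_count].val = val;
    batch_count++;

    if ( batch_count == BATCH_SIZE )
    {
        submit_batch(batch, batch_count);   /* flush a full batch */
        batch_count = 0;                    /* reset only after submitting */
    }
}

static void flush_remaining(void)
{
    if ( batch_count )
        submit_batch(batch, batch_count);   /* final partial batch */
}

Resetting the counter only after the submission, and submitting the remainder
once at the end, is exactly the behaviour the V2 changelog above asks for.
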