From patchwork Fri Jan 19 14:40:05 2024
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, philmd@linaro.org
Subject: [RFC PATCH 15/34] exec: [CPUTLB] Move TLB_*/tlb_*() to common header
Date: Fri, 19 Jan 2024 15:40:05 +0100
Message-ID: <20240119144024.14289-16-anjo@rev.ng>
In-Reply-To: <20240119144024.14289-1-anjo@rev.ng>
References: <20240119144024.14289-1-anjo@rev.ng>

TLB_* macros and tlb_*() functions are target independent; move them to
cpu-common.h.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-all.h    | 81 --------------------------------------
 include/exec/cpu-common.h | 83 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 83 insertions(+), 81 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 219544bad8..968fbd4d16 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -295,87 +295,6 @@ CPUArchState *cpu_copy(CPUArchState *env);
      | CPU_INTERRUPT_TGT_EXT_3 \
      | CPU_INTERRUPT_TGT_EXT_4)
 
-#ifdef CONFIG_USER_ONLY
-
-/*
- * Allow some level of source compatibility with softmmu. We do not
- * support any of the more exotic features, so only invalid pages may
- * be signaled by probe_access_flags().
- */
-#define TLB_INVALID_MASK    (1 << (TARGET_PAGE_BITS_MIN - 1))
-#define TLB_MMIO            (1 << (TARGET_PAGE_BITS_MIN - 2))
-#define TLB_WATCHPOINT      0
-
-#else
-
-/*
- * Flags stored in the low bits of the TLB virtual address.
- * These are defined so that fast path ram access is all zeros.
- * The flags all must be between TARGET_PAGE_BITS and
- * maximum address alignment bit.
- *
- * Use TARGET_PAGE_BITS_MIN so that these bits are constant
- * when TARGET_PAGE_BITS_VARY is in effect.
- *
- * The count, if not the placement of these bits is known
- * to tcg/tcg-op-ldst.c, check_max_alignment().
- */
-/* Zero if TLB entry is valid.  */
-#define TLB_INVALID_MASK    (1 << (TARGET_PAGE_BITS_MIN - 1))
-/* Set if TLB entry references a clean RAM page.  The iotlb entry will
-   contain the page physical address.  */
-#define TLB_NOTDIRTY        (1 << (TARGET_PAGE_BITS_MIN - 2))
-/* Set if TLB entry is an IO callback.  */
-#define TLB_MMIO            (1 << (TARGET_PAGE_BITS_MIN - 3))
-/* Set if TLB entry writes ignored.  */
-#define TLB_DISCARD_WRITE   (1 << (TARGET_PAGE_BITS_MIN - 4))
-/* Set if the slow path must be used; more flags in CPUTLBEntryFull. */
-#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS_MIN - 5))
-
-/*
- * Use this mask to check interception with an alignment mask
- * in a TCG backend.
- */
-#define TLB_FLAGS_MASK \
-    (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-    | TLB_FORCE_SLOW | TLB_DISCARD_WRITE)
-
-/*
- * Flags stored in CPUTLBEntryFull.slow_flags[x].
- * TLB_FORCE_SLOW must be set in CPUTLBEntry.addr_idx[x].
- */
-/* Set if TLB entry requires byte swap.  */
-#define TLB_BSWAP            (1 << 0)
-/* Set if TLB entry contains a watchpoint.  */
-#define TLB_WATCHPOINT       (1 << 1)
-
-#define TLB_SLOW_FLAGS_MASK  (TLB_BSWAP | TLB_WATCHPOINT)
-
-/**
- * tlb_hit_page: return true if page aligned @addr is a hit against the
- * TLB entry @tlb_addr
- *
- * @addr: virtual address to test (must be page aligned)
- * @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
- */
-static inline bool tlb_hit_page(uint64_t tlb_addr, vaddr addr)
-{
-    return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
-}
-
-/**
- * tlb_hit: return true if @addr is a hit against the TLB entry @tlb_addr
- *
- * @addr: virtual address to test (need not be page aligned)
- * @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
- */
-static inline bool tlb_hit(uint64_t tlb_addr, vaddr addr)
-{
-    return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
-}
-
-#endif /* !CONFIG_USER_ONLY */
-
 /* accel/tcg/cpu-exec.c */
 int cpu_exec(CPUState *cpu);

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 63bc0ad150..d3c8b2cf55 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -41,6 +41,89 @@ extern const TargetPageBits target_page;
 #define TARGET_PAGE_ALIGN(addr) ROUND_UP((addr), TARGET_PAGE_SIZE)
 #endif
 
+#ifdef CONFIG_USER_ONLY
+
+/*
+ * Allow some level of source compatibility with softmmu. We do not
+ * support any of the more exotic features, so only invalid pages may
+ * be signaled by probe_access_flags().
+ */
+#define TLB_INVALID_MASK    (1 << (TARGET_PAGE_BITS_MIN - 1))
+#define TLB_MMIO            (1 << (TARGET_PAGE_BITS_MIN - 2))
+#define TLB_WATCHPOINT      0
+
+#else
+
+/*
+ * Flags stored in the low bits of the TLB virtual address.
+ * These are defined so that fast path ram access is all zeros.
+ * The flags all must be between TARGET_PAGE_BITS and
+ * maximum address alignment bit.
+ *
+ * Use TARGET_PAGE_BITS_MIN so that these bits are constant
+ * when TARGET_PAGE_BITS_VARY is in effect.
+ *
+ * The count, if not the placement of these bits is known
+ * to tcg/tcg-op-ldst.c, check_max_alignment().
+ */
+/* Zero if TLB entry is valid.  */
+#define TLB_INVALID_MASK    (1 << (TARGET_PAGE_BITS_MIN - 1))
+/*
+ * Set if TLB entry references a clean RAM page.  The iotlb entry will
+ * contain the page physical address.
+ */
+#define TLB_NOTDIRTY        (1 << (TARGET_PAGE_BITS_MIN - 2))
+/* Set if TLB entry is an IO callback.  */
+#define TLB_MMIO            (1 << (TARGET_PAGE_BITS_MIN - 3))
+/* Set if TLB entry writes ignored.  */
+#define TLB_DISCARD_WRITE   (1 << (TARGET_PAGE_BITS_MIN - 4))
+/* Set if the slow path must be used; more flags in CPUTLBEntryFull. */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS_MIN - 5))
+
+/*
+ * Use this mask to check interception with an alignment mask
+ * in a TCG backend.
+ */
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
+    | TLB_FORCE_SLOW | TLB_DISCARD_WRITE)
+
+/*
+ * Flags stored in CPUTLBEntryFull.slow_flags[x].
+ * TLB_FORCE_SLOW must be set in CPUTLBEntry.addr_idx[x].
+ */
+/* Set if TLB entry requires byte swap.  */
+#define TLB_BSWAP            (1 << 0)
+/* Set if TLB entry contains a watchpoint.  */
+#define TLB_WATCHPOINT       (1 << 1)
+
+#define TLB_SLOW_FLAGS_MASK  (TLB_BSWAP | TLB_WATCHPOINT)
+
+/**
+ * tlb_hit_page: return true if page aligned @addr is a hit against the
+ * TLB entry @tlb_addr
+ *
+ * @addr: virtual address to test (must be page aligned)
+ * @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
+ */
+static inline bool tlb_hit_page(uint64_t tlb_addr, vaddr addr)
+{
+    return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
+}
+
+/**
+ * tlb_hit: return true if @addr is a hit against the TLB entry @tlb_addr
+ *
+ * @addr: virtual address to test (need not be page aligned)
+ * @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
+ */
+static inline bool tlb_hit(uint64_t tlb_addr, vaddr addr)
+{
+    return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
+}
+
+#endif /* !CONFIG_USER_ONLY */
+
 void cpu_exec_init_all(void);
 void cpu_exec_step_atomic(CPUState *cpu);
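
The helpers moved here rely on a layout invariant that is easy to miss: a
valid RAM entry stores a page-aligned address with every flag bit clear
("fast path ram access is all zeros"), and tlb_hit_page() masks out all
flags except TLB_INVALID_MASK before comparing. A minimal, self-contained
C sketch of that comparison follows; the DEMO_* names, the fixed 12-bit
page size, and demo_tlb_hit_page() are illustrative assumptions, not part
of the patch or of QEMU.

/*
 * Demo only: models tlb_hit_page() with a hard-coded 12-bit page size
 * instead of TARGET_PAGE_BITS_MIN / TARGET_PAGE_MASK / vaddr.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PAGE_BITS    12
#define DEMO_PAGE_MASK    (~(((uint64_t)1 << DEMO_PAGE_BITS) - 1))
/* Like TLB_INVALID_MASK: bit just below the smallest page size. */
#define DEMO_TLB_INVALID  ((uint64_t)1 << (DEMO_PAGE_BITS - 1))
/* Like TLB_MMIO: another low flag bit, ignored by the hit test. */
#define DEMO_TLB_MMIO     ((uint64_t)1 << (DEMO_PAGE_BITS - 3))

/* Mirrors tlb_hit_page(): keep only the address bits and the "invalid"
 * flag when comparing the entry against a page-aligned address. */
static bool demo_tlb_hit_page(uint64_t tlb_addr, uint64_t addr)
{
    return addr == (tlb_addr & (DEMO_PAGE_MASK | DEMO_TLB_INVALID));
}

int main(void)
{
    uint64_t page = UINT64_C(0x7f1234567000) & DEMO_PAGE_MASK;

    /* Clean RAM entry: all flag bits zero, entry equals the page
     * address, so the lookup hits. */
    printf("ram:     %d\n", demo_tlb_hit_page(page, page));

    /* MMIO entry: the flag is masked out of the hit test, so this
     * still hits; the caller inspects the flag afterwards and takes
     * the I/O path. */
    printf("mmio:    %d\n", demo_tlb_hit_page(page | DEMO_TLB_MMIO, page));

    /* Invalidated entry: the "invalid" bit is deliberately kept in
     * the comparison, so the lookup misses and forces a refill. */
    printf("invalid: %d\n", demo_tlb_hit_page(page | DEMO_TLB_INVALID, page));

    return 0;
}

This prints 1, 1, 0: an MMIO entry still hits the C-level test, while an
invalidated entry always misses. The inline TCG fast path is stricter;
as the moved comment notes, backends fold TLB_FLAGS_MASK into the
alignment check, so any set flag fails the comparison and falls through
to the slow path.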