From patchwork Wed Oct 9 15:08:33 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828627
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 01/23] util/interval-tree: Introduce interval_tree_free_nodes
Date: Wed, 9 Oct 2024 08:08:33 -0700
Message-ID: <20241009150855.804605-2-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>

Provide a general-purpose release-all-nodes operation that allows the
IntervalTreeNode to be embedded within a larger structure.
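The offset arithmetic the patch relies on can be sketched outside QEMU as follows. This is a minimal stand-alone illustration, not QEMU code: the `RBNode`/`IntervalTreeNode`/`Outer` types, `demo`, and the use of plain `malloc`/`free` instead of `g_free` are all simplifications for the sketch.

```c
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-ins for QEMU's types (illustrative only). */
typedef struct RBNode { struct RBNode *rb_left, *rb_right; } RBNode;
typedef struct IntervalTreeNode { RBNode rb; unsigned long start, last; } IntervalTreeNode;

/* An outer structure embedding the tree node, as MapInfo embeds itree. */
typedef struct Outer { int payload; IntervalTreeNode itree; } Outer;

static int freed;   /* count of outer objects released */

/*
 * Post-order free: release both children first, then recover the outer
 * pointer by subtracting the combined offset, exactly as the patch does
 * with it_offset + offsetof(IntervalTreeNode, rb).
 */
static void rb_node_free(RBNode *rb, size_t rb_offset)
{
    if (rb->rb_left) {
        rb_node_free(rb->rb_left, rb_offset);
    }
    if (rb->rb_right) {
        rb_node_free(rb->rb_right, rb_offset);
    }
    free((char *)rb - rb_offset);
    freed++;
}

static int demo(void)
{
    freed = 0;
    Outer *a = calloc(1, sizeof(Outer));
    Outer *b = calloc(1, sizeof(Outer));
    Outer *c = calloc(1, sizeof(Outer));

    /* Hand-link a tiny tree: a is the root, b and c its children. */
    a->itree.rb.rb_left = &b->itree.rb;
    a->itree.rb.rb_right = &c->itree.rb;

    rb_node_free(&a->itree.rb,
                 offsetof(Outer, itree) + offsetof(IntervalTreeNode, rb));
    return freed;
}
```

The same pattern lets `free_self_maps` pass `offsetof(MapInfo, itree)` and have the helper free whole `MapInfo` objects while only ever walking `RBNode` pointers.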
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 include/qemu/interval-tree.h | 11 +++++++++++
 util/interval-tree.c         | 20 ++++++++++++++++++++
 util/selfmap.c               | 13 +------------
 3 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/include/qemu/interval-tree.h b/include/qemu/interval-tree.h
index 25006debe8..d90ea6d17f 100644
--- a/include/qemu/interval-tree.h
+++ b/include/qemu/interval-tree.h
@@ -96,4 +96,15 @@ IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
 IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
                                           uint64_t start, uint64_t last);
 
+/**
+ * interval_tree_free_nodes:
+ * @root: root of the tree
+ * @it_offset: offset from outermost type to IntervalTreeNode
+ *
+ * Free, via g_free, all nodes under @root.  IntervalTreeNode may
+ * not be the true type of the nodes allocated; @it_offset gives
+ * the offset from the outermost type to the IntervalTreeNode member.
+ */
+void interval_tree_free_nodes(IntervalTreeRoot *root, size_t it_offset);
+
 #endif /* QEMU_INTERVAL_TREE_H */
diff --git a/util/interval-tree.c b/util/interval-tree.c
index 53465182e6..663d3ec222 100644
--- a/util/interval-tree.c
+++ b/util/interval-tree.c
@@ -639,6 +639,16 @@ static void rb_erase_augmented_cached(RBNode *node, RBRootLeftCached *root,
     rb_erase_augmented(node, &root->rb_root, augment);
 }
 
+static void rb_node_free(RBNode *rb, size_t rb_offset)
+{
+    if (rb->rb_left) {
+        rb_node_free(rb->rb_left, rb_offset);
+    }
+    if (rb->rb_right) {
+        rb_node_free(rb->rb_right, rb_offset);
+    }
+    g_free((void *)rb - rb_offset);
+}
 
 /*
  * Interval trees.
@@ -870,6 +880,16 @@ IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
     }
 }
 
+void interval_tree_free_nodes(IntervalTreeRoot *root, size_t it_offset)
+{
+    if (root && root->rb_root.rb_node) {
+        rb_node_free(root->rb_root.rb_node,
+                     it_offset + offsetof(IntervalTreeNode, rb));
+        root->rb_root.rb_node = NULL;
+        root->rb_leftmost = NULL;
+    }
+}
+
 /* Occasionally useful for calling from within the debugger. */
 #if 0
 static void debug_interval_tree_int(IntervalTreeNode *node,
diff --git a/util/selfmap.c b/util/selfmap.c
index 483cb617e2..d2b86da301 100644
--- a/util/selfmap.c
+++ b/util/selfmap.c
@@ -87,23 +87,12 @@ IntervalTreeRoot *read_self_maps(void)
  * @root: an interval tree
  *
  * Free a tree of MapInfo structures.
- * Since we allocated each MapInfo in one chunk, we need not consider the
- * contents and can simply free each RBNode.
 */
-static void free_rbnode(RBNode *n)
-{
-    if (n) {
-        free_rbnode(n->rb_left);
-        free_rbnode(n->rb_right);
-        g_free(n);
-    }
-}
-
 void free_self_maps(IntervalTreeRoot *root)
 {
     if (root) {
-        free_rbnode(root->rb_root.rb_node);
+        interval_tree_free_nodes(root, offsetof(MapInfo, itree));
         g_free(root);
     }
 }

From patchwork Wed Oct 9 15:08:34 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828617
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 02/23] accel/tcg: Split out tlbfast_flush_locked
Date: Wed, 9 Oct 2024 08:08:34 -0700
Message-ID: <20241009150855.804605-3-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>

We will have a need to flush only the "fast" portion of the tlb,
allowing re-fill from the "full" portion.
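The core of the new helper is the `memset(..., -1, ...)` over the fast table. A minimal sketch of why that works, using hypothetical simplified types (the real `CPUTLBEntry` and `tlb_hit` also carry flag bits below the page mask):

```c
#include <stdint.h>
#include <string.h>

#define TARGET_PAGE_BITS 12   /* assume 4 KiB pages for the demo */
#define TARGET_PAGE_MASK (~(uint64_t)0 << TARGET_PAGE_BITS)

/* Simplified stand-in for CPUTLBEntry: just the three comparators. */
typedef struct { uint64_t addr_read, addr_write, addr_code; } Entry;

/*
 * tlbfast_flush_locked boils down to this memset: writing 0xff into
 * every byte makes each comparator all-ones, which can never equal a
 * page-aligned tag, so every subsequent lookup misses and must refill
 * from the "full" portion of the TLB.
 */
static void flush_fast(Entry *table, size_t n)
{
    memset(table, -1, n * sizeof(Entry));
}

/* A lookup hits when the stored tag equals the page-aligned address. */
static int hit(uint64_t tlb_addr, uint64_t addr)
{
    return tlb_addr == (addr & TARGET_PAGE_MASK);
}
```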
Signed-off-by: Richard Henderson
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index b76a4eac4e..c1838412e8 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -284,13 +284,18 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
     }
 }
 
-static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
+static void tlbfast_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
 {
     desc->n_used_entries = 0;
+    memset(fast->table, -1, sizeof_tlb(fast));
+}
+
+static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
+{
+    tlbfast_flush_locked(desc, fast);
     desc->large_page_addr = -1;
     desc->large_page_mask = -1;
     desc->vindex = 0;
-    memset(fast->table, -1, sizeof_tlb(fast));
     memset(desc->vtable, -1, sizeof(desc->vtable));
 }

From patchwork Wed Oct 9 15:08:35 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828597
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 03/23] accel/tcg: Split out tlbfast_{index,entry}
Date: Wed, 9 Oct 2024 08:08:35 -0700
Message-ID: <20241009150855.804605-4-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>

Often we already have the CPUTLBDescFast structure pointer.
Allows future code simplification.
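The index computation the new helper factors out can be sketched in isolation. The constants below are assumptions for the demo (`CPU_TLB_ENTRY_BITS` is the log2 of the host's `CPUTLBEntry` size; 5 matches a 32-byte entry):

```c
#include <stdint.h>

#define TARGET_PAGE_BITS   12  /* 4 KiB pages, assumed for the demo */
#define CPU_TLB_ENTRY_BITS 5   /* log2 of a 32-byte CPUTLBEntry, assumed */

/*
 * fast->mask is stored as (n_entries - 1) << CPU_TLB_ENTRY_BITS so it can
 * double as a byte-offset mask; shifting it back down recovers the index
 * mask, and the virtual page number is folded in modulo the table size.
 * Here fast_mask stands in for fast->mask.
 */
static uintptr_t tlbfast_index(uint64_t fast_mask, uint64_t addr)
{
    return (addr >> TARGET_PAGE_BITS) & (fast_mask >> CPU_TLB_ENTRY_BITS);
}
```

Two addresses whose page numbers differ by a multiple of the table size land in the same slot, which is exactly the aliasing the later range-flush heuristic has to worry about.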
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index c1838412e8..e37af24525 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -131,20 +131,28 @@ static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry)
     return tlb_read_idx(entry, MMU_DATA_STORE);
 }
 
+static inline uintptr_t tlbfast_index(CPUTLBDescFast *fast, vaddr addr)
+{
+    return (addr >> TARGET_PAGE_BITS) & (fast->mask >> CPU_TLB_ENTRY_BITS);
+}
+
+static inline CPUTLBEntry *tlbfast_entry(CPUTLBDescFast *fast, vaddr addr)
+{
+    return fast->table + tlbfast_index(fast, addr);
+}
+
 /* Find the TLB index corresponding to the mmu_idx + address pair. */
 static inline uintptr_t tlb_index(CPUState *cpu, uintptr_t mmu_idx,
                                   vaddr addr)
 {
-    uintptr_t size_mask = cpu->neg.tlb.f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
-
-    return (addr >> TARGET_PAGE_BITS) & size_mask;
+    return tlbfast_index(&cpu->neg.tlb.f[mmu_idx], addr);
 }
 
 /* Find the TLB entry corresponding to the mmu_idx + address pair. */
 static inline CPUTLBEntry *tlb_entry(CPUState *cpu, uintptr_t mmu_idx,
                                      vaddr addr)
 {
-    return &cpu->neg.tlb.f[mmu_idx].table[tlb_index(cpu, mmu_idx, addr)];
+    return tlbfast_entry(&cpu->neg.tlb.f[mmu_idx], addr);
 }
 
 static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,

From patchwork Wed Oct 9 15:08:36 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828601
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 04/23] accel/tcg: Split out tlbfast_flush_range_locked
Date: Wed, 9 Oct 2024 08:08:36 -0700
Message-ID: <20241009150855.804605-5-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>

While this may at present be overly complicated for use by
single page flushes, do so with the expectation that this will
eventually allow simplification of large pages.
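The escape-hatch test the helper carries over from `tlb_flush_range_locked` can be isolated as a tiny predicate. This is an illustrative sketch: `needs_full_flush` and its parameters are hypothetical names, with `fast_mask` standing in for `fast->mask`:

```c
#include <stdint.h>

/*
 * Wipe the whole table when either:
 *  (a) the page mask is finer than the table's own mask, so a single
 *      masked range can alias several TLB slots, or
 *  (b) the range is longer than the table, so testing every entry
 *      costs more than the memset that flushes everything.
 * Otherwise each page in the range maps to exactly one slot and a
 * per-page walk is cheap.
 */
static int needs_full_flush(uint64_t mask, uint64_t len, uint64_t fast_mask)
{
    return mask < fast_mask || len > fast_mask;
}
```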
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 61 +++++++++++++++++++++++++---------------------
 1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e37af24525..6773874f2d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -520,10 +520,37 @@ static inline void tlb_flush_vtlb_page_locked(CPUState *cpu, int mmu_idx,
     tlb_flush_vtlb_page_mask_locked(cpu, mmu_idx, page, -1);
 }
 
+static void tlbfast_flush_range_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
+                                       vaddr addr, vaddr len, vaddr mask)
+{
+    /*
+     * If @mask is smaller than the tlb size, there may be multiple entries
+     * within the TLB; for now, just flush the entire TLB.
+     * Otherwise all addresses that match under @mask hit the same TLB entry.
+     *
+     * If @len is larger than the tlb size, then it will take longer to
+     * test all of the entries in the TLB than it will to flush it all.
+     */
+    if (mask < fast->mask || len > fast->mask) {
+        tlbfast_flush_locked(desc, fast);
+        return;
+    }
+
+    for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
+        vaddr page = addr + i;
+        CPUTLBEntry *entry = tlbfast_entry(fast, page);
+
+        if (tlb_flush_entry_mask_locked(entry, page, mask)) {
+            desc->n_used_entries--;
+        }
+    }
+}
+
 static void tlb_flush_page_locked(CPUState *cpu, int midx, vaddr page)
 {
-    vaddr lp_addr = cpu->neg.tlb.d[midx].large_page_addr;
-    vaddr lp_mask = cpu->neg.tlb.d[midx].large_page_mask;
+    CPUTLBDesc *desc = &cpu->neg.tlb.d[midx];
+    vaddr lp_addr = desc->large_page_addr;
+    vaddr lp_mask = desc->large_page_mask;
 
     /* Check if we need to flush due to large pages.  */
     if ((page & lp_mask) == lp_addr) {
@@ -532,9 +559,8 @@ static void tlb_flush_page_locked(CPUState *cpu, int midx, vaddr page)
                   midx, lp_addr, lp_mask);
         tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
     } else {
-        if (tlb_flush_entry_locked(tlb_entry(cpu, midx, page), page)) {
-            tlb_n_used_entries_dec(cpu, midx);
-        }
+        tlbfast_flush_range_locked(desc, &cpu->neg.tlb.f[midx],
+                                   page, TARGET_PAGE_SIZE, -1);
         tlb_flush_vtlb_page_locked(cpu, midx, page);
     }
 }
@@ -689,24 +715,6 @@ static void tlb_flush_range_locked(CPUState *cpu, int midx,
     CPUTLBDescFast *f = &cpu->neg.tlb.f[midx];
     vaddr mask = MAKE_64BIT_MASK(0, bits);
 
-    /*
-     * If @bits is smaller than the tlb size, there may be multiple entries
-     * within the TLB; otherwise all addresses that match under @mask hit
-     * the same TLB entry.
-     * TODO: Perhaps allow bits to be a few bits less than the size.
-     * For now, just flush the entire TLB.
-     *
-     * If @len is larger than the tlb size, then it will take longer to
-     * test all of the entries in the TLB than it will to flush it all.
-     */
-    if (mask < f->mask || len > f->mask) {
-        tlb_debug("forcing full flush midx %d ("
-                  "%016" VADDR_PRIx "/%016" VADDR_PRIx "+%016" VADDR_PRIx ")\n",
-                  midx, addr, mask, len);
-        tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
-        return;
-    }
-
     /*
      * Check if we need to flush due to large pages.
      * Because large_page_mask contains all 1's from the msb,
@@ -720,13 +728,10 @@ static void tlb_flush_range_locked(CPUState *cpu, int midx,
         return;
     }
 
+    tlbfast_flush_range_locked(d, f, addr, len, mask);
+
     for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
         vaddr page = addr + i;
-        CPUTLBEntry *entry = tlb_entry(cpu, midx, page);
-
-        if (tlb_flush_entry_mask_locked(entry, page, mask)) {
-            tlb_n_used_entries_dec(cpu, midx);
-        }
         tlb_flush_vtlb_page_mask_locked(cpu, midx, page, mask);
     }
 }

From patchwork Wed Oct 9 15:08:37 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828614
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 05/23] accel/tcg: Fix flags usage in mmu_lookup1, atomic_mmu_lookup
Date: Wed, 9 Oct 2024 08:08:37 -0700
Message-ID: <20241009150855.804605-6-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>

The INVALID bit should only be auto-cleared when we have just
called tlb_fill, not along the victim_tlb_hit path.

In atomic_mmu_lookup, rename tlb_addr to flags, as that is what
we're actually carrying around.
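The flag-handling change can be sketched in miniature. The bit positions below are hypothetical placeholders (QEMU's real `TLB_*` flags live in target-dependent low bits), and `lookup_flags` is an invented name condensing the pattern the patch applies in both lookup paths:

```c
#include <stdint.h>

/* Hypothetical flag bit positions, for illustration only. */
enum {
    TLB_INVALID_MASK  = 1 << 0,
    TLB_MMIO          = 1 << 1,
    TLB_DISCARD_WRITE = 1 << 2,
};
#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_MMIO | TLB_DISCARD_WRITE)

/*
 * After the fix, INVALID is dropped only when tlb_fill just ran and the
 * entry is therefore known fresh; on the victim-TLB-hit path the bit
 * survives, so a PAGE_WRITE_INV page still forces the *next* access back
 * through tlb_fill instead of being silently treated as valid.
 */
static int lookup_flags(uint64_t tlb_addr, int did_tlb_fill)
{
    int flags = TLB_FLAGS_MASK;      /* start with every flag permitted */
    if (did_tlb_fill) {
        flags &= ~TLB_INVALID_MASK;  /* entry is known valid right now */
    }
    return flags & (int)tlb_addr;    /* keep only flags the entry carries */
}
```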
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 6773874f2d..fd8da8586f 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1657,7 +1657,7 @@ static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop,
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     bool maybe_resized = false;
     CPUTLBEntryFull *full;
-    int flags;
+    int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
 
     /* If the TLB entry is for a different page, reload and try again. */
     if (!tlb_hit(tlb_addr, addr)) {
@@ -1668,8 +1668,14 @@ static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop,
             maybe_resized = true;
             index = tlb_index(cpu, mmu_idx, addr);
             entry = tlb_entry(cpu, mmu_idx, addr);
+            /*
+             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
+             * to force the next access through tlb_fill.  We've just
+             * called tlb_fill, so we know that this entry *is* valid.
+             */
+            flags &= ~TLB_INVALID_MASK;
         }
-        tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK;
+        tlb_addr = tlb_read_idx(entry, access_type);
     }
 
     full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
@@ -1819,10 +1825,10 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     MemOp mop = get_memop(oi);
     uintptr_t index;
     CPUTLBEntry *tlbe;
-    vaddr tlb_addr;
     void *hostaddr;
     CPUTLBEntryFull *full;
     bool did_tlb_fill = false;
+    int flags;
 
     tcg_debug_assert(mmu_idx < NB_MMU_MODES);
 
@@ -1833,8 +1839,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     tlbe = tlb_entry(cpu, mmu_idx, addr);
 
     /* Check TLB entry and enforce page permissions. */
-    tlb_addr = tlb_addr_write(tlbe);
-    if (!tlb_hit(tlb_addr, addr)) {
+    flags = TLB_FLAGS_MASK;
+    if (!tlb_hit(tlb_addr_write(tlbe), addr)) {
         if (!victim_tlb_hit(cpu, mmu_idx, index, MMU_DATA_STORE,
                             addr & TARGET_PAGE_MASK)) {
             tlb_fill_align(cpu, addr, MMU_DATA_STORE, mmu_idx,
@@ -1842,8 +1848,13 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
             did_tlb_fill = true;
             index = tlb_index(cpu, mmu_idx, addr);
             tlbe = tlb_entry(cpu, mmu_idx, addr);
+            /*
+             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
+             * to force the next access through tlb_fill.  We've just
+             * called tlb_fill, so we know that this entry *is* valid.
+             */
+            flags &= ~TLB_INVALID_MASK;
         }
-        tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK;
     }
 
     /*
@@ -1879,11 +1890,11 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         goto stop_the_world;
     }
 
-    /* Collect tlb flags for read. */
-    tlb_addr |= tlbe->addr_read;
+    /* Collect tlb flags for read and write. */
+    flags &= tlbe->addr_read | tlb_addr_write(tlbe);
 
     /* Notice an IO access or a needs-MMU-lookup access */
-    if (unlikely(tlb_addr & (TLB_MMIO | TLB_DISCARD_WRITE))) {
+    if (unlikely(flags & (TLB_MMIO | TLB_DISCARD_WRITE))) {
         /* There's really nothing that can be done to support this
            apart from stop-the-world. */
         goto stop_the_world;
@@ -1892,11 +1903,11 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
     full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
 
-    if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
+    if (unlikely(flags & TLB_NOTDIRTY)) {
         notdirty_write(cpu, addr, size, full, retaddr);
     }
 
-    if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
+    if (unlikely(flags & TLB_FORCE_SLOW)) {
         int wp_flags = 0;
 
         if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
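The flags flow in this patch can be hard to follow inside the diff. Below is a standalone sketch of the idea: start from the full flag mask, clear TLB_INVALID_MASK only when tlb_fill has just run, then AND with the comparator words so only flags actually present in the entry survive. The flag values and the helper name here are made up for the demo; QEMU's real TLB_* bits live in the low TARGET_PAGE_BITS of the comparator.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy flag values for illustration only; not QEMU's real definitions. */
enum {
    TLB_INVALID_MASK = 1 << 0,
    TLB_NOTDIRTY     = 1 << 1,
    TLB_MMIO         = 1 << 2,
    TLB_FORCE_SLOW   = 1 << 3,
    TLB_FLAGS_MASK   = TLB_INVALID_MASK | TLB_NOTDIRTY
                     | TLB_MMIO | TLB_FORCE_SLOW,
};

/* Mirror of the patch's flow: start from the full mask, drop INVALID only
 * when tlb_fill just ran (the entry is then known valid), then AND with
 * the comparators so only flags actually set in the entry survive. */
static int collect_flags(uint64_t addr_read, uint64_t addr_write,
                         bool just_filled)
{
    int flags = TLB_FLAGS_MASK;

    if (just_filled) {
        flags &= ~TLB_INVALID_MASK;
    }
    return flags & (int)(addr_read | addr_write);
}
```

Note how the victim_tlb_hit path (just_filled == false) leaves TLB_INVALID_MASK eligible, which is exactly the bug being fixed: previously it was cleared unconditionally.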
From patchwork Wed Oct 9 15:08:38 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 06/23] accel/tcg: Early exit for zero length in tlb_flush_range_by_mmuidx*
Date: Wed, 9 Oct 2024 08:08:38 -0700
Message-ID: <20241009150855.804605-7-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

Probably never happens, but next patches will assume non-zero length.

Signed-off-by: Richard Henderson
Reviewed-by: Philippe Mathieu-Daudé
---
 accel/tcg/cputlb.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index fd8da8586f..93b42d18ee 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -801,6 +801,9 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
      * If all bits are significant, and len is small,
      * this devolves to tlb_flush_page.
      */
+    if (len == 0) {
+        return;
+    }
     if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
         tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
         return;
@@ -839,6 +842,9 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
      * If all bits are significant, and len is small,
      * this devolves to tlb_flush_page.
      */
+    if (len == 0) {
+        return;
+    }
     if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
         tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
         return;
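One concrete reason the zero-length case needs filtering before the range logic runs (a minimal illustration, not QEMU code): range code of this kind computes the last byte of the flush range as addr + len - 1, which underflows when len is zero.

```c
#include <assert.h>
#include <stdint.h>

/* Illustration only: last byte of a flush range [addr, addr + len - 1].
 * With len == 0 the subtraction underflows to one byte *before* addr,
 * so every later test on "last" would be reasoning about a bogus range. */
static uint64_t range_last(uint64_t addr, uint64_t len)
{
    return addr + len - 1;
}
```

With the early return in place, later patches in the series are free to assume len != 0 and use addr + len - 1 without a special case.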
From patchwork Wed Oct 9 15:08:39 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 07/23] accel/tcg: Flush entire tlb when a masked range wraps
Date: Wed, 9 Oct 2024 08:08:39 -0700
Message-ID: <20241009150855.804605-8-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

We expect masked address spaces to be quite large, e.g. 56 bits for
AArch64 top-byte-ignore mode.  We do not expect addr+len to wrap
around, but it is possible with AArch64 guest flush range instructions.
Convert this unlikely case to a full tlb flush.  This can simplify the
subroutines actually performing the range flush.
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 93b42d18ee..8affa25db3 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -808,8 +808,12 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
         tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
         return;
     }
-    /* If no page bits are significant, this devolves to tlb_flush. */
-    if (bits < TARGET_PAGE_BITS) {
+    /*
+     * If no page bits are significant, this devolves to full flush.
+     * If addr+len wraps in len bits, fall back to full flush.
+     */
+    if (bits < TARGET_PAGE_BITS
+        || (bits < TARGET_LONG_BITS && (addr ^ (addr + len - 1)) >> bits)) {
         tlb_flush_by_mmuidx(cpu, idxmap);
         return;
     }
@@ -849,8 +853,12 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
         tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
         return;
     }
-    /* If no page bits are significant, this devolves to tlb_flush. */
-    if (bits < TARGET_PAGE_BITS) {
+    /*
+     * If no page bits are significant, this devolves to full flush.
+     * If addr+len wraps in len bits, fall back to full flush.
+     */
+    if (bits < TARGET_PAGE_BITS
+        || (bits < TARGET_LONG_BITS && (addr ^ (addr + len - 1)) >> bits)) {
         tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, idxmap);
         return;
     }
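The wrap test added by this patch boils down to one expression: the range [addr, addr + len - 1] stays inside a single 2^bits window iff no bit at or above `bits` differs between its first and last address. A standalone sketch (the helper name is mine, not from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True when [addr, addr + len - 1] does not fit in one 2^bits window,
 * i.e. some bit at or above 'bits' differs between the first and last
 * byte.  Assumes len != 0, which the previous patch in this series
 * guarantees by returning early for zero-length flushes. */
static bool range_wraps(uint64_t addr, uint64_t len, unsigned bits)
{
    return ((addr ^ (addr + len - 1)) >> bits) != 0;
}
```

For example, with 56 significant bits (the AArch64 top-byte-ignore case from the commit message), a range whose last byte carries into bit 56 is detected as wrapping and triggers the full flush.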
From patchwork Wed Oct 9 15:08:40 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 08/23] accel/tcg: Add IntervalTreeRoot to CPUTLBDesc
Date: Wed, 9 Oct 2024 08:08:40 -0700
Message-ID: <20241009150855.804605-9-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

Add the data structures for tracking softmmu pages via a balanced
interval tree.  So far, only initialize and destroy the data structure.
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 include/hw/core/cpu.h |  3 +++
 accel/tcg/cputlb.c    | 11 +++++++++++
 2 files changed, 14 insertions(+)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index d21a24c82f..b567abe3e2 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -34,6 +34,7 @@
 #include "qemu/rcu_queue.h"
 #include "qemu/queue.h"
 #include "qemu/thread.h"
+#include "qemu/interval-tree.h"
 #include "qom/object.h"
 
 typedef int (*WriteCoreDumpFunction)(const void *buf, size_t size,
@@ -287,6 +288,8 @@ typedef struct CPUTLBDesc {
     CPUTLBEntry vtable[CPU_VTLB_SIZE];
     CPUTLBEntryFull vfulltlb[CPU_VTLB_SIZE];
     CPUTLBEntryFull *fulltlb;
+    /* All active tlb entries for this address space. */
+    IntervalTreeRoot iroot;
 } CPUTLBDesc;
 
 /*
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 8affa25db3..435c2dc132 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -89,6 +89,13 @@ QEMU_BUILD_BUG_ON(sizeof(vaddr) > sizeof(run_on_cpu_data));
 QEMU_BUILD_BUG_ON(NB_MMU_MODES > 16);
 #define ALL_MMUIDX_BITS ((1 << NB_MMU_MODES) - 1)
 
+/* Extra data required to manage CPUTLBEntryFull within an interval tree. */
+typedef struct CPUTLBEntryTree {
+    IntervalTreeNode itree;
+    CPUTLBEntry copy;
+    CPUTLBEntryFull full;
+} CPUTLBEntryTree;
+
 static inline size_t tlb_n_entries(CPUTLBDescFast *fast)
 {
     return (fast->mask >> CPU_TLB_ENTRY_BITS) + 1;
@@ -305,6 +312,7 @@ static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
     desc->large_page_mask = -1;
     desc->vindex = 0;
     memset(desc->vtable, -1, sizeof(desc->vtable));
+    interval_tree_free_nodes(&desc->iroot, offsetof(CPUTLBEntryTree, itree));
 }
 
 static void tlb_flush_one_mmuidx_locked(CPUState *cpu, int mmu_idx,
@@ -326,6 +334,7 @@ static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast, int64_t now)
     fast->mask = (n_entries - 1) << CPU_TLB_ENTRY_BITS;
     fast->table = g_new(CPUTLBEntry, n_entries);
     desc->fulltlb = g_new(CPUTLBEntryFull, n_entries);
+    memset(&desc->iroot, 0, sizeof(desc->iroot));
     tlb_mmu_flush_locked(desc, fast);
 }
 
@@ -365,6 +374,8 @@ void tlb_destroy(CPUState *cpu)
 
         g_free(fast->table);
         g_free(desc->fulltlb);
+        interval_tree_free_nodes(&cpu->neg.tlb.d[i].iroot,
+                                 offsetof(CPUTLBEntryTree, itree));
     }
 }
From patchwork Wed Oct 9 15:08:41 2024
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 09/23] accel/tcg: Populate IntervalTree in tlb_set_page_full
Date: Wed, 9 Oct 2024 08:08:41 -0700
Message-ID: <20241009150855.804605-10-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

Add or replace an entry in the IntervalTree for each page installed
into softmmu.  We do not yet use the tree for anything else.
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 34 ++++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 435c2dc132..d964e1b2e8 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -305,6 +305,17 @@ static void tlbfast_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
     memset(fast->table, -1, sizeof_tlb(fast));
 }
 
+static CPUTLBEntryTree *tlbtree_lookup_range(CPUTLBDesc *desc, vaddr s, vaddr l)
+{
+    IntervalTreeNode *i = interval_tree_iter_first(&desc->iroot, s, l);
+    return i ? container_of(i, CPUTLBEntryTree, itree) : NULL;
+}
+
+static CPUTLBEntryTree *tlbtree_lookup_addr(CPUTLBDesc *desc, vaddr addr)
+{
+    return tlbtree_lookup_range(desc, addr, addr);
+}
+
 static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
 {
     tlbfast_flush_locked(desc, fast);
@@ -1086,7 +1097,8 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     MemoryRegionSection *section;
     unsigned int index, read_flags, write_flags;
     uintptr_t addend;
-    CPUTLBEntry *te, tn;
+    CPUTLBEntry *te;
+    CPUTLBEntryTree *node;
     hwaddr iotlb, xlat, sz, paddr_page;
     vaddr addr_page;
     int asidx, wp_flags, prot;
@@ -1194,6 +1206,15 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
         tlb_n_used_entries_dec(cpu, mmu_idx);
     }
 
+    /* Replace an old IntervalTree entry, or create a new one. */
+    node = tlbtree_lookup_addr(desc, addr_page);
+    if (!node) {
+        node = g_new(CPUTLBEntryTree, 1);
+        node->itree.start = addr_page;
+        node->itree.last = addr_page + TARGET_PAGE_SIZE - 1;
+        interval_tree_insert(&node->itree, &desc->iroot);
+    }
+
     /* refill the tlb */
     /*
      * When memory region is ram, iotlb contains a TARGET_PAGE_BITS
@@ -1215,15 +1236,15 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     full->phys_addr = paddr_page;
 
     /* Now calculate the new entry */
-    tn.addend = addend - addr_page;
+    node->copy.addend = addend - addr_page;
 
-    tlb_set_compare(full, &tn, addr_page, read_flags,
+    tlb_set_compare(full, &node->copy, addr_page, read_flags,
                     MMU_INST_FETCH, prot & PAGE_EXEC);
 
     if (wp_flags & BP_MEM_READ) {
         read_flags |= TLB_WATCHPOINT;
     }
-    tlb_set_compare(full, &tn, addr_page, read_flags,
+    tlb_set_compare(full, &node->copy, addr_page, read_flags,
                     MMU_DATA_LOAD, prot & PAGE_READ);
 
     if (prot & PAGE_WRITE_INV) {
@@ -1232,10 +1253,11 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     if (wp_flags & BP_MEM_WRITE) {
         write_flags |= TLB_WATCHPOINT;
     }
-    tlb_set_compare(full, &tn, addr_page, write_flags,
+    tlb_set_compare(full, &node->copy, addr_page, write_flags,
                     MMU_DATA_STORE, prot & PAGE_WRITE);
 
-    copy_tlb_helper_locked(te, &tn);
+    node->full = *full;
+    copy_tlb_helper_locked(te, &node->copy);
     tlb_n_used_entries_inc(cpu, mmu_idx);
     qemu_spin_unlock(&tlb->c.lock);
 }
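The container_of pattern behind tlbtree_lookup_range/tlbtree_lookup_addr can be shown with a self-contained toy. A linear list stands in for QEMU's balanced interval tree here; all names below are illustrative stand-ins, not the qemu/interval-tree.h API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Linear-list stand-in for QEMU's balanced interval tree; it shows only
 * the node-embedding/lookup pattern, not the real data structure. */
typedef struct ToyNode {
    uint64_t start, last;
    struct ToyNode *next;
} ToyNode;

typedef struct ToyRoot {
    ToyNode *head;
} ToyRoot;

static void toy_insert(ToyRoot *root, ToyNode *n)
{
    n->next = root->head;
    root->head = n;
}

/* First node whose [start, last] overlaps [s, l], or NULL. */
static ToyNode *toy_iter_first(ToyRoot *root, uint64_t s, uint64_t l)
{
    for (ToyNode *i = root->head; i; i = i->next) {
        if (i->start <= l && i->last >= s) {
            return i;
        }
    }
    return NULL;
}

#define toy_container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Shaped like CPUTLBEntryTree: the tree node embedded first, payload after. */
typedef struct ToyEntry {
    ToyNode itree;
    int payload;
} ToyEntry;

static ToyEntry *toy_lookup_addr(ToyRoot *root, uint64_t addr)
{
    ToyNode *i = toy_iter_first(root, addr, addr);
    return i ? toy_container_of(i, ToyEntry, itree) : NULL;
}
```

Embedding the node inside the entry is what lets a lookup on the tree hand back the whole CPUTLBEntryTree, including the cached CPUTLBEntry copy and CPUTLBEntryFull, without a separate map.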
From patchwork Wed Oct 9 15:08:42 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 10/23] accel/tcg: Remove IntervalTree entry in tlb_flush_page_locked
Date: Wed, 9 Oct 2024 08:08:42 -0700
Message-ID: <20241009150855.804605-11-richard.henderson@linaro.org>

Flush a page
from the IntervalTree cache.

Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index d964e1b2e8..772656c7f8 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -573,6 +573,7 @@ static void tlb_flush_page_locked(CPUState *cpu, int midx, vaddr page)
     CPUTLBDesc *desc = &cpu->neg.tlb.d[midx];
     vaddr lp_addr = desc->large_page_addr;
     vaddr lp_mask = desc->large_page_mask;
+    CPUTLBEntryTree *node;

     /* Check if we need to flush due to large pages.  */
     if ((page & lp_mask) == lp_addr) {
@@ -580,10 +581,17 @@ static void tlb_flush_page_locked(CPUState *cpu, int midx, vaddr page)
                   VADDR_PRIx "/%016" VADDR_PRIx ")\n",
                   midx, lp_addr, lp_mask);
         tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
-    } else {
-        tlbfast_flush_range_locked(desc, &cpu->neg.tlb.f[midx],
-                                   page, TARGET_PAGE_SIZE, -1);
-        tlb_flush_vtlb_page_locked(cpu, midx, page);
+        return;
+    }
+
+    tlbfast_flush_range_locked(desc, &cpu->neg.tlb.f[midx],
+                               page, TARGET_PAGE_SIZE, -1);
+    tlb_flush_vtlb_page_locked(cpu, midx, page);
+
+    node = tlbtree_lookup_addr(desc, page);
+    if (node) {
+        interval_tree_remove(&node->itree, &desc->iroot);
+        g_free(node);
     }
 }

From patchwork Wed Oct 9 15:08:43 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 11/23] accel/tcg: Remove IntervalTree entries in tlb_flush_range_locked
Date: Wed, 9 Oct 2024 08:08:43 -0700
Message-ID: <20241009150855.804605-12-richard.henderson@linaro.org>

Flush a masked range of pages from the IntervalTree cache.
When the mask is not used there is a redundant comparison,
but that is better than duplicating code at this point.

Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 772656c7f8..709ad75616 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -311,6 +311,13 @@ static CPUTLBEntryTree *tlbtree_lookup_range(CPUTLBDesc *desc, vaddr s, vaddr l)
     return i ? container_of(i, CPUTLBEntryTree, itree) : NULL;
 }

+static CPUTLBEntryTree *tlbtree_lookup_range_next(CPUTLBEntryTree *prev,
+                                                  vaddr s, vaddr l)
+{
+    IntervalTreeNode *i = interval_tree_iter_next(&prev->itree, s, l);
+    return i ? container_of(i, CPUTLBEntryTree, itree) : NULL;
+}
+
 static CPUTLBEntryTree *tlbtree_lookup_addr(CPUTLBDesc *desc, vaddr addr)
 {
     return tlbtree_lookup_range(desc, addr, addr);
@@ -744,6 +751,8 @@ static void tlb_flush_range_locked(CPUState *cpu, int midx,
     CPUTLBDesc *d = &cpu->neg.tlb.d[midx];
     CPUTLBDescFast *f = &cpu->neg.tlb.f[midx];
     vaddr mask = MAKE_64BIT_MASK(0, bits);
+    CPUTLBEntryTree *node;
+    vaddr addr_mask, last_mask, last_imask;

     /*
      * Check if we need to flush due to large pages.
@@ -764,6 +773,22 @@ static void tlb_flush_range_locked(CPUState *cpu, int midx,
         vaddr page = addr + i;
         tlb_flush_vtlb_page_mask_locked(cpu, midx, page, mask);
     }
+
+    addr_mask = addr & mask;
+    last_mask = addr_mask + len - 1;
+    last_imask = last_mask | ~mask;
+    node = tlbtree_lookup_range(d, addr_mask, last_imask);
+    while (node) {
+        CPUTLBEntryTree *next =
+            tlbtree_lookup_range_next(node, addr_mask, last_imask);
+        vaddr page_mask = node->itree.start & mask;
+
+        if (page_mask >= addr_mask && page_mask < last_mask) {
+            interval_tree_remove(&node->itree, &d->iroot);
+            g_free(node);
+        }
+        node = next;
+    }
 }

 typedef struct {

From patchwork Wed Oct 9 15:08:44 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 12/23] accel/tcg: Process IntervalTree entries in tlb_reset_dirty
Date: Wed, 9 Oct 2024 08:08:44 -0700
Message-ID: <20241009150855.804605-13-richard.henderson@linaro.org>

Update the addr_write copy within each interval tree node.
Tidy the iteration within the other two loops as well.
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 709ad75616..95f78afee6 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1024,17 +1024,20 @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
     qemu_spin_lock(&cpu->neg.tlb.c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        unsigned int i;
-        unsigned int n = tlb_n_entries(&cpu->neg.tlb.f[mmu_idx]);
+        CPUTLBDesc *desc = &cpu->neg.tlb.d[mmu_idx];
+        CPUTLBDescFast *fast = &cpu->neg.tlb.f[mmu_idx];

-        for (i = 0; i < n; i++) {
-            tlb_reset_dirty_range_locked(&cpu->neg.tlb.f[mmu_idx].table[i],
-                                         start1, length);
+        for (size_t i = 0, n = tlb_n_entries(fast); i < n; i++) {
+            tlb_reset_dirty_range_locked(&fast->table[i], start1, length);
         }

-        for (i = 0; i < CPU_VTLB_SIZE; i++) {
-            tlb_reset_dirty_range_locked(&cpu->neg.tlb.d[mmu_idx].vtable[i],
-                                         start1, length);
+        for (size_t i = 0; i < CPU_VTLB_SIZE; i++) {
+            tlb_reset_dirty_range_locked(&desc->vtable[i], start1, length);
+        }
+
+        for (CPUTLBEntryTree *t = tlbtree_lookup_range(desc, 0, -1); t;
+             t = tlbtree_lookup_range_next(t, 0, -1)) {
+            tlb_reset_dirty_range_locked(&t->copy, start1, length);
         }
     }
     qemu_spin_unlock(&cpu->neg.tlb.c.lock);

From patchwork Wed Oct 9 15:08:45 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 13/23] accel/tcg: Process IntervalTree entries in tlb_set_dirty
Date: Wed, 9 Oct 2024 08:08:45 -0700
Message-ID: <20241009150855.804605-14-richard.henderson@linaro.org>

Update the addr_write copy within an interval tree node.
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 95f78afee6..ec989f1290 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1063,13 +1063,18 @@ static void tlb_set_dirty(CPUState *cpu, vaddr addr)
     addr &= TARGET_PAGE_MASK;
     qemu_spin_lock(&cpu->neg.tlb.c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        tlb_set_dirty1_locked(tlb_entry(cpu, mmu_idx, addr), addr);
-    }
+        CPUTLBDesc *desc = &cpu->neg.tlb.d[mmu_idx];
+        CPUTLBEntryTree *node;

-    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        int k;
-        for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_set_dirty1_locked(&cpu->neg.tlb.d[mmu_idx].vtable[k], addr);
+        tlb_set_dirty1_locked(tlb_entry(cpu, mmu_idx, addr), addr);
+
+        for (int k = 0; k < CPU_VTLB_SIZE; k++) {
+            tlb_set_dirty1_locked(&desc->vtable[k], addr);
+        }
+
+        node = tlbtree_lookup_addr(desc, addr);
+        if (node) {
+            tlb_set_dirty1_locked(&node->copy, addr);
         }
     }
     qemu_spin_unlock(&cpu->neg.tlb.c.lock);

From patchwork Wed Oct 9 15:08:46 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 14/23] accel/tcg: Replace victim_tlb_hit with tlbtree_hit
Date: Wed, 9 Oct 2024 08:08:46 -0700
Message-ID: <20241009150855.804605-15-richard.henderson@linaro.org>

Change from a linear search on the victim tlb
to a balanced binary tree search on the interval tree.
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 62 +++++++++++++++++++++++-----------------------
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ec989f1290..b10b0a357c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1398,36 +1398,38 @@ static void io_failed(CPUState *cpu, CPUTLBEntryFull *full, vaddr addr,
     }
 }

-/* Return true if ADDR is present in the victim tlb, and has been copied
-   back to the main tlb.  */
-static bool victim_tlb_hit(CPUState *cpu, size_t mmu_idx, size_t index,
-                           MMUAccessType access_type, vaddr page)
+/*
+ * Return true if ADDR is present in the interval tree,
+ * and has been copied back to the main tlb.
+ */
+static bool tlbtree_hit(CPUState *cpu, int mmu_idx,
+                        MMUAccessType access_type, vaddr addr)
 {
-    size_t vidx;
+    CPUTLBDesc *desc = &cpu->neg.tlb.d[mmu_idx];
+    CPUTLBDescFast *fast = &cpu->neg.tlb.f[mmu_idx];
+    CPUTLBEntryTree *node;
+    size_t index;

     assert_cpu_is_self(cpu);
-    for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) {
-        CPUTLBEntry *vtlb = &cpu->neg.tlb.d[mmu_idx].vtable[vidx];
-        uint64_t cmp = tlb_read_idx(vtlb, access_type);
-
-        if (cmp == page) {
-            /* Found entry in victim tlb, swap tlb and iotlb.  */
-            CPUTLBEntry tmptlb, *tlb = &cpu->neg.tlb.f[mmu_idx].table[index];
-
-            qemu_spin_lock(&cpu->neg.tlb.c.lock);
-            copy_tlb_helper_locked(&tmptlb, tlb);
-            copy_tlb_helper_locked(tlb, vtlb);
-            copy_tlb_helper_locked(vtlb, &tmptlb);
-            qemu_spin_unlock(&cpu->neg.tlb.c.lock);
-
-            CPUTLBEntryFull *f1 = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
-            CPUTLBEntryFull *f2 = &cpu->neg.tlb.d[mmu_idx].vfulltlb[vidx];
-            CPUTLBEntryFull tmpf;
-            tmpf = *f1; *f1 = *f2; *f2 = tmpf;
-            return true;
-        }
+    node = tlbtree_lookup_addr(desc, addr);
+    if (!node) {
+        /* There is no cached mapping for this page. */
+        return false;
     }
-    return false;
+
+    if (!tlb_hit(tlb_read_idx(&node->copy, access_type), addr)) {
+        /* This access is not permitted. */
+        return false;
+    }
+
+    /* Install the cached entry. */
+    index = tlbfast_index(fast, addr);
+    qemu_spin_lock(&cpu->neg.tlb.c.lock);
+    copy_tlb_helper_locked(&fast->table[index], &node->copy);
+    qemu_spin_unlock(&cpu->neg.tlb.c.lock);
+
+    desc->fulltlb[index] = node->full;
+    return true;
 }

 static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
@@ -1469,7 +1471,7 @@ static int probe_access_internal(CPUState *cpu, vaddr addr,
     CPUTLBEntryFull *full;

     if (!tlb_hit_page(tlb_addr, page_addr)) {
-        if (!victim_tlb_hit(cpu, mmu_idx, index, access_type, page_addr)) {
+        if (!tlbtree_hit(cpu, mmu_idx, access_type, page_addr)) {
             if (!tlb_fill_align(cpu, addr, access_type, mmu_idx,
                                 0, fault_size, nonfault, retaddr)) {
                 /* Non-faulting page table read failed.  */
@@ -1749,8 +1751,7 @@ static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop,

     /* If the TLB entry is for a different page, reload and try again.  */
     if (!tlb_hit(tlb_addr, addr)) {
-        if (!victim_tlb_hit(cpu, mmu_idx, index, access_type,
-                            addr & TARGET_PAGE_MASK)) {
+        if (!tlbtree_hit(cpu, mmu_idx, access_type, addr)) {
             tlb_fill_align(cpu, addr, access_type, mmu_idx, memop,
                            data->size, false, ra);
             maybe_resized = true;
@@ -1929,8 +1930,7 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     /* Check TLB entry and enforce page permissions. */
     flags = TLB_FLAGS_MASK;
     if (!tlb_hit(tlb_addr_write(tlbe), addr)) {
-        if (!victim_tlb_hit(cpu, mmu_idx, index, MMU_DATA_STORE,
-                            addr & TARGET_PAGE_MASK)) {
+        if (!tlbtree_hit(cpu, mmu_idx, MMU_DATA_STORE, addr)) {
             tlb_fill_align(cpu, addr, MMU_DATA_STORE, mmu_idx,
                            mop, size, false, retaddr);
             did_tlb_fill = true;

From patchwork Wed Oct 9 15:08:47 2024
[174.21.81.121]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-71df0d65278sm7881094b3a.160.2024.10.09.08.09.09 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 09 Oct 2024 08:09:09 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 15/23] accel/tcg: Remove the victim tlb Date: Wed, 9 Oct 2024 08:08:47 -0700 Message-ID: <20241009150855.804605-16-richard.henderson@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org> References: <20241009150855.804605-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::432; envelope-from=richard.henderson@linaro.org; helo=mail-pf1-x432.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This has been functionally replaced by the IntervalTree. Signed-off-by: Richard Henderson Reviewed-by: Pierrick Bouvier --- include/hw/core/cpu.h | 8 ------ accel/tcg/cputlb.c | 64 ------------------------------------------- 2 files changed, 72 deletions(-) diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h index b567abe3e2..87b864f5c4 100644 --- a/include/hw/core/cpu.h +++ b/include/hw/core/cpu.h @@ -198,9 +198,6 @@ struct CPUClass { */ #define NB_MMU_MODES 16 -/* Use a fully associative victim tlb of 8 entries. 
*/ -#define CPU_VTLB_SIZE 8 - /* * The full TLB entry, which is not accessed by generated TCG code, * so the layout is not as critical as that of CPUTLBEntry. This is @@ -282,11 +279,6 @@ typedef struct CPUTLBDesc { /* maximum number of entries observed in the window */ size_t window_max_entries; size_t n_used_entries; - /* The next index to use in the tlb victim table. */ - size_t vindex; - /* The tlb victim table, in two parts. */ - CPUTLBEntry vtable[CPU_VTLB_SIZE]; - CPUTLBEntryFull vfulltlb[CPU_VTLB_SIZE]; CPUTLBEntryFull *fulltlb; /* All active tlb entries for this address space. */ IntervalTreeRoot iroot; diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index b10b0a357c..561f66c723 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -328,8 +328,6 @@ static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast) tlbfast_flush_locked(desc, fast); desc->large_page_addr = -1; desc->large_page_mask = -1; - desc->vindex = 0; - memset(desc->vtable, -1, sizeof(desc->vtable)); interval_tree_free_nodes(&desc->iroot, offsetof(CPUTLBEntryTree, itree)); } @@ -501,15 +499,6 @@ static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry, vaddr page) return tlb_hit_page_mask_anyprot(tlb_entry, page, -1); } -/** - * tlb_entry_is_empty - return true if the entry is not in use - * @te: pointer to CPUTLBEntry - */ -static inline bool tlb_entry_is_empty(const CPUTLBEntry *te) -{ - return te->addr_read == -1 && te->addr_write == -1 && te->addr_code == -1; -} - /* Called with tlb_c.lock held */ static bool tlb_flush_entry_mask_locked(CPUTLBEntry *tlb_entry, vaddr page, @@ -527,28 +516,6 @@ static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry, vaddr page) return tlb_flush_entry_mask_locked(tlb_entry, page, -1); } -/* Called with tlb_c.lock held */ -static void tlb_flush_vtlb_page_mask_locked(CPUState *cpu, int mmu_idx, - vaddr page, - vaddr mask) -{ - CPUTLBDesc *d = &cpu->neg.tlb.d[mmu_idx]; - int k; - - assert_cpu_is_self(cpu); - for (k = 0; k 
< CPU_VTLB_SIZE; k++) { - if (tlb_flush_entry_mask_locked(&d->vtable[k], page, mask)) { - tlb_n_used_entries_dec(cpu, mmu_idx); - } - } -} - -static inline void tlb_flush_vtlb_page_locked(CPUState *cpu, int mmu_idx, - vaddr page) -{ - tlb_flush_vtlb_page_mask_locked(cpu, mmu_idx, page, -1); -} - static void tlbfast_flush_range_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast, vaddr addr, vaddr len, vaddr mask) { @@ -593,7 +560,6 @@ static void tlb_flush_page_locked(CPUState *cpu, int midx, vaddr page) tlbfast_flush_range_locked(desc, &cpu->neg.tlb.f[midx], page, TARGET_PAGE_SIZE, -1); - tlb_flush_vtlb_page_locked(cpu, midx, page); node = tlbtree_lookup_addr(desc, page); if (node) { @@ -769,11 +735,6 @@ static void tlb_flush_range_locked(CPUState *cpu, int midx, tlbfast_flush_range_locked(d, f, addr, len, mask); - for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) { - vaddr page = addr + i; - tlb_flush_vtlb_page_mask_locked(cpu, midx, page, mask); - } - addr_mask = addr & mask; last_mask = addr_mask + len - 1; last_imask = last_mask | ~mask; @@ -1031,10 +992,6 @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length) tlb_reset_dirty_range_locked(&fast->table[i], start1, length); } - for (size_t i = 0; i < CPU_VTLB_SIZE; i++) { - tlb_reset_dirty_range_locked(&desc->vtable[i], start1, length); - } - for (CPUTLBEntryTree *t = tlbtree_lookup_range(desc, 0, -1); t; t = tlbtree_lookup_range_next(t, 0, -1)) { tlb_reset_dirty_range_locked(&t->copy, start1, length); @@ -1068,10 +1025,6 @@ static void tlb_set_dirty(CPUState *cpu, vaddr addr) tlb_set_dirty1_locked(tlb_entry(cpu, mmu_idx, addr), addr); - for (int k = 0; k < CPU_VTLB_SIZE; k++) { - tlb_set_dirty1_locked(&desc->vtable[k], addr); - } - node = tlbtree_lookup_addr(desc, addr); if (node) { tlb_set_dirty1_locked(&node->copy, addr); @@ -1230,23 +1183,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, /* Note that the tlb is no longer clean. 
*/ tlb->c.dirty |= 1 << mmu_idx; - /* Make sure there's no cached translation for the new page. */ - tlb_flush_vtlb_page_locked(cpu, mmu_idx, addr_page); - - /* - * Only evict the old entry to the victim tlb if it's for a - * different page; otherwise just overwrite the stale data. - */ - if (!tlb_hit_page_anyprot(te, addr_page) && !tlb_entry_is_empty(te)) { - unsigned vidx = desc->vindex++ % CPU_VTLB_SIZE; - CPUTLBEntry *tv = &desc->vtable[vidx]; - - /* Evict the old entry into the victim tlb. */ - copy_tlb_helper_locked(tv, te); - desc->vfulltlb[vidx] = desc->fulltlb[index]; - tlb_n_used_entries_dec(cpu, mmu_idx); - } - /* Replace an old IntervalTree entry, or create a new one. */ node = tlbtree_lookup_addr(desc, addr_page); if (!node) { From patchwork Wed Oct 9 15:08:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13828624 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 44940CEDDA4 for ; Wed, 9 Oct 2024 15:11:50 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1syYJT-0001Xw-Ik; Wed, 09 Oct 2024 11:09:19 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1syYJR-0001XI-RW for qemu-devel@nongnu.org; Wed, 09 Oct 2024 11:09:17 -0400 Received: from mail-pf1-x432.google.com ([2607:f8b0:4864:20::432]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1syYJM-0007yl-BL for qemu-devel@nongnu.org; Wed, 09 Oct 2024 11:09:17 -0400 Received: by 
mail-pf1-x432.google.com with SMTP id d2e1a72fcca58-71df2b0a2f7so3734435b3a.3 for ; Wed, 09 Oct 2024 08:09:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1728486551; x=1729091351; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=tHejgt59fwDDl/z4Gy/UG4eq0kEJ+i5rqUHafxjYfQk=; b=NTz6aoJmMOwUcm/JwlYLvYp8fBNF0us+sHBurDHz6iF5hGxKBx42jCMBkc5A8D7Iyd zdwcW0JNwTGYuA8USApGsDA7L4TdVjdDO9ZT9B7PL5kq//hQdTDm+7RsI5Uq3hGihq5F y4e7qxlLynTbr2DKm1Zq9PYcVq/MqdR3bliTrPbtijPynwjOyl/wGpNZ3UhumZwtSSpX mO1QW6SfTuVXDaKCCgsjNx2x9XTOYKEne709RpmCC9WlVtLLN7zeYKcYKg2OuxusG1ll dmrMuaQoGHErnazqrihvSKMMb6UHg1wctXpMIhsLZ9DT0MAPIEbN3qnWed4S0CdWLSMP dPXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1728486551; x=1729091351; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=tHejgt59fwDDl/z4Gy/UG4eq0kEJ+i5rqUHafxjYfQk=; b=JoN8jMQWV+pCdEflE8Jxz0MAOJE+fk0EPYqmSjbRUS/XPKJNaCeGH5hzVmVyO2f1ny O3cc8Dr0j8QUtN6X2mQ/0kU7YQ2/B1ehf5pGQWkqZ910KjQUQCsQaIU7et092gTCdWPq iTuBabE8nctRAUwaeY7TDkzbqd+xoXH7EPEcfONlW3VL5OMtWiZqn/JHBdP+quKZUIIO 4JZKPbqBkyVfrwGVeGhJW0wJqy8YaBWMiOJ7/F2RA6x+LCiehQ7ec2Cmb3uBW7Rc8Q8N R9cvxQWvmLWozndi8JZOcrw6JWLMWTgmbybwyqygmBvCfxLwcD8JJoQnwtW9fi52nfdF T51w== X-Gm-Message-State: AOJu0YwjmcAAwh2VisptVfebHodUBdU9ctwNXtYY5twVh5XsuB4D29Rn sv0CRDhYxhhCkkyDR0mKEpwCATY/iRlkpyf81SEbQaq8ZYf8AwkwxY54+fe3sWapu4BDkII/Ufw u X-Google-Smtp-Source: AGHT+IGnxO4UcoMHts0d6LgunxKV6UjUFs+1KrspPfi6pfoQ3qDp2E3AO6QM5KPlBp6KFTo7UnN6lA== X-Received: by 2002:a05:6a20:d707:b0:1d3:2976:13e with SMTP id adf61e73a8af0-1d8a3c2666emr4537153637.30.1728486550886; Wed, 09 Oct 2024 08:09:10 -0700 (PDT) Received: from stoup.. (174-21-81-121.tukw.qwest.net. 
[174.21.81.121]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-71df0d65278sm7881094b3a.160.2024.10.09.08.09.10 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 09 Oct 2024 08:09:10 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 16/23] include/exec/tlb-common: Move CPUTLBEntryFull from hw/core/cpu.h Date: Wed, 9 Oct 2024 08:08:48 -0700 Message-ID: <20241009150855.804605-17-richard.henderson@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org> References: <20241009150855.804605-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::432; envelope-from=richard.henderson@linaro.org; helo=mail-pf1-x432.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org CPUTLBEntryFull structures are no longer directly included within the CPUState structure. Move the structure definition out of cpu.h to reduce visibility. 
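The struct being moved here ends with an `extra` union so that each target can
cache its own page-table details without changing the shared layout. A
freestanding sketch of that pattern (the `DemoEntryFull` type and its fields
are invented for illustration, not QEMU's definitions):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* A generic TLB-style entry with a union for target-specific extras. */
typedef struct DemoEntryFull {
    uint64_t phys_addr;
    uint8_t prot;
    /* Each target reads/writes only its own member of the union. */
    union {
        struct { uint8_t pte_attrs; uint8_t shareability; } arm;
        struct { uint8_t pbmt; } riscv;
    } extra;
} DemoEntryFull;

/* Accessor a hypothetical arm target would use. */
static uint8_t demo_arm_attrs(const DemoEntryFull *e)
{
    return e->extra.arm.pte_attrs;
}
```

The union costs only as much space as its largest member, so adding a new
target's cache fields does not grow the struct for everyone else unless the
new member becomes the largest.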
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 include/exec/tlb-common.h | 63 +++++++++++++++++++++++++++++++++++++++
 include/hw/core/cpu.h     | 63 ---------------------------------------
 2 files changed, 63 insertions(+), 63 deletions(-)

diff --git a/include/exec/tlb-common.h b/include/exec/tlb-common.h
index dc5a5faa0b..300f9fae67 100644
--- a/include/exec/tlb-common.h
+++ b/include/exec/tlb-common.h
@@ -53,4 +53,67 @@ typedef struct CPUTLBDescFast {
     CPUTLBEntry *table;
 } CPUTLBDescFast QEMU_ALIGNED(2 * sizeof(void *));
 
+/*
+ * The full TLB entry, which is not accessed by generated TCG code,
+ * so the layout is not as critical as that of CPUTLBEntry. This is
+ * also why we don't want to combine the two structs.
+ */
+struct CPUTLBEntryFull {
+    /*
+     * @xlat_section contains:
+     *  - in the lower TARGET_PAGE_BITS, a physical section number
+     *  - with the lower TARGET_PAGE_BITS masked off, an offset which
+     *    must be added to the virtual address to obtain:
+     *     + the ram_addr_t of the target RAM (if the physical section
+     *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
+     *     + the offset within the target MemoryRegion (otherwise)
+     */
+    hwaddr xlat_section;
+
+    /*
+     * @phys_addr contains the physical address in the address space
+     * given by cpu_asidx_from_attrs(cpu, @attrs).
+     */
+    hwaddr phys_addr;
+
+    /* @attrs contains the memory transaction attributes for the page. */
+    MemTxAttrs attrs;
+
+    /* @prot contains the complete protections for the page. */
+    uint8_t prot;
+
+    /* @lg_page_size contains the log2 of the page size. */
+    uint8_t lg_page_size;
+
+    /* Additional tlb flags requested by tlb_fill. */
+    uint8_t tlb_fill_flags;
+
+    /*
+     * Additional tlb flags for use by the slow path. If non-zero,
+     * the corresponding CPUTLBEntry comparator must have TLB_FORCE_SLOW.
+     */
+    uint8_t slow_flags[MMU_ACCESS_COUNT];
+
+    /*
+     * Allow target-specific additions to this structure.
+     * This may be used to cache items from the guest cpu
+     * page tables for later use by the implementation.
+     */
+    union {
+        /*
+         * Cache the attrs and shareability fields from the page table entry.
+         *
+         * For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
+         * Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
+         * For shareability and guarded, as in the SH and GP fields respectively
+         * of the VMSAv8-64 PTEs.
+         */
+        struct {
+            uint8_t pte_attrs;
+            uint8_t shareability;
+            bool guarded;
+        } arm;
+    } extra;
+};
+
 #endif /* EXEC_TLB_COMMON_H */
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 87b864f5c4..6b1c2bfadd 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -198,69 +198,6 @@ struct CPUClass {
  */
 #define NB_MMU_MODES 16
 
-/*
- * The full TLB entry, which is not accessed by generated TCG code,
- * so the layout is not as critical as that of CPUTLBEntry. This is
- * also why we don't want to combine the two structs.
- */
-struct CPUTLBEntryFull {
-    /*
-     * @xlat_section contains:
-     *  - in the lower TARGET_PAGE_BITS, a physical section number
-     *  - with the lower TARGET_PAGE_BITS masked off, an offset which
-     *    must be added to the virtual address to obtain:
-     *     + the ram_addr_t of the target RAM (if the physical section
-     *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
-     *     + the offset within the target MemoryRegion (otherwise)
-     */
-    hwaddr xlat_section;
-
-    /*
-     * @phys_addr contains the physical address in the address space
-     * given by cpu_asidx_from_attrs(cpu, @attrs).
-     */
-    hwaddr phys_addr;
-
-    /* @attrs contains the memory transaction attributes for the page. */
-    MemTxAttrs attrs;
-
-    /* @prot contains the complete protections for the page. */
-    uint8_t prot;
-
-    /* @lg_page_size contains the log2 of the page size. */
-    uint8_t lg_page_size;
-
-    /* Additional tlb flags requested by tlb_fill. */
-    uint8_t tlb_fill_flags;
-
-    /*
-     * Additional tlb flags for use by the slow path. If non-zero,
-     * the corresponding CPUTLBEntry comparator must have TLB_FORCE_SLOW.
-     */
-    uint8_t slow_flags[MMU_ACCESS_COUNT];
-
-    /*
-     * Allow target-specific additions to this structure.
-     * This may be used to cache items from the guest cpu
-     * page tables for later use by the implementation.
-     */
-    union {
-        /*
-         * Cache the attrs and shareability fields from the page table entry.
-         *
-         * For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
-         * Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
-         * For shareability and guarded, as in the SH and GP fields respectively
-         * of the VMSAv8-64 PTEs.
-         */
-        struct {
-            uint8_t pte_attrs;
-            uint8_t shareability;
-            bool guarded;
-        } arm;
-    } extra;
-};
-
 /*
  * Data elements that are per MMU mode, minus the bits accessed by
  * the TCG fast path.

From patchwork Wed Oct 9 15:08:49 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828615
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 17/23] accel/tcg: Delay plugin adjustment in probe_access_internal
Date: Wed, 9 Oct 2024 08:08:49 -0700
Message-ID: <20241009150855.804605-18-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

Remove force_mmio and place the expression into the IF expression,
behind the short-circuit logic expressions that might eliminate
its computation.
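The optimization rests on C's short-circuit evaluation: once the callback
check sits after the earlier operands of the `if`, it is evaluated only when
those operands have not already decided the result. A standalone sketch of
the difference (counter and function names are invented for illustration,
not QEMU code):

```c
#include <assert.h>
#include <stdbool.h>

static int probe_calls;  /* counts how often the costly query runs */

/* Stands in for a query we would rather not evaluate eagerly. */
static bool expensive_probe(void)
{
    probe_calls++;
    return true;
}

/* Eager style: the probe always runs, even when its result is unused. */
static bool check_eager(bool fast_path_flag)
{
    bool force_slow = expensive_probe();
    return fast_path_flag || force_slow;
}

/* Short-circuit style: the probe runs only if the first operand is false. */
static bool check_lazy(bool fast_path_flag)
{
    return fast_path_flag || expensive_probe();
}
```

Both functions return the same value for every input; only the number of
calls to the probe differs, which is exactly what moving `force_mmio` into
the `if` buys.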
Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 561f66c723..59ee766d51 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1403,7 +1403,6 @@ static int probe_access_internal(CPUState *cpu, vaddr addr,
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     vaddr page_addr = addr & TARGET_PAGE_MASK;
     int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
-    bool force_mmio = check_mem_cbs && cpu_plugin_mem_cbs_enabled(cpu);
     CPUTLBEntryFull *full;
 
     if (!tlb_hit_page(tlb_addr, page_addr)) {
@@ -1434,9 +1433,14 @@ static int probe_access_internal(CPUState *cpu, vaddr addr,
     *pfull = full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
     flags |= full->slow_flags[access_type];
 
-    /* Fold all "mmio-like" bits into TLB_MMIO. This is not RAM. */
-    if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY | TLB_CHECK_ALIGNED))
-        || (access_type != MMU_INST_FETCH && force_mmio)) {
+    /*
+     * Fold all "mmio-like" bits, and required plugin callbacks, to TLB_MMIO.
+     * These cannot be treated as RAM.
+     */
+    if ((flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY | TLB_CHECK_ALIGNED))
+        || (access_type != MMU_INST_FETCH
+            && check_mem_cbs
+            && cpu_plugin_mem_cbs_enabled(cpu))) {
         *phost = NULL;
         return TLB_MMIO;
     }

From patchwork Wed Oct 9 15:08:50 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828621
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 18/23] accel/tcg: Call cpu_ld*_code_mmu from cpu_ld*_code
Date: Wed, 9 Oct 2024 08:08:50 -0700
Message-ID: <20241009150855.804605-19-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

Ensure a common entry point for all code lookups.
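Funnelling the narrow helpers through one function means any future
bookkeeping (tracing, statistics, instrumentation) needs to be added in only
one place. A standalone sketch of the shape of this change (names and the
little-endian assembly are invented for illustration, not QEMU's API):

```c
#include <assert.h>
#include <stdint.h>

static int lookups;  /* bookkeeping shared by every entry point */

/* The single common entry point: all code loads funnel through here. */
static uint64_t do_code_load(const uint8_t *mem, uintptr_t addr, int size)
{
    uint64_t val = 0;
    lookups++;
    for (int i = 0; i < size; i++) {
        val |= (uint64_t)mem[addr + i] << (8 * i);  /* little-endian */
    }
    return val;
}

/* Narrow wrappers delegate rather than duplicating the logic. */
static uint8_t ldub_code(const uint8_t *mem, uintptr_t addr)
{
    return do_code_load(mem, addr, 1);
}

static uint32_t ldl_code(const uint8_t *mem, uintptr_t addr)
{
    return do_code_load(mem, addr, 4);
}
```

With this structure, incrementing `lookups` (or any other per-lookup hook)
is written once and automatically covers every access width.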
Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Pierrick Bouvier --- accel/tcg/cputlb.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 59ee766d51..61daa89e06 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2954,28 +2954,28 @@ uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr) { CPUState *cs = env_cpu(env); MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(cs, true)); - return do_ld1_mmu(cs, addr, oi, 0, MMU_INST_FETCH); + return cpu_ldb_code_mmu(env, addr, oi, 0); } uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr) { CPUState *cs = env_cpu(env); MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(cs, true)); - return do_ld2_mmu(cs, addr, oi, 0, MMU_INST_FETCH); + return cpu_ldw_code_mmu(env, addr, oi, 0); } uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr) { CPUState *cs = env_cpu(env); MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(cs, true)); - return do_ld4_mmu(cs, addr, oi, 0, MMU_INST_FETCH); + return cpu_ldl_code_mmu(env, addr, oi, 0); } uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr) { CPUState *cs = env_cpu(env); MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(cs, true)); - return do_ld8_mmu(cs, addr, oi, 0, MMU_INST_FETCH); + return cpu_ldq_code_mmu(env, addr, oi, 0); } uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr, From patchwork Wed Oct 9 15:08:51 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13828626 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 757F6CEDDA4 for ; Wed, 9 Oct 2024 15:12:19 +0000 (UTC) 
Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1syYJU-0001Yt-6O; Wed, 09 Oct 2024 11:09:20 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1syYJR-0001X5-7k for qemu-devel@nongnu.org; Wed, 09 Oct 2024 11:09:17 -0400 Received: from mail-pf1-x42c.google.com ([2607:f8b0:4864:20::42c]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1syYJP-0007za-8n for qemu-devel@nongnu.org; Wed, 09 Oct 2024 11:09:16 -0400 Received: by mail-pf1-x42c.google.com with SMTP id d2e1a72fcca58-71e15fe56c9so1476756b3a.3 for ; Wed, 09 Oct 2024 08:09:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1728486554; x=1729091354; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=s/iMv5B425r44+NF+z2djHWwBwMWohkmdSTVLek2FyE=; b=VMC4GgeTRgMmmb76P0yNO/SStyfX4zrIRvpRu2tVmH6SmS8QGcmztKaK650Uj2Q+Sn dp+lCGk0q5HbzxW0DiNaPNLSKAK4U7025bm+4Y9rfWcDSl63nZvFxdPpgTd9NZPN7N48 08A7nrez8iJshIvO8FFknb7PT8GHZXKWFslNIpyyeCc0OH3GBw1whEBm63r7j10x8uHO dv8wz2nDjlqu/svmSOBzfazwDvJCgWv6TE2Y8cd9+YyhBBYYjQbAj3TKf1IqfV0+N/+h L5YmHrHsNI1i5wnV0VX3DooeSsWoD6CEDQY2hnoAYenqeLUvU/+oIrJuUuzMLECXVd7n HeZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1728486554; x=1729091354; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=s/iMv5B425r44+NF+z2djHWwBwMWohkmdSTVLek2FyE=; b=T2CFe8Bs55DXjfQqe5+staq7moaMAasDgM8luIuOFp8G/DZFiZ2y9f+QIIWpR3rs6J lzIHL8faRTHvI2C7S5lSP7/D0y1XAug6Ii7CmSBoOEuBlSYW4M5V1J3rWMiK5b+7P8PQ 0fJfqSOnr55iEOqoBop6mnGTqX4DcQ+SNgJfzZaGBWiMVY45MXRBzxLvCwFExJYCUc8W 
l6W4ydqkPWPaSEF77xHZHoT9IaQwnzUwt8CvcOm+OtCfXTetO1tKyedLamx9VAQtG9HG 5XiC8FXiL8HDtT1M6Tu2cSCbkiHQ7dWv/TQqeMbNtdyYV2rGNX3hxZ26RoI1h08ux7S4 8stQ== X-Gm-Message-State: AOJu0YzBZY65O4cOIIz7xAlSViko3dHWsmIenBJWINbu280GPKGWHvlL EsWVGJ5aetai9k6TXvArpkVrPMB7ZG9sS+CMKBthQyFhZE5CfMag5KQWjfuBJFDmKkUyDn7ZZ2x p X-Google-Smtp-Source: AGHT+IH+iKe/vWUjIGIl67j9gr5GVnLt4pGcHwj1L/Zoo51/5oM/67QtZTtGSjejYmidColmMY07JA== X-Received: by 2002:a05:6a21:114f:b0:1c1:61a9:de4a with SMTP id adf61e73a8af0-1d8ad7dd77cmr434700637.24.1728486553843; Wed, 09 Oct 2024 08:09:13 -0700 (PDT) Received: from stoup.. (174-21-81-121.tukw.qwest.net. [174.21.81.121]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-71df0d65278sm7881094b3a.160.2024.10.09.08.09.12 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 09 Oct 2024 08:09:13 -0700 (PDT) From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 19/23] accel/tcg: Always use IntervalTree for code lookups Date: Wed, 9 Oct 2024 08:08:51 -0700 Message-ID: <20241009150855.804605-20-richard.henderson@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org> References: <20241009150855.804605-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::42c; envelope-from=richard.henderson@linaro.org; helo=mail-pf1-x42c.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Because translation is 
special, we don't need the speed of the direct-mapped softmmu tlb. We cache a lookup in DisasContextBase within the translator loop anyway. Drop the addr_code comparator from CPUTLBEntry. Go directly to the IntervalTree for MMU_INST_FETCH. Derive exec flags from read flags.

Signed-off-by: Richard Henderson
---
include/exec/cpu-all.h | 3 + include/exec/tlb-common.h | 5 +- accel/tcg/cputlb.c | 138 +++++++++++++++++++++++++++++--------- 3 files changed, 110 insertions(+), 36 deletions(-) diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h index 6f09b86e7f..7f5a10962a 100644 --- a/include/exec/cpu-all.h +++ b/include/exec/cpu-all.h @@ -326,6 +326,9 @@ static inline int cpu_mmu_index(CPUState *cs, bool ifetch) (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \ | TLB_FORCE_SLOW | TLB_DISCARD_WRITE) +/* Filter read flags to exec flags. */ +#define TLB_EXEC_FLAGS_MASK (TLB_MMIO) + /* * Flags stored in CPUTLBEntryFull.slow_flags[x]. * TLB_FORCE_SLOW must be set in CPUTLBEntry.addr_idx[x]. diff --git a/include/exec/tlb-common.h b/include/exec/tlb-common.h index 300f9fae67..feaa471299 100644 --- a/include/exec/tlb-common.h +++ b/include/exec/tlb-common.h @@ -26,7 +26,6 @@ typedef union CPUTLBEntry { struct { uint64_t addr_read; uint64_t addr_write; - uint64_t addr_code; /* * Addend to virtual address to get host address. IO accesses * use the corresponding iotlb value. @@ -35,7 +34,7 @@ }; /* * Padding to get a power of two size, as well as index - * access to addr_{read,write,code}. + * access to addr_{read,write}. */ uint64_t addr_idx[(1 << CPU_TLB_ENTRY_BITS) / sizeof(uint64_t)]; } CPUTLBEntry; @@ -92,7 +91,7 @@ struct CPUTLBEntryFull { * Additional tlb flags for use by the slow path. If non-zero, * the corresponding CPUTLBEntry comparator must have TLB_FORCE_SLOW. */ - uint8_t slow_flags[MMU_ACCESS_COUNT]; + uint8_t slow_flags[2]; /* * Allow target-specific additions to this structure. 
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 61daa89e06..7c8308355d 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -114,8 +114,9 @@ static inline uint64_t tlb_read_idx(const CPUTLBEntry *entry, MMU_DATA_LOAD * sizeof(uint64_t)); QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) != MMU_DATA_STORE * sizeof(uint64_t)); - QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) != - MMU_INST_FETCH * sizeof(uint64_t)); + + tcg_debug_assert(access_type == MMU_DATA_LOAD || + access_type == MMU_DATA_STORE); #if TARGET_LONG_BITS == 32 /* Use qatomic_read, in case of addr_write; only care about low bits. */ @@ -490,8 +491,7 @@ static bool tlb_hit_page_mask_anyprot(CPUTLBEntry *tlb_entry, mask &= TARGET_PAGE_MASK | TLB_INVALID_MASK; return (page == (tlb_entry->addr_read & mask) || - page == (tlb_addr_write(tlb_entry) & mask) || - page == (tlb_entry->addr_code & mask)); + page == (tlb_addr_write(tlb_entry) & mask)); } static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry, vaddr page) @@ -1061,15 +1061,13 @@ static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent, vaddr address, int flags, MMUAccessType access_type, bool enable) { - if (enable) { - address |= flags & TLB_FLAGS_MASK; - flags &= TLB_SLOW_FLAGS_MASK; - if (flags) { - address |= TLB_FORCE_SLOW; - } - } else { - address = -1; - flags = 0; + if (!enable) { + address = TLB_INVALID_MASK; + } + address |= flags & TLB_FLAGS_MASK; + flags &= TLB_SLOW_FLAGS_MASK; + if (flags) { + address |= TLB_FORCE_SLOW; } ent->addr_idx[access_type] = address; full->slow_flags[access_type] = flags; @@ -1215,9 +1213,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, /* Now calculate the new entry */ node->copy.addend = addend - addr_page; - tlb_set_compare(full, &node->copy, addr_page, read_flags, - MMU_INST_FETCH, prot & PAGE_EXEC); - if (wp_flags & BP_MEM_READ) { read_flags |= TLB_WATCHPOINT; } @@ -1392,21 +1387,52 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, 
unsigned size, } } -static int probe_access_internal(CPUState *cpu, vaddr addr, - int fault_size, MMUAccessType access_type, - int mmu_idx, bool nonfault, - void **phost, CPUTLBEntryFull **pfull, - uintptr_t retaddr, bool check_mem_cbs) +static int probe_access_internal_code(CPUState *cpu, vaddr addr, + int fault_size, int mmu_idx, + bool nonfault, + void **phost, CPUTLBEntryFull **pfull, + uintptr_t retaddr) +{ + CPUTLBEntryTree *t = tlbtree_lookup_addr(&cpu->neg.tlb.d[mmu_idx], addr); + int flags; + + if (!t || !(t->full.prot & PAGE_EXEC)) { + if (!tlb_fill_align(cpu, addr, MMU_INST_FETCH, mmu_idx, + 0, fault_size, nonfault, retaddr)) { + /* Non-faulting page table read failed. */ + *phost = NULL; + *pfull = NULL; + return TLB_INVALID_MASK; + } + t = tlbtree_lookup_addr(&cpu->neg.tlb.d[mmu_idx], addr); + } + flags = t->copy.addr_read & TLB_EXEC_FLAGS_MASK; + *pfull = &t->full; + + if (flags) { + *phost = NULL; + return TLB_MMIO; + } + + /* Everything else is RAM. */ + *phost = (void *)((uintptr_t)addr + t->copy.addend); + return flags; +} + +static int probe_access_internal_data(CPUState *cpu, vaddr addr, + int fault_size, MMUAccessType access_type, + int mmu_idx, bool nonfault, + void **phost, CPUTLBEntryFull **pfull, + uintptr_t retaddr, bool check_mem_cbs) { uintptr_t index = tlb_index(cpu, mmu_idx, addr); CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr); uint64_t tlb_addr = tlb_read_idx(entry, access_type); - vaddr page_addr = addr & TARGET_PAGE_MASK; int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW; CPUTLBEntryFull *full; - if (!tlb_hit_page(tlb_addr, page_addr)) { - if (!tlbtree_hit(cpu, mmu_idx, access_type, page_addr)) { + if (!tlb_hit(tlb_addr, addr)) { + if (!tlbtree_hit(cpu, mmu_idx, access_type, addr)) { if (!tlb_fill_align(cpu, addr, access_type, mmu_idx, 0, fault_size, nonfault, retaddr)) { /* Non-faulting page table read failed. 
*/ @@ -1450,6 +1476,21 @@ static int probe_access_internal(CPUState *cpu, vaddr addr, return flags; } +static int probe_access_internal(CPUState *cpu, vaddr addr, + int fault_size, MMUAccessType access_type, + int mmu_idx, bool nonfault, + void **phost, CPUTLBEntryFull **pfull, + uintptr_t retaddr, bool check_mem_cbs) +{ + if (access_type == MMU_INST_FETCH) { + return probe_access_internal_code(cpu, addr, fault_size, mmu_idx, + nonfault, phost, pfull, retaddr); + } + return probe_access_internal_data(cpu, addr, fault_size, access_type, + mmu_idx, nonfault, phost, pfull, + retaddr, check_mem_cbs); +} + int probe_access_full(CPUArchState *env, vaddr addr, int size, MMUAccessType access_type, int mmu_idx, bool nonfault, void **phost, CPUTLBEntryFull **pfull, @@ -1582,9 +1623,9 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr, CPUTLBEntryFull *full; void *p; - (void)probe_access_internal(env_cpu(env), addr, 1, MMU_INST_FETCH, - cpu_mmu_index(env_cpu(env), true), false, - &p, &full, 0, false); + (void)probe_access_internal_code(env_cpu(env), addr, 1, + cpu_mmu_index(env_cpu(env), true), + false, &p, &full, 0); if (p == NULL) { return -1; } @@ -1678,8 +1719,31 @@ typedef struct MMULookupLocals { * tlb_fill_align will longjmp out. Return true if the softmmu tlb for * @mmu_idx may have resized. 
*/ -static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop, - int mmu_idx, MMUAccessType access_type, uintptr_t ra) +static bool mmu_lookup1_code(CPUState *cpu, MMULookupPageData *data, + MemOp memop, int mmu_idx, uintptr_t ra) +{ + vaddr addr = data->addr; + CPUTLBEntryTree *t = tlbtree_lookup_addr(&cpu->neg.tlb.d[mmu_idx], addr); + bool maybe_resized = true; + + if (!t || !(t->full.prot & PAGE_EXEC)) { + tlb_fill_align(cpu, addr, MMU_INST_FETCH, mmu_idx, + memop, data->size, false, ra); + maybe_resized = true; + t = tlbtree_lookup_addr(&cpu->neg.tlb.d[mmu_idx], addr); + } + + data->full = &t->full; + data->flags = t->copy.addr_read & TLB_EXEC_FLAGS_MASK; + /* Compute haddr speculatively; depending on flags it might be invalid. */ + data->haddr = (void *)((uintptr_t)addr + t->copy.addend); + + return maybe_resized; +} + +static bool mmu_lookup1_data(CPUState *cpu, MMULookupPageData *data, + MemOp memop, int mmu_idx, + MMUAccessType access_type, uintptr_t ra) { vaddr addr = data->addr; uintptr_t index = tlb_index(cpu, mmu_idx, addr); @@ -1738,6 +1802,15 @@ static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop, return maybe_resized; } +static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop, + int mmu_idx, MMUAccessType access_type, uintptr_t ra) +{ + if (access_type == MMU_INST_FETCH) { + return mmu_lookup1_code(cpu, data, memop, mmu_idx, ra); + } + return mmu_lookup1_data(cpu, data, memop, mmu_idx, access_type, ra); +} + /** * mmu_watch_or_dirty * @cpu: generic cpu state @@ -1885,13 +1958,13 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, } } + full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index]; + /* * Let the guest notice RMW on a write-only page. * We have just verified that the page is writable. - * Subpage lookups may have left TLB_INVALID_MASK set, - * but addr_read will only be -1 if PAGE_READ was unset. 
*/ - if (unlikely(tlbe->addr_read == -1)) { + if (unlikely(!(full->prot & PAGE_READ))) { tlb_fill_align(cpu, addr, MMU_DATA_LOAD, mmu_idx, 0, size, false, retaddr); /* @@ -1929,7 +2002,6 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, } hostaddr = (void *)((uintptr_t)addr + tlbe->addend); - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index]; if (unlikely(flags & TLB_NOTDIRTY)) { notdirty_write(cpu, addr, size, full, retaddr);
From patchwork Wed Oct 9 15:08:52 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 20/23] accel/tcg: Link CPUTLBEntry to CPUTLBEntryTree
Date: Wed, 9 Oct 2024 08:08:52 -0700
Message-ID: <20241009150855.804605-21-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

Link from the fast tlb entry to the interval tree node.

Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
include/exec/tlb-common.h | 2 ++ accel/tcg/cputlb.c | 59 ++++++++++++++------------------------- 2 files changed, 23 insertions(+), 38 deletions(-) diff --git a/include/exec/tlb-common.h b/include/exec/tlb-common.h index feaa471299..3b57d61112 100644 --- a/include/exec/tlb-common.h +++ b/include/exec/tlb-common.h @@ -31,6 +31,8 @@ typedef union CPUTLBEntry { * use the corresponding iotlb value. */ uintptr_t addend; + /* The defining IntervalTree entry. 
*/ + struct CPUTLBEntryTree *tree; }; /* * Padding to get a power of two size, as well as index diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 7c8308355d..2a8d1b4fb2 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -505,7 +505,10 @@ static bool tlb_flush_entry_mask_locked(CPUTLBEntry *tlb_entry, vaddr mask) { if (tlb_hit_page_mask_anyprot(tlb_entry, page, mask)) { - memset(tlb_entry, -1, sizeof(*tlb_entry)); + tlb_entry->addr_read = -1; + tlb_entry->addr_write = -1; + tlb_entry->addend = 0; + tlb_entry->tree = NULL; return true; } return false; @@ -1212,6 +1215,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, /* Now calculate the new entry */ node->copy.addend = addend - addr_page; + node->copy.tree = node; if (wp_flags & BP_MEM_READ) { read_flags |= TLB_WATCHPOINT; @@ -1425,7 +1429,6 @@ static int probe_access_internal_data(CPUState *cpu, vaddr addr, void **phost, CPUTLBEntryFull **pfull, uintptr_t retaddr, bool check_mem_cbs) { - uintptr_t index = tlb_index(cpu, mmu_idx, addr); CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr); uint64_t tlb_addr = tlb_read_idx(entry, access_type); int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW; @@ -1442,7 +1445,6 @@ static int probe_access_internal_data(CPUState *cpu, vaddr addr, } /* TLB resize via tlb_fill_align may have moved the entry. */ - index = tlb_index(cpu, mmu_idx, addr); entry = tlb_entry(cpu, mmu_idx, addr); /* @@ -1456,7 +1458,7 @@ static int probe_access_internal_data(CPUState *cpu, vaddr addr, } flags &= tlb_addr; - *pfull = full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index]; + *pfull = full = &entry->tree->full; flags |= full->slow_flags[access_type]; /* @@ -1659,7 +1661,6 @@ bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx, bool is_store, struct qemu_plugin_hwaddr *data) { CPUTLBEntry *tlbe = tlb_entry(cpu, mmu_idx, addr); - uintptr_t index = tlb_index(cpu, mmu_idx, addr); MMUAccessType access_type = is_store ? 
MMU_DATA_STORE : MMU_DATA_LOAD; uint64_t tlb_addr = tlb_read_idx(tlbe, access_type); CPUTLBEntryFull *full; @@ -1668,7 +1669,7 @@ bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx, return false; } - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index]; + full = &tlbe->tree->full; data->phys_addr = full->phys_addr | (addr & ~TARGET_PAGE_MASK); /* We must have an iotlb entry for MMIO */ @@ -1716,20 +1717,17 @@ typedef struct MMULookupLocals { * * Resolve the translation for the one page at @data.addr, filling in * the rest of @data with the results. If the translation fails, - * tlb_fill_align will longjmp out. Return true if the softmmu tlb for - * @mmu_idx may have resized. + * tlb_fill_align will longjmp out. */ -static bool mmu_lookup1_code(CPUState *cpu, MMULookupPageData *data, +static void mmu_lookup1_code(CPUState *cpu, MMULookupPageData *data, MemOp memop, int mmu_idx, uintptr_t ra) { vaddr addr = data->addr; CPUTLBEntryTree *t = tlbtree_lookup_addr(&cpu->neg.tlb.d[mmu_idx], addr); - bool maybe_resized = true; if (!t || !(t->full.prot & PAGE_EXEC)) { tlb_fill_align(cpu, addr, MMU_INST_FETCH, mmu_idx, memop, data->size, false, ra); - maybe_resized = true; t = tlbtree_lookup_addr(&cpu->neg.tlb.d[mmu_idx], addr); } @@ -1737,19 +1735,16 @@ static bool mmu_lookup1_code(CPUState *cpu, MMULookupPageData *data, data->flags = t->copy.addr_read & TLB_EXEC_FLAGS_MASK; /* Compute haddr speculatively; depending on flags it might be invalid. 
*/ data->haddr = (void *)((uintptr_t)addr + t->copy.addend); - - return maybe_resized; } -static bool mmu_lookup1_data(CPUState *cpu, MMULookupPageData *data, +static void mmu_lookup1_data(CPUState *cpu, MMULookupPageData *data, MemOp memop, int mmu_idx, MMUAccessType access_type, uintptr_t ra) { vaddr addr = data->addr; - uintptr_t index = tlb_index(cpu, mmu_idx, addr); CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr); uint64_t tlb_addr = tlb_read_idx(entry, access_type); - bool maybe_resized = false; + bool did_tlb_fill = false; CPUTLBEntryFull *full; int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW; @@ -1758,8 +1753,7 @@ static bool mmu_lookup1_data(CPUState *cpu, MMULookupPageData *data, if (!tlbtree_hit(cpu, mmu_idx, access_type, addr)) { tlb_fill_align(cpu, addr, access_type, mmu_idx, memop, data->size, false, ra); - maybe_resized = true; - index = tlb_index(cpu, mmu_idx, addr); + did_tlb_fill = true; entry = tlb_entry(cpu, mmu_idx, addr); /* * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately, @@ -1771,11 +1765,11 @@ static bool mmu_lookup1_data(CPUState *cpu, MMULookupPageData *data, tlb_addr = tlb_read_idx(entry, access_type); } - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index]; - flags = tlb_addr & (TLB_FLAGS_MASK & ~TLB_FORCE_SLOW); + full = &entry->tree->full; + flags &= tlb_addr; flags |= full->slow_flags[access_type]; - if (likely(!maybe_resized)) { + if (likely(!did_tlb_fill)) { /* Alignment has not been checked by tlb_fill_align. */ int a_bits = memop_alignment_bits(memop); @@ -1798,17 +1792,15 @@ static bool mmu_lookup1_data(CPUState *cpu, MMULookupPageData *data, data->flags = flags; /* Compute haddr speculatively; depending on flags it might be invalid. 
*/ data->haddr = (void *)((uintptr_t)addr + entry->addend); - - return maybe_resized; } -static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop, +static void mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop, int mmu_idx, MMUAccessType access_type, uintptr_t ra) { if (access_type == MMU_INST_FETCH) { - return mmu_lookup1_code(cpu, data, memop, mmu_idx, ra); + mmu_lookup1_code(cpu, data, memop, mmu_idx, ra); } - return mmu_lookup1_data(cpu, data, memop, mmu_idx, access_type, ra); + mmu_lookup1_data(cpu, data, memop, mmu_idx, access_type, ra); } /** @@ -1889,15 +1881,9 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, l->page[1].size = l->page[0].size - size0; l->page[0].size = size0; - /* - * Lookup both pages, recognizing exceptions from either. If the - * second lookup potentially resized, refresh first CPUTLBEntryFull. - */ + /* Lookup both pages, recognizing exceptions from either. */ mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra); - if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) { - uintptr_t index = tlb_index(cpu, l->mmu_idx, addr); - l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index]; - } + mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra); flags = l->page[0].flags | l->page[1].flags; if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) { @@ -1925,7 +1911,6 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, { uintptr_t mmu_idx = get_mmuidx(oi); MemOp mop = get_memop(oi); - uintptr_t index; CPUTLBEntry *tlbe; void *hostaddr; CPUTLBEntryFull *full; @@ -1937,7 +1922,6 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, /* Adjust the given return address. */ retaddr -= GETPC_ADJ; - index = tlb_index(cpu, mmu_idx, addr); tlbe = tlb_entry(cpu, mmu_idx, addr); /* Check TLB entry and enforce page permissions. 
*/ @@ -1947,7 +1931,6 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, tlb_fill_align(cpu, addr, MMU_DATA_STORE, mmu_idx, mop, size, false, retaddr); did_tlb_fill = true; - index = tlb_index(cpu, mmu_idx, addr); tlbe = tlb_entry(cpu, mmu_idx, addr); /* * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately, @@ -1958,7 +1941,7 @@ } } - full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index]; + full = &tlbe->tree->full; /* * Let the guest notice RMW on a write-only page.
From patchwork Wed Oct 9 15:08:53 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 21/23] accel/tcg: Remove CPUTLBDesc.fulltlb
Date: Wed, 9 Oct 2024 08:08:53 -0700
Message-ID: <20241009150855.804605-22-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

This array is now write-only, and may be removed.

Signed-off-by: Richard Henderson
Reviewed-by: Pierrick Bouvier
---
include/hw/core/cpu.h | 1 - accel/tcg/cputlb.c | 39 ++++++++------------------------- 2 files changed, 8 insertions(+), 32 deletions(-) diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h index 6b1c2bfadd..3022529733 100644 --- a/include/hw/core/cpu.h +++ b/include/hw/core/cpu.h @@ -216,7 +216,6 @@ typedef struct CPUTLBDesc { /* maximum number of entries observed in the window */ size_t window_max_entries; size_t n_used_entries; - CPUTLBEntryFull *fulltlb; /* All active tlb entries for this address space. 
*/ IntervalTreeRoot iroot; } CPUTLBDesc; diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 2a8d1b4fb2..47b9557bb8 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -149,13 +149,6 @@ static inline CPUTLBEntry *tlbfast_entry(CPUTLBDescFast *fast, vaddr addr) return fast->table + tlbfast_index(fast, addr); } -/* Find the TLB index corresponding to the mmu_idx + address pair. */ -static inline uintptr_t tlb_index(CPUState *cpu, uintptr_t mmu_idx, - vaddr addr) -{ - return tlbfast_index(&cpu->neg.tlb.f[mmu_idx], addr); -} - /* Find the TLB entry corresponding to the mmu_idx + address pair. */ static inline CPUTLBEntry *tlb_entry(CPUState *cpu, uintptr_t mmu_idx, vaddr addr) @@ -270,22 +263,20 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast, } g_free(fast->table); - g_free(desc->fulltlb); tlb_window_reset(desc, now, 0); /* desc->n_used_entries is cleared by the caller */ fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS; fast->table = g_try_new(CPUTLBEntry, new_size); - desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size); /* - * If the allocations fail, try smaller sizes. We just freed some + * If the allocation fails, try smaller sizes. We just freed some * memory, so going back to half of new_size has a good chance of working. * Increased memory pressure elsewhere in the system might cause the * allocations to fail though, so we progressively reduce the allocation * size, aborting if we cannot even allocate the smallest TLB we support. 
*/ - while (fast->table == NULL || desc->fulltlb == NULL) { + while (fast->table == NULL) { if (new_size == (1 << CPU_TLB_DYN_MIN_BITS)) { error_report("%s: %s", __func__, strerror(errno)); abort(); @@ -294,9 +285,7 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast, fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS; g_free(fast->table); - g_free(desc->fulltlb); fast->table = g_try_new(CPUTLBEntry, new_size); - desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size); } } @@ -350,7 +339,6 @@ static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast, int64_t now) desc->n_used_entries = 0; fast->mask = (n_entries - 1) << CPU_TLB_ENTRY_BITS; fast->table = g_new(CPUTLBEntry, n_entries); - desc->fulltlb = g_new(CPUTLBEntryFull, n_entries); memset(&desc->iroot, 0, sizeof(desc->iroot)); tlb_mmu_flush_locked(desc, fast); } @@ -382,15 +370,9 @@ void tlb_init(CPUState *cpu) void tlb_destroy(CPUState *cpu) { - int i; - qemu_spin_destroy(&cpu->neg.tlb.c.lock); - for (i = 0; i < NB_MMU_MODES; i++) { - CPUTLBDesc *desc = &cpu->neg.tlb.d[i]; - CPUTLBDescFast *fast = &cpu->neg.tlb.f[i]; - - g_free(fast->table); - g_free(desc->fulltlb); + for (int i = 0; i < NB_MMU_MODES; i++) { + g_free(cpu->neg.tlb.f[i].table); interval_tree_free_nodes(&cpu->neg.tlb.d[i].iroot, offsetof(CPUTLBEntryTree, itree)); } @@ -1090,7 +1072,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, CPUTLB *tlb = &cpu->neg.tlb; CPUTLBDesc *desc = &tlb->d[mmu_idx]; MemoryRegionSection *section; - unsigned int index, read_flags, write_flags; + unsigned int read_flags, write_flags; uintptr_t addend; CPUTLBEntry *te; CPUTLBEntryTree *node; @@ -1169,7 +1151,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, wp_flags = cpu_watchpoint_address_matches(cpu, addr_page, TARGET_PAGE_SIZE); - index = tlb_index(cpu, mmu_idx, addr_page); te = tlb_entry(cpu, mmu_idx, addr_page); /* @@ -1208,8 +1189,8 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, * subtract here is that of the page base, 
and not the same as the * vaddr we add back in io_prepare()/get_page_addr_code(). */ - desc->fulltlb[index] = *full; - full = &desc->fulltlb[index]; + node->full = *full; + full = &node->full; full->xlat_section = iotlb - addr_page; full->phys_addr = paddr_page; @@ -1232,7 +1213,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, tlb_set_compare(full, &node->copy, addr_page, write_flags, MMU_DATA_STORE, prot & PAGE_WRITE); - node->full = *full; copy_tlb_helper_locked(te, &node->copy); tlb_n_used_entries_inc(cpu, mmu_idx); qemu_spin_unlock(&tlb->c.lock); @@ -1343,7 +1323,6 @@ static bool tlbtree_hit(CPUState *cpu, int mmu_idx, CPUTLBDesc *desc = &cpu->neg.tlb.d[mmu_idx]; CPUTLBDescFast *fast = &cpu->neg.tlb.f[mmu_idx]; CPUTLBEntryTree *node; - size_t index; assert_cpu_is_self(cpu); node = tlbtree_lookup_addr(desc, addr); @@ -1358,12 +1337,10 @@ static bool tlbtree_hit(CPUState *cpu, int mmu_idx, } /* Install the cached entry. */ - index = tlbfast_index(fast, addr); qemu_spin_lock(&cpu->neg.tlb.c.lock); - copy_tlb_helper_locked(&fast->table[index], &node->copy); + copy_tlb_helper_locked(tlbfast_entry(fast, addr), &node->copy); qemu_spin_unlock(&cpu->neg.tlb.c.lock); - desc->fulltlb[index] = node->full; return true; } From patchwork Wed Oct 9 15:08:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13828629 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 18149CEDDA4 for ; Wed, 9 Oct 2024 15:12:36 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1syYJV-0001ZT-26; Wed, 09 Oct 2024 11:09:21 -0400 Received: 
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [NOTYET PATCH 22/23] accel/tcg: Drop TCGCPUOps.tlb_fill
Date: Wed, 9 Oct 2024 08:08:54 -0700
Message-ID: <20241009150855.804605-23-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

Now that all targets have been converted to tlb_fill_align,
remove the tlb_fill hook.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier
---
 include/hw/core/tcg-cpu-ops.h | 10 ----------
 accel/tcg/cputlb.c            | 19 ++++---------------
 2 files changed, 4 insertions(+), 25 deletions(-)

diff --git a/include/hw/core/tcg-cpu-ops.h b/include/hw/core/tcg-cpu-ops.h
index c932690621..e73c8a03de 100644
--- a/include/hw/core/tcg-cpu-ops.h
+++ b/include/hw/core/tcg-cpu-ops.h
@@ -157,16 +157,6 @@ struct TCGCPUOps {
     bool (*tlb_fill_align)(CPUState *cpu, CPUTLBEntryFull *out, vaddr addr,
                            MMUAccessType access_type, int mmu_idx,
                            MemOp memop, int size, bool probe, uintptr_t ra);
-    /**
-     * @tlb_fill: Handle a softmmu tlb miss
-     *
-     * If the access is valid, call tlb_set_page and return true;
-     * if the access is invalid and probe is true, return false;
-     * otherwise raise an exception and do not return.
-     */
-    bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
-                     MMUAccessType access_type, int mmu_idx,
-                     bool probe, uintptr_t retaddr);
     /**
      * @do_transaction_failed: Callback for handling failed memory transactions
      * (ie bus faults or external aborts; not MMU faults)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 47b9557bb8..55c7bf737b 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1251,23 +1251,12 @@ static bool tlb_fill_align(CPUState *cpu, vaddr addr, MMUAccessType type,
                            int mmu_idx, MemOp memop, int size,
                            bool probe, uintptr_t ra)
 {
-    const TCGCPUOps *ops = cpu->cc->tcg_ops;
     CPUTLBEntryFull full;

-    if (ops->tlb_fill_align) {
-        if (ops->tlb_fill_align(cpu, &full, addr, type, mmu_idx,
-                                memop, size, probe, ra)) {
-            tlb_set_page_full(cpu, mmu_idx, addr, &full);
-            return true;
-        }
-    } else {
-        /* Legacy behaviour is alignment before paging. */
-        if (addr & ((1u << memop_alignment_bits(memop)) - 1)) {
-            ops->do_unaligned_access(cpu, addr, type, mmu_idx, ra);
-        }
-        if (ops->tlb_fill(cpu, addr, size, type, mmu_idx, probe, ra)) {
-            return true;
-        }
+    if (cpu->cc->tcg_ops->tlb_fill_align(cpu, &full, addr, type, mmu_idx,
+                                         memop, size, probe, ra)) {
+        tlb_set_page_full(cpu, mmu_idx, addr, &full);
+        return true;
     }
     assert(probe);
     return false;

From patchwork Wed Oct 9 15:08:55 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13828628
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [NOTYET PATCH 23/23] accel/tcg: Unexport tlb_set_page*
Date: Wed, 9 Oct 2024 08:08:55 -0700
Message-ID: <20241009150855.804605-24-richard.henderson@linaro.org>
In-Reply-To: <20241009150855.804605-1-richard.henderson@linaro.org>
References: <20241009150855.804605-1-richard.henderson@linaro.org>

The new tlb_fill_align hook returns page data via structure
rather than by function call, so we can make tlb_set_page_full
be local to cputlb.c.  There are no users of tlb_set_page
or tlb_set_page_with_attrs, so those can be eliminated.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/exec-all.h | 57 -----------------------------------------
 accel/tcg/cputlb.c      | 27 ++-----------------
 2 files changed, 2 insertions(+), 82 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 72240ef426..8e2ab26902 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -184,63 +184,6 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                vaddr len, uint16_t idxmap,
                                                unsigned bits);
-
-/**
- * tlb_set_page_full:
- * @cpu: CPU context
- * @mmu_idx: mmu index of the tlb to modify
- * @addr: virtual address of the entry to add
- * @full: the details of the tlb entry
- *
- * Add an entry to @cpu tlb index @mmu_idx.  All of the fields of
- * @full must be filled, except for xlat_section, and constitute
- * the complete description of the translated page.
- *
- * This is generally called by the target tlb_fill function after
- * having performed a successful page table walk to find the physical
- * address and attributes for the translation.
- *
- * At most one entry for a given virtual address is permitted.  Only a
- * single TARGET_PAGE_SIZE region is mapped; @full->lg_page_size is only
- * used by tlb_flush_page.
- */
-void tlb_set_page_full(CPUState *cpu, int mmu_idx, vaddr addr,
-                       CPUTLBEntryFull *full);
-
-/**
- * tlb_set_page_with_attrs:
- * @cpu: CPU to add this TLB entry for
- * @addr: virtual address of page to add entry for
- * @paddr: physical address of the page
- * @attrs: memory transaction attributes
- * @prot: access permissions (PAGE_READ/PAGE_WRITE/PAGE_EXEC bits)
- * @mmu_idx: MMU index to insert TLB entry for
- * @size: size of the page in bytes
- *
- * Add an entry to this CPU's TLB (a mapping from virtual address
- * @addr to physical address @paddr) with the specified memory
- * transaction attributes.  This is generally called by the target CPU
- * specific code after it has been called through the tlb_fill()
- * entry point and performed a successful page table walk to find
- * the physical address and attributes for the virtual address
- * which provoked the TLB miss.
- *
- * At most one entry for a given virtual address is permitted.  Only a
- * single TARGET_PAGE_SIZE region is mapped; the supplied @size is only
- * used by tlb_flush_page.
- */
-void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
-                             hwaddr paddr, MemTxAttrs attrs,
-                             int prot, int mmu_idx, vaddr size);
-
-/* tlb_set_page:
- *
- * This function is equivalent to calling tlb_set_page_with_attrs()
- * with an @attrs argument of MEMTXATTRS_UNSPECIFIED.  It's provided
- * as a convenience for CPUs which don't use memory transaction attributes.
- */
-void tlb_set_page(CPUState *cpu, vaddr addr,
-                  hwaddr paddr, int prot,
-                  int mmu_idx, vaddr size);
 #else
 static inline void tlb_init(CPUState *cpu)
 {
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 55c7bf737b..5efd6e536c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1066,8 +1066,8 @@ static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent,
  * Called from TCG-generated code, which is under an RCU read-side
  * critical section.
  */
-void tlb_set_page_full(CPUState *cpu, int mmu_idx,
-                       vaddr addr, CPUTLBEntryFull *full)
+static void tlb_set_page_full(CPUState *cpu, int mmu_idx,
+                              vaddr addr, CPUTLBEntryFull *full)
 {
     CPUTLB *tlb = &cpu->neg.tlb;
     CPUTLBDesc *desc = &tlb->d[mmu_idx];
@@ -1218,29 +1218,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     qemu_spin_unlock(&tlb->c.lock);
 }

-void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
-                             hwaddr paddr, MemTxAttrs attrs, int prot,
-                             int mmu_idx, uint64_t size)
-{
-    CPUTLBEntryFull full = {
-        .phys_addr = paddr,
-        .attrs = attrs,
-        .prot = prot,
-        .lg_page_size = ctz64(size)
-    };
-
-    assert(is_power_of_2(size));
-    tlb_set_page_full(cpu, mmu_idx, addr, &full);
-}
-
-void tlb_set_page(CPUState *cpu, vaddr addr,
-                  hwaddr paddr, int prot,
-                  int mmu_idx, uint64_t size)
-{
-    tlb_set_page_with_attrs(cpu, addr, paddr, MEMTXATTRS_UNSPECIFIED,
-                            prot, mmu_idx, size);
-}
-
 /*
  * Note: tlb_fill_align() can trigger a resize of the TLB.
  * This means that all of the caller's prior references to the TLB table