From patchwork Sun Jul 14 11:10:31 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13732670
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Clément Chigot, Richard Henderson
Subject: [PULL 01/13] target/i386/tcg: fix POP to memory in long mode
Date: Sun, 14 Jul 2024 13:10:31 +0200
Message-ID: <20240714111043.14132-2-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>

In long mode, POP to memory will write a full 64-bit value.  However,
the call to gen_writeback() in gen_POP will use MO_32 because the
decoding table is incorrect.

The bug was latent until commit aea49fbb01a ("target/i386: use
gen_writeback() within gen_POP()", 2024-06-08), and then became visible
because gen_op_st_v now receives op->ot instead of the "ot" returned by
gen_pop_T0.
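As an illustration of the affected operation, here is a minimal user-mode
reproducer sketch; it is an editorial addition, not part of the patch or of
QEMU's test suite, and it assumes a 64-bit guest with GCC-style inline
assembly:

    /* In long mode, POP with a memory destination (opcode 0x8F, group 1A)
     * must store all 64 bits; with the wrong "E,v" table entry the
     * writeback was truncated to 32 bits. */
    #include <assert.h>
    #include <stdint.h>

    static uint64_t dst;

    int main(void)
    {
        uint64_t src = 0x1122334455667788ULL;

        asm volatile("pushq %1\n\t"
                     "popq  %0"        /* POP to memory */
                     : "=m"(dst)
                     : "r"(src));

        assert(dst == src);   /* fails if only the low 32 bits were stored */
        return 0;
    }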
Analyzed-by: Clément Chigot
Fixes: 5e9e21bcc4d ("target/i386: move 60-BF opcodes to new decoder", 2024-05-07)
Tested-by: Clément Chigot
Reviewed-by: Richard Henderson
Signed-off-by: Paolo Bonzini
---
 target/i386/tcg/decode-new.c.inc | 2 +-
 target/i386/tcg/emit.c.inc       | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index 0d846c32c22..d2da1d396d5 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -1717,7 +1717,7 @@ static const X86OpEntry opcodes_root[256] = {
     [0x8C] = X86_OP_ENTRYwr(MOV, E,v, S,w, op0_Mw),
     [0x8D] = X86_OP_ENTRYwr(LEA, G,v, M,v, nolea),
     [0x8E] = X86_OP_ENTRYwr(MOV, S,w, E,w),
-    [0x8F] = X86_OP_GROUPw(group1A, E,v),
+    [0x8F] = X86_OP_GROUPw(group1A, E,d64),
 
     [0x98] = X86_OP_ENTRY1(CBW, 0,v), /* rAX */
     [0x99] = X86_OP_ENTRYwr(CWD, 2,v, 0,v), /* rDX, rAX */
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index fc7477833bc..016dce81464 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -2788,6 +2788,7 @@ static void gen_POP(DisasContext *s, X86DecodedInsn *decode)
     X86DecodedOp *op = &decode->op[0];
     MemOp ot = gen_pop_T0(s);
 
+    assert(ot >= op->ot);
     if (op->has_ea || op->unit == X86_OP_SEG) {
         /* NOTE: order is important for MMU exceptions */
         gen_writeback(s, decode, 0, s->T0);

From patchwork Sun Jul 14 11:10:32 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13732676
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Richard Henderson
Subject: [PULL 02/13] target/i386/tcg: Remove SEG_ADDL
Date: Sun, 14 Jul 2024 13:10:32 +0200
Message-ID: <20240714111043.14132-3-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>

From: Richard Henderson

The 32-bit truncation done by SEG_ADDL is now handled by MMU_*32_IDX.
The introduction of MMU_*32_IDX in fact applied correct 32-bit
wraparound to 16-bit accesses with a high segment base (e.g. big real
mode or vm86 mode), which did not use SEG_ADDL.
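As a worked example of the wraparound in question (an editorial illustration
with made-up numbers, not taken from the patch):

    /* A 32-bit stack access must wrap at 4 GiB.  SEG_ADDL used to force the
     * wrap with an explicit cast; the same effect now comes from performing
     * the access through one of the MMU_*32_IDX MMU indexes. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ssp = 0xfffff000;  /* segment base close to 4 GiB */
        uint64_t sp  = 0x2000;      /* stack offset, already masked with sp_mask */

        printf("unwrapped: %#llx\n", (unsigned long long)(ssp + sp)); /* 0x100001000 */
        printf("wrapped:   %#x\n", (uint32_t)(ssp + sp));             /* 0x1000 */
        return 0;
    }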
Signed-off-by: Richard Henderson
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini
---
 target/i386/tcg/seg_helper.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index aee3d19f29b..19d6b41a589 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -579,10 +579,6 @@ int exception_has_error_code(int intno)
     } while (0)
 #endif
 
-/* in 64-bit machines, this can overflow. So this segment addition macro
- * can be used to trim the value to 32-bit whenever needed */
-#define SEG_ADDL(ssp, sp, sp_mask) ((uint32_t)((ssp) + (sp & (sp_mask))))
-
 /* XXX: add a is_user flag to have proper security support */
 #define PUSHW_RA(ssp, sp, sp_mask, val, ra)                      \
     {                                                            \
         sp -= 2;                                                 \
@@ -593,7 +589,7 @@ int exception_has_error_code(int intno)
 #define PUSHL_RA(ssp, sp, sp_mask, val, ra)                      \
     {                                                            \
         sp -= 4;                                                 \
-        cpu_stl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), (uint32_t)(val), ra); \
+        cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \
     }
 
 #define POPW_RA(ssp, sp, sp_mask, val, ra)                       \
@@ -604,7 +600,7 @@ int exception_has_error_code(int intno)
 
 #define POPL_RA(ssp, sp, sp_mask, val, ra)                       \
     {                                                            \
-        val = (uint32_t)cpu_ldl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), ra); \
+        val = (uint32_t)cpu_ldl_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
         sp += 4;                                                 \
     }
 

From patchwork Sun Jul 14 11:10:33 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13732675
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Robert R. Henry, Richard Henderson
Subject: [PULL 03/13] target/i386/tcg: Allow IRET from user mode to user mode with SMAP
Date: Sun, 14 Jul 2024 13:10:33 +0200
Message-ID: <20240714111043.14132-4-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>

This fixes a bug wherein i386/tcg assumed an interrupt return using the
IRET instruction was always returning from kernel mode to either kernel
mode or user mode.  This assumption is violated when IRET is used as a
clever way to restore thread state, as for example in the dotnet
runtime.  There, IRET returns from user mode to user mode.

The bug is that stack accesses from IRET and RETF, as well as accesses
to the parameters in a call gate, should be normal data accesses using
the current CPL, but were performed as kernel-mode accesses.  This
manifested itself as a page fault in the guest Linux kernel due to SMAP
preventing the access.

This bug appears to have been in QEMU since the beginning.
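For reference, an editorial sketch of the stack frame involved (the struct
name is made up; the layout is the architectural 64-bit IRET frame).  When
IRET is used for such a user-to-user context switch, all five slots live on
user pages, so the emulated reads must be ordinary CPL-3 data accesses:

    #include <stdint.h>

    /* The five 8-byte slots popped by IRET in 64-bit mode. */
    struct iret_frame64 {
        uint64_t rip;     /* new instruction pointer */
        uint64_t cs;      /* user code segment selector, RPL 3 */
        uint64_t rflags;
        uint64_t rsp;     /* new stack pointer */
        uint64_t ss;      /* user stack segment selector, RPL 3 */
    };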
Analyzed-by: Robert R. Henry
Co-developed-by: Robert R. Henry
Signed-off-by: Robert R. Henry
Reviewed-by: Richard Henderson
Signed-off-by: Paolo Bonzini
---
 target/i386/tcg/seg_helper.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index 19d6b41a589..224e73e9ed0 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -594,13 +594,13 @@ int exception_has_error_code(int intno)
 
 #define POPW_RA(ssp, sp, sp_mask, val, ra)                       \
     {                                                            \
-        val = cpu_lduw_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
+        val = cpu_lduw_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \
         sp += 2;                                                 \
     }
 
 #define POPL_RA(ssp, sp, sp_mask, val, ra)                       \
     {                                                            \
-        val = (uint32_t)cpu_ldl_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
+        val = (uint32_t)cpu_ldl_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \
         sp += 4;                                                 \
     }
 
@@ -847,7 +847,7 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
 
 #define POPQ_RA(sp, val, ra)                    \
     {                                           \
-        val = cpu_ldq_kernel_ra(env, sp, ra);   \
+        val = cpu_ldq_data_ra(env, sp, ra);     \
         sp += 8;                                \
     }
 
@@ -1797,18 +1797,18 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
             PUSHL_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC());
             PUSHL_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC());
             for (i = param_count - 1; i >= 0; i--) {
-                val = cpu_ldl_kernel_ra(env, old_ssp +
-                                        ((env->regs[R_ESP] + i * 4) &
-                                         old_sp_mask), GETPC());
+                val = cpu_ldl_data_ra(env,
+                                      old_ssp + ((env->regs[R_ESP] + i * 4) & old_sp_mask),
+                                      GETPC());
                 PUSHL_RA(ssp, sp, sp_mask, val, GETPC());
             }
         } else {
             PUSHW_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC());
             PUSHW_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC());
             for (i = param_count - 1; i >= 0; i--) {
-                val = cpu_lduw_kernel_ra(env, old_ssp +
-                                         ((env->regs[R_ESP] + i * 2) &
-                                          old_sp_mask), GETPC());
+                val = cpu_lduw_data_ra(env,
+                                       old_ssp + ((env->regs[R_ESP] + i * 2) & old_sp_mask),
+                                       GETPC());
                 PUSHW_RA(ssp, sp, sp_mask, val, GETPC());
             }
         }

From patchwork Sun Jul 14 11:10:34 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13732668
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Richard Henderson
Subject: [PULL 04/13] target/i386/tcg: use PUSHL/PUSHW for error code
Date: Sun, 14 Jul 2024 13:10:34 +0200
Message-ID: <20240714111043.14132-5-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>
Do not pre-decrement esp, let the macros subtract the appropriate
operand size.

Reviewed-by: Richard Henderson
Signed-off-by: Paolo Bonzini
---
 target/i386/tcg/seg_helper.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index 224e73e9ed0..b985382d704 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -670,22 +670,20 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
         }
         shift = switch_tss(env, intno * 8, e1, e2, SWITCH_TSS_CALL, old_eip);
         if (has_error_code) {
-            uint32_t mask;
-
             /* push the error code */
             if (env->segs[R_SS].flags & DESC_B_MASK) {
-                mask = 0xffffffff;
+                sp_mask = 0xffffffff;
             } else {
-                mask = 0xffff;
+                sp_mask = 0xffff;
             }
-            esp = (env->regs[R_ESP] - (2 << shift)) & mask;
-            ssp = env->segs[R_SS].base + esp;
+            esp = env->regs[R_ESP];
+            ssp = env->segs[R_SS].base;
             if (shift) {
-                cpu_stl_kernel(env, ssp, error_code);
+                PUSHL(ssp, esp, sp_mask, error_code);
             } else {
-                cpu_stw_kernel(env, ssp, error_code);
+                PUSHW(ssp, esp, sp_mask, error_code);
            }
-            SET_ESP(esp, mask);
+            SET_ESP(esp, sp_mask);
         }
         return;
     }

From patchwork Sun Jul 14 11:10:35 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13732666
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Richard Henderson
Subject: [PULL 05/13] target/i386/tcg: Reorg push/pop within seg_helper.c
Date: Sun, 14 Jul 2024 13:10:35 +0200
Message-ID: <20240714111043.14132-6-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>

From: Richard Henderson

Interrupts and call gates should use accesses with the DPL as the
privilege level.  While computing the applicable MMU index is easy,
the harder thing is how to plumb it in the code.

One possibility could be to add a single argument to the PUSH* macros
for the privilege level, but this is repetitive and risks confusion
between the involved privilege levels.

Another possibility is to pass both CPL and DPL, and to adjust both
PUSH* and POP* to use specific privilege levels (instead of using
cpu_{ld,st}*_data).  This makes the code more symmetric.

However, a more complicated but much nicer approach is to use a
structure to contain the stack parameters, env, and the unwind return
address, and to rewrite the macros into functions.  The struct provides
an easy home for the MMU index as well.
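The resulting call-site pattern, condensed from the diff below (this summary
itself is an editorial addition), looks like this:

    /* Fill a StackAccess once; the push/pop helpers then carry env, the
     * unwind return address, the stack base and the stack-pointer mask
     * implicitly (excerpt modelled on helper_lcall_real below). */
    StackAccess sa;

    sa.env = env;
    sa.ra = GETPC();
    sa.sp = env->regs[R_ESP];
    sa.sp_mask = get_sp_mask(env->segs[R_SS].flags);
    sa.ss_base = env->segs[R_SS].base;

    pushl(&sa, env->segs[R_CS].selector);
    pushl(&sa, next_eip);
    SET_ESP(sa.sp, sa.sp_mask);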
Signed-off-by: Richard Henderson Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 481 +++++++++++++++++++---------------- 1 file changed, 259 insertions(+), 222 deletions(-) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index b985382d704..b6902ca3fba 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -28,6 +28,68 @@ #include "helper-tcg.h" #include "seg_helper.h" +#ifdef TARGET_X86_64 +#define SET_ESP(val, sp_mask) \ + do { \ + if ((sp_mask) == 0xffff) { \ + env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | \ + ((val) & 0xffff); \ + } else if ((sp_mask) == 0xffffffffLL) { \ + env->regs[R_ESP] = (uint32_t)(val); \ + } else { \ + env->regs[R_ESP] = (val); \ + } \ + } while (0) +#else +#define SET_ESP(val, sp_mask) \ + do { \ + env->regs[R_ESP] = (env->regs[R_ESP] & ~(sp_mask)) | \ + ((val) & (sp_mask)); \ + } while (0) +#endif + +/* XXX: use mmu_index to have proper DPL support */ +typedef struct StackAccess +{ + CPUX86State *env; + uintptr_t ra; + target_ulong ss_base; + target_ulong sp; + target_ulong sp_mask; +} StackAccess; + +static void pushw(StackAccess *sa, uint16_t val) +{ + sa->sp -= 2; + cpu_stw_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->ra); +} + +static void pushl(StackAccess *sa, uint32_t val) +{ + sa->sp -= 4; + cpu_stl_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->ra); +} + +static uint16_t popw(StackAccess *sa) +{ + uint16_t ret = cpu_lduw_data_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->ra); + sa->sp += 2; + return ret; +} + +static uint32_t popl(StackAccess *sa) +{ + uint32_t ret = cpu_ldl_data_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->ra); + sa->sp += 4; + return ret; +} + int get_pg_mode(CPUX86State *env) { int pg_mode = 0; @@ -559,68 +621,19 @@ int exception_has_error_code(int intno) return 0; } -#ifdef TARGET_X86_64 -#define SET_ESP(val, sp_mask) \ - do { \ - if ((sp_mask) == 0xffff) { \ - env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | \ - ((val) & 0xffff); \ - } else if ((sp_mask) == 0xffffffffLL) { \ - env->regs[R_ESP] = (uint32_t)(val); \ - } else { \ - env->regs[R_ESP] = (val); \ - } \ - } while (0) -#else -#define SET_ESP(val, sp_mask) \ - do { \ - env->regs[R_ESP] = (env->regs[R_ESP] & ~(sp_mask)) | \ - ((val) & (sp_mask)); \ - } while (0) -#endif - -/* XXX: add a is_user flag to have proper security support */ -#define PUSHW_RA(ssp, sp, sp_mask, val, ra) \ - { \ - sp -= 2; \ - cpu_stw_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \ - } - -#define PUSHL_RA(ssp, sp, sp_mask, val, ra) \ - { \ - sp -= 4; \ - cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \ - } - -#define POPW_RA(ssp, sp, sp_mask, val, ra) \ - { \ - val = cpu_lduw_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \ - sp += 2; \ - } - -#define POPL_RA(ssp, sp, sp_mask, val, ra) \ - { \ - val = (uint32_t)cpu_ldl_data_ra(env, (ssp) + (sp & (sp_mask)), ra); \ - sp += 4; \ - } - -#define PUSHW(ssp, sp, sp_mask, val) PUSHW_RA(ssp, sp, sp_mask, val, 0) -#define PUSHL(ssp, sp, sp_mask, val) PUSHL_RA(ssp, sp, sp_mask, val, 0) -#define POPW(ssp, sp, sp_mask, val) POPW_RA(ssp, sp, sp_mask, val, 0) -#define POPL(ssp, sp, sp_mask, val) POPL_RA(ssp, sp, sp_mask, val, 0) - /* protected mode interrupt */ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, int error_code, unsigned int next_eip, int is_hw) { SegmentCache *dt; - target_ulong ptr, ssp; 
+ target_ulong ptr; int type, dpl, selector, ss_dpl, cpl; int has_error_code, new_stack, shift; - uint32_t e1, e2, offset, ss = 0, esp, ss_e1 = 0, ss_e2 = 0; - uint32_t old_eip, sp_mask, eflags; + uint32_t e1, e2, offset, ss = 0, ss_e1 = 0, ss_e2 = 0; + uint32_t old_eip, eflags; int vm86 = env->eflags & VM_MASK; + StackAccess sa; bool set_rf; has_error_code = 0; @@ -662,6 +675,9 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0D_GPF, intno * 8 + 2); } + sa.env = env; + sa.ra = 0; + if (type == 5) { /* task gate */ /* must do that check here to return the correct error code */ @@ -672,18 +688,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, if (has_error_code) { /* push the error code */ if (env->segs[R_SS].flags & DESC_B_MASK) { - sp_mask = 0xffffffff; + sa.sp_mask = 0xffffffff; } else { - sp_mask = 0xffff; + sa.sp_mask = 0xffff; } - esp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; if (shift) { - PUSHL(ssp, esp, sp_mask, error_code); + pushl(&sa, error_code); } else { - PUSHW(ssp, esp, sp_mask, error_code); + pushw(&sa, error_code); } - SET_ESP(esp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); } return; } @@ -717,6 +733,7 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, } if (dpl < cpl) { /* to inner privilege */ + uint32_t esp; get_ss_esp_from_tss(env, &ss, &esp, dpl, 0); if ((ss & 0xfffc) == 0) { raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc); @@ -740,17 +757,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc); } new_stack = 1; - sp_mask = get_sp_mask(ss_e2); - ssp = get_seg_base(ss_e1, ss_e2); + sa.sp = esp; + sa.sp_mask = get_sp_mask(ss_e2); + sa.ss_base = get_seg_base(ss_e1, ss_e2); } else { /* to same privilege */ if (vm86) { raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc); } new_stack = 0; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; - esp = env->regs[R_ESP]; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; } shift = type >> 3; @@ -775,36 +793,36 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, if (shift == 1) { if (new_stack) { if (vm86) { - PUSHL(ssp, esp, sp_mask, env->segs[R_GS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_FS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_DS].selector); - PUSHL(ssp, esp, sp_mask, env->segs[R_ES].selector); + pushl(&sa, env->segs[R_GS].selector); + pushl(&sa, env->segs[R_FS].selector); + pushl(&sa, env->segs[R_DS].selector); + pushl(&sa, env->segs[R_ES].selector); } - PUSHL(ssp, esp, sp_mask, env->segs[R_SS].selector); - PUSHL(ssp, esp, sp_mask, env->regs[R_ESP]); + pushl(&sa, env->segs[R_SS].selector); + pushl(&sa, env->regs[R_ESP]); } - PUSHL(ssp, esp, sp_mask, eflags); - PUSHL(ssp, esp, sp_mask, env->segs[R_CS].selector); - PUSHL(ssp, esp, sp_mask, old_eip); + pushl(&sa, eflags); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, old_eip); if (has_error_code) { - PUSHL(ssp, esp, sp_mask, error_code); + pushl(&sa, error_code); } } else { if (new_stack) { if (vm86) { - PUSHW(ssp, esp, sp_mask, env->segs[R_GS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_FS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_DS].selector); - PUSHW(ssp, esp, sp_mask, env->segs[R_ES].selector); + pushw(&sa, env->segs[R_GS].selector); + pushw(&sa, 
env->segs[R_FS].selector); + pushw(&sa, env->segs[R_DS].selector); + pushw(&sa, env->segs[R_ES].selector); } - PUSHW(ssp, esp, sp_mask, env->segs[R_SS].selector); - PUSHW(ssp, esp, sp_mask, env->regs[R_ESP]); + pushw(&sa, env->segs[R_SS].selector); + pushw(&sa, env->regs[R_ESP]); } - PUSHW(ssp, esp, sp_mask, eflags); - PUSHW(ssp, esp, sp_mask, env->segs[R_CS].selector); - PUSHW(ssp, esp, sp_mask, old_eip); + pushw(&sa, eflags); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, old_eip); if (has_error_code) { - PUSHW(ssp, esp, sp_mask, error_code); + pushw(&sa, error_code); } } @@ -822,10 +840,10 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, cpu_x86_load_seg_cache(env, R_GS, 0, 0, 0, 0); } ss = (ss & ~3) | dpl; - cpu_x86_load_seg_cache(env, R_SS, ss, - ssp, get_seg_limit(ss_e1, ss_e2), ss_e2); + cpu_x86_load_seg_cache(env, R_SS, ss, sa.ss_base, + get_seg_limit(ss_e1, ss_e2), ss_e2); } - SET_ESP(esp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); selector = (selector & ~3) | dpl; cpu_x86_load_seg_cache(env, R_CS, selector, @@ -837,20 +855,18 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, #ifdef TARGET_X86_64 -#define PUSHQ_RA(sp, val, ra) \ - { \ - sp -= 8; \ - cpu_stq_kernel_ra(env, sp, (val), ra); \ - } +static void pushq(StackAccess *sa, uint64_t val) +{ + sa->sp -= 8; + cpu_stq_kernel_ra(sa->env, sa->sp, val, sa->ra); +} -#define POPQ_RA(sp, val, ra) \ - { \ - val = cpu_ldq_data_ra(env, sp, ra); \ - sp += 8; \ - } - -#define PUSHQ(sp, val) PUSHQ_RA(sp, val, 0) -#define POPQ(sp, val) POPQ_RA(sp, val, 0) +static uint64_t popq(StackAccess *sa) +{ + uint64_t ret = cpu_ldq_data_ra(sa->env, sa->sp, sa->ra); + sa->sp += 8; + return ret; +} static inline target_ulong get_rsp_from_tss(CPUX86State *env, int level) { @@ -893,8 +909,9 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, int type, dpl, selector, cpl, ist; int has_error_code, new_stack; uint32_t e1, e2, e3, ss, eflags; - target_ulong old_eip, esp, offset; + target_ulong old_eip, offset; bool set_rf; + StackAccess sa; has_error_code = 0; if (!is_int && !is_hw) { @@ -962,10 +979,15 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, if (e2 & DESC_C_MASK) { dpl = cpl; } + + sa.env = env; + sa.ra = 0; + sa.sp_mask = -1; + sa.ss_base = 0; if (dpl < cpl || ist != 0) { /* to inner privilege */ new_stack = 1; - esp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl); + sa.sp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl); ss = 0; } else { /* to same privilege */ @@ -973,9 +995,9 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc); } new_stack = 0; - esp = env->regs[R_ESP]; + sa.sp = env->regs[R_ESP]; } - esp &= ~0xfLL; /* align stack */ + sa.sp &= ~0xfLL; /* align stack */ /* See do_interrupt_protected. 
*/ eflags = cpu_compute_eflags(env); @@ -983,13 +1005,13 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, eflags |= RF_MASK; } - PUSHQ(esp, env->segs[R_SS].selector); - PUSHQ(esp, env->regs[R_ESP]); - PUSHQ(esp, eflags); - PUSHQ(esp, env->segs[R_CS].selector); - PUSHQ(esp, old_eip); + pushq(&sa, env->segs[R_SS].selector); + pushq(&sa, env->regs[R_ESP]); + pushq(&sa, eflags); + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, old_eip); if (has_error_code) { - PUSHQ(esp, error_code); + pushq(&sa, error_code); } /* interrupt gate clear IF mask */ @@ -1002,7 +1024,7 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, ss = 0 | dpl; cpu_x86_load_seg_cache(env, R_SS, ss, 0, 0, dpl << DESC_DPL_SHIFT); } - env->regs[R_ESP] = esp; + env->regs[R_ESP] = sa.sp; selector = (selector & ~3) | dpl; cpu_x86_load_seg_cache(env, R_CS, selector, @@ -1074,10 +1096,11 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, int error_code, unsigned int next_eip) { SegmentCache *dt; - target_ulong ptr, ssp; + target_ulong ptr; int selector; - uint32_t offset, esp; + uint32_t offset; uint32_t old_cs, old_eip; + StackAccess sa; /* real mode (simpler!) */ dt = &env->idt; @@ -1087,8 +1110,13 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, ptr = dt->base + intno * 4; offset = cpu_lduw_kernel(env, ptr); selector = cpu_lduw_kernel(env, ptr + 2); - esp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + + sa.env = env; + sa.ra = 0; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = 0xffff; + sa.ss_base = env->segs[R_SS].base; + if (is_int) { old_eip = next_eip; } else { @@ -1096,12 +1124,12 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, } old_cs = env->segs[R_CS].selector; /* XXX: use SS segment size? 
*/ - PUSHW(ssp, esp, 0xffff, cpu_compute_eflags(env)); - PUSHW(ssp, esp, 0xffff, old_cs); - PUSHW(ssp, esp, 0xffff, old_eip); + pushw(&sa, cpu_compute_eflags(env)); + pushw(&sa, old_cs); + pushw(&sa, old_eip); /* update processor state */ - env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | (esp & 0xffff); + SET_ESP(sa.sp, sa.sp_mask); env->eip = offset; env->segs[R_CS].selector = selector; env->segs[R_CS].base = (selector << 4); @@ -1544,21 +1572,23 @@ void helper_ljmp_protected(CPUX86State *env, int new_cs, target_ulong new_eip, void helper_lcall_real(CPUX86State *env, uint32_t new_cs, uint32_t new_eip, int shift, uint32_t next_eip) { - uint32_t esp, esp_mask; - target_ulong ssp; + StackAccess sa; + + sa.env = env; + sa.ra = GETPC(); + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; - esp = env->regs[R_ESP]; - esp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; if (shift) { - PUSHL_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, esp, esp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, esp, esp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } - SET_ESP(esp, esp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = new_eip; env->segs[R_CS].selector = new_cs; env->segs[R_CS].base = (new_cs << 4); @@ -1570,9 +1600,10 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, { int new_stack, i; uint32_t e1, e2, cpl, dpl, rpl, selector, param_count; - uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl, sp_mask; + uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl; uint32_t val, limit, old_sp_mask; - target_ulong ssp, old_ssp, offset, sp; + target_ulong old_ssp, offset; + StackAccess sa; LOG_PCALL("lcall %04x:" TARGET_FMT_lx " s=%d\n", new_cs, new_eip, shift); LOG_PCALL_STATE(env_cpu(env)); @@ -1584,6 +1615,10 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, } cpl = env->hflags & HF_CPL_MASK; LOG_PCALL("desc=%08x:%08x\n", e1, e2); + + sa.env = env; + sa.ra = GETPC(); + if (e2 & DESC_S_MASK) { if (!(e2 & DESC_CS_MASK)) { raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); @@ -1611,14 +1646,14 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, #ifdef TARGET_X86_64 /* XXX: check 16/32 bit cases in long mode */ if (shift == 2) { - target_ulong rsp; - /* 64 bit case */ - rsp = env->regs[R_ESP]; - PUSHQ_RA(rsp, env->segs[R_CS].selector, GETPC()); - PUSHQ_RA(rsp, next_eip, GETPC()); + sa.sp = env->regs[R_ESP]; + sa.sp_mask = -1; + sa.ss_base = 0; + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, next_eip); /* from this point, not restartable */ - env->regs[R_ESP] = rsp; + env->regs[R_ESP] = sa.sp; cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); @@ -1626,15 +1661,15 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, } else #endif { - sp = env->regs[R_ESP]; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; if (shift) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + 
pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } limit = get_seg_limit(e1, e2); @@ -1642,7 +1677,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); } /* from this point, not restartable */ - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl, get_seg_base(e1, e2), limit, e2); env->eip = new_eip; @@ -1737,13 +1772,13 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, /* to inner privilege */ #ifdef TARGET_X86_64 if (shift == 2) { - sp = get_rsp_from_tss(env, dpl); ss = dpl; /* SS = NULL selector with RPL = new CPL */ new_stack = 1; - sp_mask = 0; - ssp = 0; /* SS base is always zero in IA-32e mode */ + sa.sp = get_rsp_from_tss(env, dpl); + sa.sp_mask = -1; + sa.ss_base = 0; /* SS base is always zero in IA-32e mode */ LOG_PCALL("new ss:rsp=%04x:%016llx env->regs[R_ESP]=" - TARGET_FMT_lx "\n", ss, sp, env->regs[R_ESP]); + TARGET_FMT_lx "\n", ss, sa.sp, env->regs[R_ESP]); } else #endif { @@ -1752,7 +1787,6 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, LOG_PCALL("new ss:esp=%04x:%08x param_count=%d env->regs[R_ESP]=" TARGET_FMT_lx "\n", ss, sp32, param_count, env->regs[R_ESP]); - sp = sp32; if ((ss & 0xfffc) == 0) { raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC()); } @@ -1775,63 +1809,64 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC()); } - sp_mask = get_sp_mask(ss_e2); - ssp = get_seg_base(ss_e1, ss_e2); + sa.sp = sp32; + sa.sp_mask = get_sp_mask(ss_e2); + sa.ss_base = get_seg_base(ss_e1, ss_e2); } /* push_size = ((param_count * 2) + 8) << shift; */ - old_sp_mask = get_sp_mask(env->segs[R_SS].flags); old_ssp = env->segs[R_SS].base; + #ifdef TARGET_X86_64 if (shift == 2) { /* XXX: verify if new stack address is canonical */ - PUSHQ_RA(sp, env->segs[R_SS].selector, GETPC()); - PUSHQ_RA(sp, env->regs[R_ESP], GETPC()); + pushq(&sa, env->segs[R_SS].selector); + pushq(&sa, env->regs[R_ESP]); /* parameters aren't supported for 64-bit call gates */ } else #endif if (shift == 1) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC()); + pushl(&sa, env->segs[R_SS].selector); + pushl(&sa, env->regs[R_ESP]); for (i = param_count - 1; i >= 0; i--) { val = cpu_ldl_data_ra(env, old_ssp + ((env->regs[R_ESP] + i * 4) & old_sp_mask), GETPC()); - PUSHL_RA(ssp, sp, sp_mask, val, GETPC()); + pushl(&sa, val); } } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC()); + pushw(&sa, env->segs[R_SS].selector); + pushw(&sa, env->regs[R_ESP]); for (i = param_count - 1; i >= 0; i--) { val = cpu_lduw_data_ra(env, old_ssp + ((env->regs[R_ESP] + i * 2) & old_sp_mask), GETPC()); - PUSHW_RA(ssp, sp, sp_mask, val, GETPC()); + pushw(&sa, val); } } new_stack = 1; } else { /* to same privilege */ - sp = env->regs[R_ESP]; - sp_mask = get_sp_mask(env->segs[R_SS].flags); - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.ss_base = env->segs[R_SS].base; /* push_size = (4 << shift); */ new_stack = 0; } #ifdef TARGET_X86_64 if (shift == 2) { - 
PUSHQ_RA(sp, env->segs[R_CS].selector, GETPC()); - PUSHQ_RA(sp, next_eip, GETPC()); + pushq(&sa, env->segs[R_CS].selector); + pushq(&sa, next_eip); } else #endif if (shift == 1) { - PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushl(&sa, env->segs[R_CS].selector); + pushl(&sa, next_eip); } else { - PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC()); - PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC()); + pushw(&sa, env->segs[R_CS].selector); + pushw(&sa, next_eip); } /* from this point, not restartable */ @@ -1845,7 +1880,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, { ss = (ss & ~3) | dpl; cpu_x86_load_seg_cache(env, R_SS, ss, - ssp, + sa.ss_base, get_seg_limit(ss_e1, ss_e2), ss_e2); } @@ -1856,7 +1891,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = offset; } } @@ -1864,26 +1899,28 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, /* real and vm86 mode iret */ void helper_iret_real(CPUX86State *env, int shift) { - uint32_t sp, new_cs, new_eip, new_eflags, sp_mask; - target_ulong ssp; + uint32_t new_cs, new_eip, new_eflags; int eflags_mask; + StackAccess sa; + + sa.env = env; + sa.ra = GETPC(); + sa.sp_mask = 0xffff; /* XXXX: use SS segment size? */ + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; - sp_mask = 0xffff; /* XXXX: use SS segment size? */ - sp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_eip, GETPC()); - POPL_RA(ssp, sp, sp_mask, new_cs, GETPC()); - new_cs &= 0xffff; - POPL_RA(ssp, sp, sp_mask, new_eflags, GETPC()); + new_eip = popl(&sa); + new_cs = popl(&sa) & 0xffff; + new_eflags = popl(&sa); } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_eip, GETPC()); - POPW_RA(ssp, sp, sp_mask, new_cs, GETPC()); - POPW_RA(ssp, sp, sp_mask, new_eflags, GETPC()); + new_eip = popw(&sa); + new_cs = popw(&sa); + new_eflags = popw(&sa); } - env->regs[R_ESP] = (env->regs[R_ESP] & ~sp_mask) | (sp & sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->segs[R_CS].selector = new_cs; env->segs[R_CS].base = (new_cs << 4); env->eip = new_eip; @@ -1936,47 +1973,49 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, uint32_t new_es, new_ds, new_fs, new_gs; uint32_t e1, e2, ss_e1, ss_e2; int cpl, dpl, rpl, eflags_mask, iopl; - target_ulong ssp, sp, new_eip, new_esp, sp_mask; + target_ulong new_eip, new_esp; + StackAccess sa; + + sa.env = env; + sa.ra = retaddr; #ifdef TARGET_X86_64 if (shift == 2) { - sp_mask = -1; + sa.sp_mask = -1; } else #endif { - sp_mask = get_sp_mask(env->segs[R_SS].flags); + sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); } - sp = env->regs[R_ESP]; - ssp = env->segs[R_SS].base; + sa.sp = env->regs[R_ESP]; + sa.ss_base = env->segs[R_SS].base; new_eflags = 0; /* avoid warning */ #ifdef TARGET_X86_64 if (shift == 2) { - POPQ_RA(sp, new_eip, retaddr); - POPQ_RA(sp, new_cs, retaddr); - new_cs &= 0xffff; + new_eip = popq(&sa); + new_cs = popq(&sa) & 0xffff; if (is_iret) { - POPQ_RA(sp, new_eflags, retaddr); + new_eflags = popq(&sa); } } else #endif { if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_eip, retaddr); - POPL_RA(ssp, sp, sp_mask, new_cs, retaddr); - new_cs &= 0xffff; + new_eip = popl(&sa); + new_cs = popl(&sa) & 0xffff; if (is_iret) { - POPL_RA(ssp, sp, sp_mask, 
new_eflags, retaddr); + new_eflags = popl(&sa); if (new_eflags & VM_MASK) { goto return_to_vm86; } } } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_eip, retaddr); - POPW_RA(ssp, sp, sp_mask, new_cs, retaddr); + new_eip = popw(&sa); + new_cs = popw(&sa); if (is_iret) { - POPW_RA(ssp, sp, sp_mask, new_eflags, retaddr); + new_eflags = popw(&sa); } } } @@ -2012,7 +2051,7 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, raise_exception_err_ra(env, EXCP0B_NOSEG, new_cs & 0xfffc, retaddr); } - sp += addend; + sa.sp += addend; if (rpl == cpl && (!(env->hflags & HF_CS64_MASK) || ((env->hflags & HF_CS64_MASK) && !is_iret))) { /* return to same privilege level */ @@ -2024,21 +2063,19 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, /* return to different privilege level */ #ifdef TARGET_X86_64 if (shift == 2) { - POPQ_RA(sp, new_esp, retaddr); - POPQ_RA(sp, new_ss, retaddr); - new_ss &= 0xffff; + new_esp = popq(&sa); + new_ss = popq(&sa) & 0xffff; } else #endif { if (shift == 1) { /* 32 bits */ - POPL_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ss, retaddr); - new_ss &= 0xffff; + new_esp = popl(&sa); + new_ss = popl(&sa) & 0xffff; } else { /* 16 bits */ - POPW_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPW_RA(ssp, sp, sp_mask, new_ss, retaddr); + new_esp = popw(&sa); + new_ss = popw(&sa); } } LOG_PCALL("new ss:esp=%04x:" TARGET_FMT_lx "\n", @@ -2088,14 +2125,14 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, get_seg_base(e1, e2), get_seg_limit(e1, e2), e2); - sp = new_esp; + sa.sp = new_esp; #ifdef TARGET_X86_64 if (env->hflags & HF_CS64_MASK) { - sp_mask = -1; + sa.sp_mask = -1; } else #endif { - sp_mask = get_sp_mask(ss_e2); + sa.sp_mask = get_sp_mask(ss_e2); } /* validate data segments */ @@ -2104,9 +2141,9 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, validate_seg(env, R_FS, rpl); validate_seg(env, R_GS, rpl); - sp += addend; + sa.sp += addend; } - SET_ESP(sp, sp_mask); + SET_ESP(sa.sp, sa.sp_mask); env->eip = new_eip; if (is_iret) { /* NOTE: 'cpl' is the _old_ CPL */ @@ -2126,12 +2163,12 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, return; return_to_vm86: - POPL_RA(ssp, sp, sp_mask, new_esp, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ss, retaddr); - POPL_RA(ssp, sp, sp_mask, new_es, retaddr); - POPL_RA(ssp, sp, sp_mask, new_ds, retaddr); - POPL_RA(ssp, sp, sp_mask, new_fs, retaddr); - POPL_RA(ssp, sp, sp_mask, new_gs, retaddr); + new_esp = popl(&sa); + new_ss = popl(&sa); + new_es = popl(&sa); + new_ds = popl(&sa); + new_fs = popl(&sa); + new_gs = popl(&sa); /* modify processor state */ cpu_load_eflags(env, new_eflags, TF_MASK | AC_MASK | ID_MASK | From patchwork Sun Jul 14 11:10:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732669 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EB28AC3DA42 for ; Sun, 14 Jul 2024 11:12:45 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8M-0005Gm-7Y; Sun, 14 Jul 2024 07:11:14 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by 
lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8G-0004vz-OP for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:08 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8F-000267-2u for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:08 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1720955466; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=yKHaLhM9UAuURgVWP4GssGjRnYIMl9btSsv/NKhFweE=; b=EJhyAf9BESePSVimDoTLSq++zGYaoeUznQ/pgKIWxa3mMDaoD9C5fSiSKKaVJGT7LcBL9W unDv3Sgz1/4gz5OHLiXa3JDiIDPV0F0SMPuC3S46y5Kxbea5XqDiaiwNeZ/gqPrykr6Ckk tQPC/V3gM6xJXEDNrD+ArH4dDp/4w78= Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com [209.85.128.71]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-441-BndHyV1NO-O52A9cIjjzig-1; Sun, 14 Jul 2024 07:11:04 -0400 X-MC-Unique: BndHyV1NO-O52A9cIjjzig-1 Received: by mail-wm1-f71.google.com with SMTP id 5b1f17b1804b1-42725ec6d8dso32402465e9.2 for ; Sun, 14 Jul 2024 04:11:04 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720955462; x=1721560262; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yKHaLhM9UAuURgVWP4GssGjRnYIMl9btSsv/NKhFweE=; b=wVNpe42fPe80nLzqbR8V9ussptBuE05eTF5JpX659Nr85QLs3AjKtGb+gOBaErAj81 nxb/XJoCLpztjSKQsM+u4z6oKWWFq3O6ELyfJA8WsCCHVh8NRUJpax0lQaH/0vwJLfP0 9/kMTc5MIo2ym1X2zzWLPvNZSRoNVRFE2Qgsvy1Q3NHVzCqLjm5W6KE1mb5hqKlIfjgs F6xPz84MT4tRwAS7j0Um9Tz1l3yytLnfUfVMIjmgaKGb8c//ArDZitY7rm4w2A4KLnIM csKlDx1vyh7mrnjYvCGxXjkX3rQtyvSxODsdtHcOcI7aHg2/DA/U9kKEGu1dydDbvskE ZBAg== X-Gm-Message-State: AOJu0Yzu1xpH9ok8pnn5x9w41c8hvw5NEHHMn17htmbTHhhIWsLe+7it HjnTDIUKoLdj1xsVJOIm6zx8E4qAm35S2UDq0QqhJ8mn54AYaY9W3AquOjFv1Pk+SqkHRhIjcGZ x49ozRb2yDp1VNa/69h3lof1a6uLDsNPhIEv/+xYEmCW7B/VylamD7sj8TtKbgl66THsmeKiCP3 0lztoeqRSyUAGmx9YPEzJNNWDThC6vjKJTyhsf X-Received: by 2002:a05:600c:3013:b0:426:63f1:9a1b with SMTP id 5b1f17b1804b1-426708f1c97mr124836835e9.33.1720955462575; Sun, 14 Jul 2024 04:11:02 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEuPa39c4EJJD5ZK6UOrT5ikByoKhQFw7f4dyQfg1x4TLynriXEpI4aWGeeO69Mpt2rUawZsA== X-Received: by 2002:a05:600c:3013:b0:426:63f1:9a1b with SMTP id 5b1f17b1804b1-426708f1c97mr124836735e9.33.1720955462222; Sun, 14 Jul 2024 04:11:02 -0700 (PDT) Received: from avogadro.local ([151.95.101.29]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-4279f25ac47sm83415875e9.18.2024.07.14.04.11.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 14 Jul 2024 04:11:00 -0700 (PDT) From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: Richard Henderson Subject: [PULL 06/13] target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl Date: Sun, 14 Jul 2024 13:10:36 +0200 Message-ID: <20240714111043.14132-7-pbonzini@redhat.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com> References: <20240714111043.14132-1-pbonzini@redhat.com> MIME-Version: 1.0 Received-SPF: pass client-ip=170.10.129.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com 
X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org From: Richard Henderson Disconnect mmu index computation from the current pl as stored in env->hflags. Signed-off-by: Richard Henderson Link: https://lore.kernel.org/r/20240617161210.4639-2-richard.henderson@linaro.org Signed-off-by: Paolo Bonzini --- target/i386/cpu.h | 11 ++--------- target/i386/cpu.c | 27 ++++++++++++++++++++++++--- 2 files changed, 26 insertions(+), 12 deletions(-) diff --git a/target/i386/cpu.h b/target/i386/cpu.h index c43ac01c794..1e121acef54 100644 --- a/target/i386/cpu.h +++ b/target/i386/cpu.h @@ -2445,15 +2445,8 @@ static inline bool is_mmu_index_32(int mmu_index) return mmu_index & 1; } -static inline int cpu_mmu_index_kernel(CPUX86State *env) -{ - int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1; - int mmu_index_base = - !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX : - ((env->hflags & HF_CPL_MASK) < 3 && (env->eflags & AC_MASK)) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX; - - return mmu_index_base + mmu_index_32; -} +int x86_mmu_index_pl(CPUX86State *env, unsigned pl); +int cpu_mmu_index_kernel(CPUX86State *env); #define CC_DST (env->cc_dst) #define CC_SRC (env->cc_src) diff --git a/target/i386/cpu.c b/target/i386/cpu.c index c05765eeafc..4688d140c2d 100644 --- a/target/i386/cpu.c +++ b/target/i386/cpu.c @@ -8122,18 +8122,39 @@ static bool x86_cpu_has_work(CPUState *cs) return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0; } -static int x86_cpu_mmu_index(CPUState *cs, bool ifetch) +int x86_mmu_index_pl(CPUX86State *env, unsigned pl) { - CPUX86State *env = cpu_env(cs); int mmu_index_32 = (env->hflags & HF_CS64_MASK) ? 0 : 1; int mmu_index_base = - (env->hflags & HF_CPL_MASK) == 3 ? MMU_USER64_IDX : + pl == 3 ? MMU_USER64_IDX : !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX : (env->eflags & AC_MASK) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX; return mmu_index_base + mmu_index_32; } +static int x86_cpu_mmu_index(CPUState *cs, bool ifetch) +{ + CPUX86State *env = cpu_env(cs); + return x86_mmu_index_pl(env, env->hflags & HF_CPL_MASK); +} + +static int x86_mmu_index_kernel_pl(CPUX86State *env, unsigned pl) +{ + int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1; + int mmu_index_base = + !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX : + (pl < 3 && (env->eflags & AC_MASK) + ? 
MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX); + + return mmu_index_base + mmu_index_32; +} + +int cpu_mmu_index_kernel(CPUX86State *env) +{ + return x86_mmu_index_kernel_pl(env, env->hflags & HF_CPL_MASK); +} + static void x86_disas_set_info(CPUState *cs, disassemble_info *info) { X86CPU *cpu = X86_CPU(cs); From patchwork Sun Jul 14 11:10:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732667 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7EDAAC3DA42 for ; Sun, 14 Jul 2024 11:12:08 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8O-0005Ph-FV; Sun, 14 Jul 2024 07:11:17 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8L-0005FM-Re for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:13 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8J-000285-Nz for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:13 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1720955470; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=TSrKWshh3GKRJaKaC48z20K8ZcvddsFT5215e+ccso8=; b=FLgYKSY1esF+Pl/Rk3rFjgaYL2XW6GebNX3c7SQmKY8c9zO0hjriUnHwsIhexyw69oz9/Y Q7MekHgMmPkxJgsafMl92C10J+YirMquSG/s8SkEJbuMfsVI1SKJ1qbMjD7M7vcgNHep3D 0xCncDXibSKSHkNeYzQZPU9/8HuBl+M= Received: from mail-lf1-f71.google.com (mail-lf1-f71.google.com [209.85.167.71]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-445-_cwdFf_IOZyPN2hohE1laQ-1; Sun, 14 Jul 2024 07:11:07 -0400 X-MC-Unique: _cwdFf_IOZyPN2hohE1laQ-1 Received: by mail-lf1-f71.google.com with SMTP id 2adb3069b0e04-52e9cc6a99eso3682070e87.3 for ; Sun, 14 Jul 2024 04:11:07 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720955466; x=1721560266; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=TSrKWshh3GKRJaKaC48z20K8ZcvddsFT5215e+ccso8=; b=vGljkQ/JaUbWGJjlx+k/ckdecHPNNStQ63X+lMGkKb4Fzv3kgwbKs3Sz9rsDhwPd/q 5nxxYLtda7+dtF2kFoQl9r7gHTfg3e5nks0oEuT79OxchVLdKIuiTrwaxxsBGwfbt7zP JROwmVLf0yUMwql4idw/pIBeryxk9k3HnVkNkxGrMv81QTSwqQVA0wrgS4oVZqtpCBLw 5r+JU9vLfx8+WlQ5cb0XLe4w6GSWG/RNgNIXt8tVNFW/wzA1TE9QMMA6TtnKfA/jk33c nq6UfvzaXfHYc/d4nZm+evNqU6zjy4bRQ+dkMS0TPHy+Lz33HuVJcx0/L3sowZ1DCIK4 GFgw== X-Gm-Message-State: AOJu0YztX35mdwVg4ztUnLBj9OrPseeMV+sDvBLKdHMgVrvXAkamRrKh 33JQt1bmyCvKDOomi7NhB+gLnW8+Vfh/V6Gs0cnZ46D4ElaysZ6JZRoA7FZ5h5xug647cX7TW+1 G6l3qf1BRI714nV4w95gprnKRPORCnt6PtKhIA0SMqSe0wRQpgFMmIMNgTbUJijo5V1C2GIrLvR us6G0/4YF8zV97iSdFUiVH5BIdR1KsyFFYgk5E X-Received: by 2002:a05:6512:2350:b0:52c:987f:b355 with SMTP id 
2adb3069b0e04-52eb99cc6abmr11898937e87.42.1720955465625; Sun, 14 Jul 2024 04:11:05 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEpf+h8gZreDBO6VscnfqTTsEQIe0mSAMI6xX0bC/DIgT15TtqGO2J0okjdeQ/kwActLdyuCw== X-Received: by 2002:a05:6512:2350:b0:52c:987f:b355 with SMTP id 2adb3069b0e04-52eb99cc6abmr11898919e87.42.1720955465155; Sun, 14 Jul 2024 04:11:05 -0700 (PDT) Received: from avogadro.local ([151.95.101.29]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-3680dafbe9esm3583681f8f.78.2024.07.14.04.11.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 14 Jul 2024 04:11:04 -0700 (PDT) From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: Richard Henderson Subject: [PULL 07/13] target/i386/tcg: Compute MMU index once Date: Sun, 14 Jul 2024 13:10:37 +0200 Message-ID: <20240714111043.14132-8-pbonzini@redhat.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com> References: <20240714111043.14132-1-pbonzini@redhat.com> MIME-Version: 1.0 Received-SPF: pass client-ip=170.10.129.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Add the MMU index to the StackAccess struct, so that it can be cached or (in the next patch) computed from information that is not in CPUX86State. 
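For orientation, the change condenses to the following sketch (struct layout and helper bodies are lifted from the diff below; QEMU-internal types such as CPUX86State and the cpu_*_mmuidx_ra accessors are assumed to be in scope):

    typedef struct StackAccess
    {
        CPUX86State *env;
        uintptr_t ra;
        target_ulong ss_base;
        target_ulong sp;
        target_ulong sp_mask;
        int mmu_index;   /* computed once per helper, e.g. cpu_mmu_index_kernel(env) */
    } StackAccess;

    static void pushl(StackAccess *sa, uint32_t val)
    {
        sa->sp -= 4;
        /* every stack access reuses the cached MMU index */
        cpu_stl_mmuidx_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask),
                          val, sa->mmu_index, sa->ra);
    }

    static uint32_t popl(StackAccess *sa)
    {
        uint32_t ret = cpu_ldl_mmuidx_ra(sa->env,
                                         sa->ss_base + (sa->sp & sa->sp_mask),
                                         sa->mmu_index, sa->ra);
        sa->sp += 4;
        return ret;
    }

Each helper that builds a StackAccess fills in sa.mmu_index exactly once, next to sa.env, sa.ra and the stack-pointer fields, instead of recomputing it on every push or pop.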
Co-developed-by: Richard Henderson Signed-off-by: Richard Henderson Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 35 ++++++++++++++++++++++------------- 1 file changed, 22 insertions(+), 13 deletions(-) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index b6902ca3fba..8a6d92b3583 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -56,36 +56,37 @@ typedef struct StackAccess target_ulong ss_base; target_ulong sp; target_ulong sp_mask; + int mmu_index; } StackAccess; static void pushw(StackAccess *sa, uint16_t val) { sa->sp -= 2; - cpu_stw_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), - val, sa->ra); + cpu_stw_mmuidx_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->mmu_index, sa->ra); } static void pushl(StackAccess *sa, uint32_t val) { sa->sp -= 4; - cpu_stl_kernel_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), - val, sa->ra); + cpu_stl_mmuidx_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask), + val, sa->mmu_index, sa->ra); } static uint16_t popw(StackAccess *sa) { - uint16_t ret = cpu_lduw_data_ra(sa->env, - sa->ss_base + (sa->sp & sa->sp_mask), - sa->ra); + uint16_t ret = cpu_lduw_mmuidx_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->mmu_index, sa->ra); sa->sp += 2; return ret; } static uint32_t popl(StackAccess *sa) { - uint32_t ret = cpu_ldl_data_ra(sa->env, - sa->ss_base + (sa->sp & sa->sp_mask), - sa->ra); + uint32_t ret = cpu_ldl_mmuidx_ra(sa->env, + sa->ss_base + (sa->sp & sa->sp_mask), + sa->mmu_index, sa->ra); sa->sp += 4; return ret; } @@ -677,6 +678,7 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, sa.env = env; sa.ra = 0; + sa.mmu_index = cpu_mmu_index_kernel(env); if (type == 5) { /* task gate */ @@ -858,12 +860,12 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, static void pushq(StackAccess *sa, uint64_t val) { sa->sp -= 8; - cpu_stq_kernel_ra(sa->env, sa->sp, val, sa->ra); + cpu_stq_mmuidx_ra(sa->env, sa->sp, val, sa->mmu_index, sa->ra); } static uint64_t popq(StackAccess *sa) { - uint64_t ret = cpu_ldq_data_ra(sa->env, sa->sp, sa->ra); + uint64_t ret = cpu_ldq_mmuidx_ra(sa->env, sa->sp, sa->mmu_index, sa->ra); sa->sp += 8; return ret; } @@ -982,6 +984,7 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, sa.env = env; sa.ra = 0; + sa.mmu_index = cpu_mmu_index_kernel(env); sa.sp_mask = -1; sa.ss_base = 0; if (dpl < cpl || ist != 0) { @@ -1116,6 +1119,7 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, sa.sp = env->regs[R_ESP]; sa.sp_mask = 0xffff; sa.ss_base = env->segs[R_SS].base; + sa.mmu_index = cpu_mmu_index_kernel(env); if (is_int) { old_eip = next_eip; @@ -1579,6 +1583,7 @@ void helper_lcall_real(CPUX86State *env, uint32_t new_cs, uint32_t new_eip, sa.sp = env->regs[R_ESP]; sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); sa.ss_base = env->segs[R_SS].base; + sa.mmu_index = cpu_mmu_index_kernel(env); if (shift) { pushl(&sa, env->segs[R_CS].selector); @@ -1618,6 +1623,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, sa.env = env; sa.ra = GETPC(); + sa.mmu_index = cpu_mmu_index_kernel(env); if (e2 & DESC_S_MASK) { if (!(e2 & DESC_CS_MASK)) { @@ -1905,6 +1911,7 @@ void helper_iret_real(CPUX86State *env, int shift) sa.env = env; sa.ra = GETPC(); + sa.mmu_index = x86_mmu_index_pl(env, 0); sa.sp_mask = 0xffff; /* XXXX: use SS segment size? 
*/ sa.sp = env->regs[R_ESP]; sa.ss_base = env->segs[R_SS].base; @@ -1976,8 +1983,11 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, target_ulong new_eip, new_esp; StackAccess sa; + cpl = env->hflags & HF_CPL_MASK; + sa.env = env; sa.ra = retaddr; + sa.mmu_index = x86_mmu_index_pl(env, cpl); #ifdef TARGET_X86_64 if (shift == 2) { @@ -2032,7 +2042,6 @@ static inline void helper_ret_protected(CPUX86State *env, int shift, !(e2 & DESC_CS_MASK)) { raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, retaddr); } - cpl = env->hflags & HF_CPL_MASK; rpl = new_cs & 3; if (rpl < cpl) { raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, retaddr); From patchwork Sun Jul 14 11:10:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732674 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9AB2EC3DA49 for ; Sun, 14 Jul 2024 11:13:16 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8R-0005Wh-2X; Sun, 14 Jul 2024 07:11:19 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8N-0005Mf-ME for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:15 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8K-00028J-PA for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:15 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1720955472; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=JRUePkyI4BlsyZF8K6wIcJjT0NtE/ENQZ85bsJ3I+qY=; b=QeNpA6tPdSYv6n9MG/wMmm7PLy/8+CD7Jk52CYbqNgYByrcSVEi6e6TlNDdu12Xb4UHA4m WnyZJsNSCK5mJGSkS7qlBOw6uHJc0owVULyNNmNzdXvo90eP3SM7g08FdQ2l1RhjqBCoLl pbNbUL9JZgCJ2NFGxZsvoiyQ1pXi7y8= Received: from mail-lj1-f200.google.com (mail-lj1-f200.google.com [209.85.208.200]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-219-vj7s4lZiN3yvuWqRjbk2eg-1; Sun, 14 Jul 2024 07:11:10 -0400 X-MC-Unique: vj7s4lZiN3yvuWqRjbk2eg-1 Received: by mail-lj1-f200.google.com with SMTP id 38308e7fff4ca-2ee9b383ffbso35145641fa.1 for ; Sun, 14 Jul 2024 04:11:10 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720955468; x=1721560268; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JRUePkyI4BlsyZF8K6wIcJjT0NtE/ENQZ85bsJ3I+qY=; b=Otj5rN4/tbH1Eo+hH9gb7126MfqgrqpU4goYuqv7gBUlMgVzIsNe16reC4gx5tzl8I PgQ7Zm9A002ULDOENi66bwZ7jR3DDNArRADpuIIR7N/to5oEcRFhxrNLcoYBeGf3wIcJ Iv01cz9NGnjh0ashwuEqwGDxW0FEKADlY/dCHagLb9vYrEWAccXoavzLb5/Cx8xzUBiw AM7AjDWLf8FnrpF5u8DJjeneCr0ZXnUrNueb1KKf6T4djy9kPkeIxwP0pBMTZ5FLWqpm 
56fwhlnLC0z5nH4Ur67jtecYGF7l0opfdP5C9zgMZwnuDzvcXHKCUtLTiXqgBKKB0iBZ S9Ew== X-Gm-Message-State: AOJu0YxkrzdfVlqPanpHp9v6mUUA12z5dW+vrnmfxKP4hOpzAp+7bxjc Z9ItiLh+bEMJu4Sh5+grstpIWSOtSymzo9L+v6YI/A5b7dsCSSE6KomPqBCZa9qzdHSTIG+I47i 8vd9h7cOeOxY2HTcn22bl9GKsyA0TS8pVaaJDdyXQr4Xer5Q04pL+/yknw+gcbq4GRafsIpjvKR absjoyPk6+iiy5Nlp8ZZxJ/UwmiPzYIp9Dw6s6 X-Received: by 2002:a2e:9cda:0:b0:2ee:bc9a:9d7d with SMTP id 38308e7fff4ca-2eebc9a9f96mr89874011fa.37.1720955468103; Sun, 14 Jul 2024 04:11:08 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEXESsiL/VEFgleDlYMWMRSQvwN02yq7M4Wjq6WuD357p31QnRtj22MfEVs2NEzbONELQI9Vw== X-Received: by 2002:a2e:9cda:0:b0:2ee:bc9a:9d7d with SMTP id 38308e7fff4ca-2eebc9a9f96mr89873851fa.37.1720955467677; Sun, 14 Jul 2024 04:11:07 -0700 (PDT) Received: from avogadro.local ([151.95.101.29]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-3680dabef75sm3612351f8f.36.2024.07.14.04.11.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 14 Jul 2024 04:11:07 -0700 (PDT) From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: "Robert R . Henry" , Richard Henderson Subject: [PULL 08/13] target/i386/tcg: Use DPL-level accesses for interrupts and call gates Date: Sun, 14 Jul 2024 13:10:38 +0200 Message-ID: <20240714111043.14132-9-pbonzini@redhat.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com> References: <20240714111043.14132-1-pbonzini@redhat.com> MIME-Version: 1.0 Received-SPF: pass client-ip=170.10.129.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This fixes a bug wherein i386/tcg assumed an interrupt return using the CALL or JMP instructions were always going from kernel or user mode to kernel mode, when using a call gate. This assumption is violated if the call gate has a DPL that is greater than 0. In addition, the stack accesses should count as explicit, not implicit ("kernel" in QEMU code), so that SMAP is not applied if DPL=3. Analyzed-by: Robert R. 
Henry Resolves: https://gitlab.com/qemu-project/qemu/-/issues/249 Reviewed-by: Richard Henderson Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index 8a6d92b3583..809ee3d9833 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -678,7 +678,7 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int, sa.env = env; sa.ra = 0; - sa.mmu_index = cpu_mmu_index_kernel(env); + sa.mmu_index = x86_mmu_index_pl(env, dpl); if (type == 5) { /* task gate */ @@ -984,7 +984,7 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int, sa.env = env; sa.ra = 0; - sa.mmu_index = cpu_mmu_index_kernel(env); + sa.mmu_index = x86_mmu_index_pl(env, dpl); sa.sp_mask = -1; sa.ss_base = 0; if (dpl < cpl || ist != 0) { @@ -1119,7 +1119,7 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int, sa.sp = env->regs[R_ESP]; sa.sp_mask = 0xffff; sa.ss_base = env->segs[R_SS].base; - sa.mmu_index = cpu_mmu_index_kernel(env); + sa.mmu_index = x86_mmu_index_pl(env, 0); if (is_int) { old_eip = next_eip; @@ -1583,7 +1583,7 @@ void helper_lcall_real(CPUX86State *env, uint32_t new_cs, uint32_t new_eip, sa.sp = env->regs[R_ESP]; sa.sp_mask = get_sp_mask(env->segs[R_SS].flags); sa.ss_base = env->segs[R_SS].base; - sa.mmu_index = cpu_mmu_index_kernel(env); + sa.mmu_index = x86_mmu_index_pl(env, 0); if (shift) { pushl(&sa, env->segs[R_CS].selector); @@ -1619,17 +1619,17 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); } cpl = env->hflags & HF_CPL_MASK; + dpl = (e2 >> DESC_DPL_SHIFT) & 3; LOG_PCALL("desc=%08x:%08x\n", e1, e2); sa.env = env; sa.ra = GETPC(); - sa.mmu_index = cpu_mmu_index_kernel(env); + sa.mmu_index = x86_mmu_index_pl(env, dpl); if (e2 & DESC_S_MASK) { if (!(e2 & DESC_CS_MASK)) { raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC()); } - dpl = (e2 >> DESC_DPL_SHIFT) & 3; if (e2 & DESC_C_MASK) { /* conforming code segment */ if (dpl > cpl) { @@ -1691,7 +1691,6 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip, } else { /* check gate type */ type = (e2 >> DESC_TYPE_SHIFT) & 0x1f; - dpl = (e2 >> DESC_DPL_SHIFT) & 3; rpl = new_cs & 3; #ifdef TARGET_X86_64 From patchwork Sun Jul 14 11:10:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732664 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 09F89C41513 for ; Sun, 14 Jul 2024 11:11:38 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8S-0005dk-4q; Sun, 14 Jul 2024 07:11:20 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8P-0005Rl-15 for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:17 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) 
(Exim 4.90_1) (envelope-from ) id 1sSx8N-00028l-Jw for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:16 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1720955475; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=h1MZ+bkr2S4w1UHUKLxqB+jeJD0WXRw8G5s2UmrXHqY=; b=bnFXzFFgYPNQAESV8YVyUKjHaGNUPhagIDTjFUikcV0wPp1KqhUh8727ubMgPgI2bYQVX+ 9hb2zmfRMOiwIec4hXsXwJ9yubqFI1RsVcGDFPyVdmLftCJfNc12qGWiaa6WZNeLkLCbCY B3FAQuHtUbUprBCakbtGpOusdQZELcU= Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com [209.85.221.71]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-206-2Am5PLzNOCmKVLzTFKRlFQ-1; Sun, 14 Jul 2024 07:11:12 -0400 X-MC-Unique: 2Am5PLzNOCmKVLzTFKRlFQ-1 Received: by mail-wr1-f71.google.com with SMTP id ffacd0b85a97d-3678e549a1eso1778246f8f.0 for ; Sun, 14 Jul 2024 04:11:12 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720955470; x=1721560270; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=h1MZ+bkr2S4w1UHUKLxqB+jeJD0WXRw8G5s2UmrXHqY=; b=S0NXfe2lafNDIrxd64X0NecYOaOxIzcDsjq2E0c/Iyn629FTNj1eolAHWrB4uIJOW0 vsNwA/atRY3QJ3I8zYDFc9PphmYLC3p9iJ4fh7KhKj9d8aNK1jr7LXw1HRRq4Yc/MfSd BBxrDo1BvjquGiZBaMb+dt0NqRnlb8+86J9rHbmW5pvT/57Opp/Q/srdaCVqXRRhUHda vbeHmNDhtEh2MZ0NfyDqribvObYzc10mCr7twEjMn3SHNOru86GcGl/WilgwSzWL6iah xhmZHYcxmAGHpE1QAsfbs0aLmw9PMg11hTUmneqqsIrcswFyAB8Zqs7p+/Vk2XmOrZNu XDCg== X-Gm-Message-State: AOJu0Yy4kspljz1cKTQTxsW/lumaC4Ee7G87k9lL9FAt2uczZ5nA0mzS 1L/8Yq6T5ayClPWuksc7kmj4Suu3C8JlOEKl/L2T7FMLH0YixlO44/hY/HD687ik9WJ5P6nG3pi 60ngwFo/8myze6Bu+BKhvjU8cBRmBwEUoajpevnfNb3uPxh0NjCOe0b9Lz5IOFVBgevdejWbqZ+ 8S4WuFTe5rHWXjvWqx+DkVGVysEsBznMmOMmIZ X-Received: by 2002:a5d:61cb:0:b0:368:4c5:1697 with SMTP id ffacd0b85a97d-36804fec68amr4152891f8f.36.1720955470192; Sun, 14 Jul 2024 04:11:10 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFWfJpnWBr9IiXDeAClNnjM7WB+R+NZ6Au0q0kiBiLn8Sa9ntWWTzUVyiCuJWwDccZCjKS7LA== X-Received: by 2002:a5d:61cb:0:b0:368:4c5:1697 with SMTP id ffacd0b85a97d-36804fec68amr4152884f8f.36.1720955469892; Sun, 14 Jul 2024 04:11:09 -0700 (PDT) Received: from avogadro.local ([151.95.101.29]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-3680dafb991sm3629295f8f.79.2024.07.14.04.11.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 14 Jul 2024 04:11:09 -0700 (PDT) From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: Richard Henderson Subject: [PULL 09/13] target/i386/tcg: check for correct busy state before switching to a new task Date: Sun, 14 Jul 2024 13:10:39 +0200 Message-ID: <20240714111043.14132-10-pbonzini@redhat.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com> References: <20240714111043.14132-1-pbonzini@redhat.com> MIME-Version: 1.0 Received-SPF: pass client-ip=170.10.133.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H4=-0.01, RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001, 
SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This step is listed in the Intel manual: "Checks that the new task is available (call, jump, exception, or interrupt) or busy (IRET return)". The AMD manual lists the same operation under the "Preventing recursion" paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor checks the busy bit in the IRET case. Reviewed-by: Richard Henderson Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index 809ee3d9833..0242f9d8b58 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -369,6 +369,11 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, old_tss_limit_max = 43; } + /* new TSS must be busy iff the source is an IRET instruction */ + if (!!(e2 & DESC_TSS_BUSY_MASK) != (source == SWITCH_TSS_IRET)) { + raise_exception_err_ra(env, EXCP0A_TSS, tss_selector & 0xfffc, retaddr); + } + /* read all the registers from the new TSS */ if (type & 8) { /* 32 bit */ From patchwork Sun Jul 14 11:10:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732673 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 32D05C3DA49 for ; Sun, 14 Jul 2024 11:13:06 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8T-0005jo-I9; Sun, 14 Jul 2024 07:11:21 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8Q-0005Z7-Tr for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:18 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8O-00028w-UH for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:18 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1720955476; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=OUMYHZ6i4EA7K7uyAc3TiRc4wdBbB5/oRUtyZ3vopSw=; b=Na84S/NfG1oqdOMwiqZss+OU0dtmX1KnwQSbLost+fwA8oNz1N/UNbVig0BFadmSydBD4a clgnK/mCgF2EpyIdPXVO+dBgsrDN653TS7u9g346Ft9UYfASssu3ZYdvm6IeUJG7MPLDlG sc7tyM5W1Pu3MJ12IAH/Wjc+latva88= Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com [209.85.221.69]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-1-Tg4Jw9N5NEWgGkEW6-6gsQ-1; Sun, 14 Jul 2024 07:11:14 -0400 X-MC-Unique: Tg4Jw9N5NEWgGkEW6-6gsQ-1 Received: by mail-wr1-f69.google.com with SMTP id 
ffacd0b85a97d-367a14a6664so2365241f8f.0 for ; Sun, 14 Jul 2024 04:11:14 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720955473; x=1721560273; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=OUMYHZ6i4EA7K7uyAc3TiRc4wdBbB5/oRUtyZ3vopSw=; b=Pe+oGh3N15Ckd0jQuZrWy+FZ5bbdnbKQFd/QnomA4Nyi565uktR62Bp0hstM2zVFZa 9zsps3/TtaEdv5gS6VwOZb1CSTNNQ1pcKQF2G+FqTehvj5arB+n+HShcFPfA4ds5Y4yT rQFsIRA57XIMFysJ/Z6zNMuMDPjrLvEF6z+phN6Vn56VprIuPX+ljAF1BUKfDcmzIlnL FbVyCsIEniq7ZPqGmnVDZi8ASu+Zo4GSpFbINORk9bfsDHCbl4coufdA2k6tMyS+d2hq kWfc3yGtXVBf/XXsGfSj0IaFyMsvBkbTGxZ6lWDTAMrzdretwnTeO8+ET7Rsm+z0nOgs R5Gw== X-Gm-Message-State: AOJu0Yx51fLxnoVHPVhhqekO9wFWYuYDbNfco3epZK7cxKxwa8UKR70N A3qRqJD2bmgDLA0FgRCpB7G3hhxHQehU1jHu7hHZnkBHgrjXE3jhoaECJl/f+5j/U+uOFsEUG1E LpKNVruKNS9CPsnQFbmT2b307CbhVbROAyHVNNBDqagVR5zxm1vN+N/4d1l3hr/QJbmxuiMKvub q3A+vNpwH1TwaL5ybEKTFBj1VjO5Co+72vonCw X-Received: by 2002:a05:6000:184b:b0:367:94f7:88f3 with SMTP id ffacd0b85a97d-367ceadd2b3mr13504172f8f.57.1720955472754; Sun, 14 Jul 2024 04:11:12 -0700 (PDT) X-Google-Smtp-Source: AGHT+IGfbuRJpIvgZC60bLNe0h7iePXPBe+I8EhIFNJ4Ak5K9CZIPuGpUzv7QKDOHXP+3T332rjRtw== X-Received: by 2002:a05:6000:184b:b0:367:94f7:88f3 with SMTP id ffacd0b85a97d-367ceadd2b3mr13504160f8f.57.1720955472362; Sun, 14 Jul 2024 04:11:12 -0700 (PDT) Received: from avogadro.local ([151.95.101.29]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-3680dab3be3sm3585565f8f.24.2024.07.14.04.11.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 14 Jul 2024 04:11:11 -0700 (PDT) From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: Richard Henderson Subject: [PULL 10/13] target/i386/tcg: use X86Access for TSS access Date: Sun, 14 Jul 2024 13:10:40 +0200 Message-ID: <20240714111043.14132-11-pbonzini@redhat.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com> References: <20240714111043.14132-1-pbonzini@redhat.com> MIME-Version: 1.0 Received-SPF: pass client-ip=170.10.129.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This takes care of probing the vaddr range in advance, and is also faster because it avoids repeated TLB lookups. It also matches the Intel manual better, as it says "Checks that the current (old) TSS, new TSS, and all segment descriptors used in the task switch are paged into system memory"; note however that it's not clear how the processor checks for segment descriptors, and this check is not included in the AMD manual. 
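Condensed from the hunks below, the core of the conversion inside switch_tss_ra() looks roughly like this (a sketch only; variable declarations, the SWITCH_TSS_CALL probe and the 16-bit TSS path are elided):

    X86Access old, new;
    int mmu_index = cpu_mmu_index_kernel(env);

    /* Touch both TSS images up front: any fault is raised here, before
       the task switch has modified any architectural state. */
    access_prepare_mmu(&old, env, env->tr.base, old_tss_limit_max,
                       MMU_DATA_STORE, mmu_index, retaddr);
    access_prepare_mmu(&new, env, tss_base, tss_limit,
                       MMU_DATA_LOAD, mmu_index, retaddr);

    /* Subsequent loads/stores go through the prepared windows, so the
       repeated per-field TLB lookups of cpu_ld*_kernel_ra() go away. */
    new_eip    = access_ldl(&new, tss_base + 0x20);
    new_eflags = access_ldl(&new, tss_base + 0x24);
    access_stl(&old, env->tr.base + 0x20, next_eip);

This also replaces the old dummy read/write-back of the first and last byte of the current TSS, which the removed comment admitted could still fault in some cases.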
Reviewed-by: Richard Henderson Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 110 ++++++++++++++++++----------------- 1 file changed, 58 insertions(+), 52 deletions(-) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index 0242f9d8b58..fea08a2ba0f 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -27,6 +27,7 @@ #include "exec/log.h" #include "helper-tcg.h" #include "seg_helper.h" +#include "access.h" #ifdef TARGET_X86_64 #define SET_ESP(val, sp_mask) \ @@ -313,14 +314,15 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, uint32_t e1, uint32_t e2, int source, uint32_t next_eip, uintptr_t retaddr) { - int tss_limit, tss_limit_max, type, old_tss_limit_max, old_type, v1, v2, i; + int tss_limit, tss_limit_max, type, old_tss_limit_max, old_type, i; target_ulong tss_base; uint32_t new_regs[8], new_segs[6]; uint32_t new_eflags, new_eip, new_cr3, new_ldt, new_trap; uint32_t old_eflags, eflags_mask; SegmentCache *dt; - int index; + int mmu_index, index; target_ulong ptr; + X86Access old, new; type = (e2 >> DESC_TYPE_SHIFT) & 0xf; LOG_PCALL("switch_tss: sel=0x%04x type=%d src=%d\n", tss_selector, type, @@ -374,35 +376,45 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, raise_exception_err_ra(env, EXCP0A_TSS, tss_selector & 0xfffc, retaddr); } + /* X86Access avoids memory exceptions during the task switch */ + mmu_index = cpu_mmu_index_kernel(env); + access_prepare_mmu(&old, env, env->tr.base, old_tss_limit_max, + MMU_DATA_STORE, mmu_index, retaddr); + + if (source == SWITCH_TSS_CALL) { + /* Probe for future write of parent task */ + probe_access(env, tss_base, 2, MMU_DATA_STORE, + mmu_index, retaddr); + } + access_prepare_mmu(&new, env, tss_base, tss_limit, + MMU_DATA_LOAD, mmu_index, retaddr); + /* read all the registers from the new TSS */ if (type & 8) { /* 32 bit */ - new_cr3 = cpu_ldl_kernel_ra(env, tss_base + 0x1c, retaddr); - new_eip = cpu_ldl_kernel_ra(env, tss_base + 0x20, retaddr); - new_eflags = cpu_ldl_kernel_ra(env, tss_base + 0x24, retaddr); + new_cr3 = access_ldl(&new, tss_base + 0x1c); + new_eip = access_ldl(&new, tss_base + 0x20); + new_eflags = access_ldl(&new, tss_base + 0x24); for (i = 0; i < 8; i++) { - new_regs[i] = cpu_ldl_kernel_ra(env, tss_base + (0x28 + i * 4), - retaddr); + new_regs[i] = access_ldl(&new, tss_base + (0x28 + i * 4)); } for (i = 0; i < 6; i++) { - new_segs[i] = cpu_lduw_kernel_ra(env, tss_base + (0x48 + i * 4), - retaddr); + new_segs[i] = access_ldw(&new, tss_base + (0x48 + i * 4)); } - new_ldt = cpu_lduw_kernel_ra(env, tss_base + 0x60, retaddr); - new_trap = cpu_ldl_kernel_ra(env, tss_base + 0x64, retaddr); + new_ldt = access_ldw(&new, tss_base + 0x60); + new_trap = access_ldl(&new, tss_base + 0x64); } else { /* 16 bit */ new_cr3 = 0; - new_eip = cpu_lduw_kernel_ra(env, tss_base + 0x0e, retaddr); - new_eflags = cpu_lduw_kernel_ra(env, tss_base + 0x10, retaddr); + new_eip = access_ldw(&new, tss_base + 0x0e); + new_eflags = access_ldw(&new, tss_base + 0x10); for (i = 0; i < 8; i++) { - new_regs[i] = cpu_lduw_kernel_ra(env, tss_base + (0x12 + i * 2), retaddr); + new_regs[i] = access_ldw(&new, tss_base + (0x12 + i * 2)); } for (i = 0; i < 4; i++) { - new_segs[i] = cpu_lduw_kernel_ra(env, tss_base + (0x22 + i * 2), - retaddr); + new_segs[i] = access_ldw(&new, tss_base + (0x22 + i * 2)); } - new_ldt = cpu_lduw_kernel_ra(env, tss_base + 0x2a, retaddr); + new_ldt = access_ldw(&new, tss_base + 0x2a); new_segs[R_FS] = 0; new_segs[R_GS] = 0; new_trap = 0; @@ -412,16 +424,6 
@@ static int switch_tss_ra(CPUX86State *env, int tss_selector, chapters 12.2.5 and 13.2.4 on how to implement TSS Trap bit */ (void)new_trap; - /* NOTE: we must avoid memory exceptions during the task switch, - so we make dummy accesses before */ - /* XXX: it can still fail in some cases, so a bigger hack is - necessary to valid the TLB after having done the accesses */ - - v1 = cpu_ldub_kernel_ra(env, env->tr.base, retaddr); - v2 = cpu_ldub_kernel_ra(env, env->tr.base + old_tss_limit_max, retaddr); - cpu_stb_kernel_ra(env, env->tr.base, v1, retaddr); - cpu_stb_kernel_ra(env, env->tr.base + old_tss_limit_max, v2, retaddr); - /* clear busy bit (it is restartable) */ if (source == SWITCH_TSS_JMP || source == SWITCH_TSS_IRET) { tss_set_busy(env, env->tr.selector, 0, retaddr); @@ -434,35 +436,35 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, /* save the current state in the old TSS */ if (old_type & 8) { /* 32 bit */ - cpu_stl_kernel_ra(env, env->tr.base + 0x20, next_eip, retaddr); - cpu_stl_kernel_ra(env, env->tr.base + 0x24, old_eflags, retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 0 * 4), env->regs[R_EAX], retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 1 * 4), env->regs[R_ECX], retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 2 * 4), env->regs[R_EDX], retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 3 * 4), env->regs[R_EBX], retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 4 * 4), env->regs[R_ESP], retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 5 * 4), env->regs[R_EBP], retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 6 * 4), env->regs[R_ESI], retaddr); - cpu_stl_kernel_ra(env, env->tr.base + (0x28 + 7 * 4), env->regs[R_EDI], retaddr); + access_stl(&old, env->tr.base + 0x20, next_eip); + access_stl(&old, env->tr.base + 0x24, old_eflags); + access_stl(&old, env->tr.base + (0x28 + 0 * 4), env->regs[R_EAX]); + access_stl(&old, env->tr.base + (0x28 + 1 * 4), env->regs[R_ECX]); + access_stl(&old, env->tr.base + (0x28 + 2 * 4), env->regs[R_EDX]); + access_stl(&old, env->tr.base + (0x28 + 3 * 4), env->regs[R_EBX]); + access_stl(&old, env->tr.base + (0x28 + 4 * 4), env->regs[R_ESP]); + access_stl(&old, env->tr.base + (0x28 + 5 * 4), env->regs[R_EBP]); + access_stl(&old, env->tr.base + (0x28 + 6 * 4), env->regs[R_ESI]); + access_stl(&old, env->tr.base + (0x28 + 7 * 4), env->regs[R_EDI]); for (i = 0; i < 6; i++) { - cpu_stw_kernel_ra(env, env->tr.base + (0x48 + i * 4), - env->segs[i].selector, retaddr); + access_stw(&old, env->tr.base + (0x48 + i * 4), + env->segs[i].selector); } } else { /* 16 bit */ - cpu_stw_kernel_ra(env, env->tr.base + 0x0e, next_eip, retaddr); - cpu_stw_kernel_ra(env, env->tr.base + 0x10, old_eflags, retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 0 * 2), env->regs[R_EAX], retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 1 * 2), env->regs[R_ECX], retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 2 * 2), env->regs[R_EDX], retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 3 * 2), env->regs[R_EBX], retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 4 * 2), env->regs[R_ESP], retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 5 * 2), env->regs[R_EBP], retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 6 * 2), env->regs[R_ESI], retaddr); - cpu_stw_kernel_ra(env, env->tr.base + (0x12 + 7 * 2), env->regs[R_EDI], retaddr); + access_stw(&old, env->tr.base + 0x0e, next_eip); + access_stw(&old, env->tr.base + 0x10, old_eflags); + access_stw(&old, 
env->tr.base + (0x12 + 0 * 2), env->regs[R_EAX]); + access_stw(&old, env->tr.base + (0x12 + 1 * 2), env->regs[R_ECX]); + access_stw(&old, env->tr.base + (0x12 + 2 * 2), env->regs[R_EDX]); + access_stw(&old, env->tr.base + (0x12 + 3 * 2), env->regs[R_EBX]); + access_stw(&old, env->tr.base + (0x12 + 4 * 2), env->regs[R_ESP]); + access_stw(&old, env->tr.base + (0x12 + 5 * 2), env->regs[R_EBP]); + access_stw(&old, env->tr.base + (0x12 + 6 * 2), env->regs[R_ESI]); + access_stw(&old, env->tr.base + (0x12 + 7 * 2), env->regs[R_EDI]); for (i = 0; i < 4; i++) { - cpu_stw_kernel_ra(env, env->tr.base + (0x22 + i * 2), - env->segs[i].selector, retaddr); + access_stw(&old, env->tr.base + (0x22 + i * 2), + env->segs[i].selector); } } @@ -470,7 +472,11 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, context */ if (source == SWITCH_TSS_CALL) { - cpu_stw_kernel_ra(env, tss_base, env->tr.selector, retaddr); + /* + * Thanks to the probe_access above, we know the first two + * bytes addressed by &new are writable too. + */ + access_stw(&new, tss_base, env->tr.selector); new_eflags |= NT_MASK; } From patchwork Sun Jul 14 11:10:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732677 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 53D7BC3DA42 for ; Sun, 14 Jul 2024 11:13:24 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8Z-00065Y-6L; Sun, 14 Jul 2024 07:11:27 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8X-0005xc-2R for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:25 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8U-00029L-9l for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:23 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1720955481; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wED3eSbZqAQrLro/ysxQ2PfWVzr9a+Prs+exQ4GwPs0=; b=WrMBdZtSVsYVsVjDLX18MnRQPD0tAEfTgIko7UB0WArfELJajHJfXcEkyNHZIqaDH2syRT e+Ybqp1uRKtbY0Q7EIxohacLTZwBPO9aXmn3EP/qRrlNb+PCMvi6V+dntQPAMpMeL/UeSZ uEowJlHNC1QbjewMkWlMgd5b6YtWXJc= Received: from mail-lj1-f199.google.com (mail-lj1-f199.google.com [209.85.208.199]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-25-c1_8cBL-OzOtsX_K3V6geg-1; Sun, 14 Jul 2024 07:11:19 -0400 X-MC-Unique: c1_8cBL-OzOtsX_K3V6geg-1 Received: by mail-lj1-f199.google.com with SMTP id 38308e7fff4ca-2ee8e904f01so39113571fa.1 for ; Sun, 14 Jul 2024 04:11:18 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720955476; x=1721560276; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=wED3eSbZqAQrLro/ysxQ2PfWVzr9a+Prs+exQ4GwPs0=; b=GTyoXLEPn7W2pgQUKTwWvCS46XFuBDwsI5kmCFmNIQj1QmG2D56kE/a/KP/xJ179ZJ QSyTKz1XZ4DRbEV+1XOUzXKk3V8Gylzfa9IMf3Q34JGENVn8dDyhuYeXGy5XqIp6FLUB 35bY4eCPk4r8jLi9ZXhZXwC+t7TUyzsWMY1OewaJ14MTeFs51w5yXswsP2uZsEmnVwIj /jCs5K7wSJowbo2PqKy3HYBxVlTPk4O9/ZjGiHGidC/oy2m2IOxOaieo/7VsFcbsbgL4 57GYbu66BNiuw5gLmZX7g2yb12QajtRMPo81KRRQzQrhCd3NONlW7PUCPtlM7B9Jfk++ g6jg== X-Gm-Message-State: AOJu0Yzc3X5r8DpL9/4PGKqmMdSiqIY0Z3MEYFeSgJp1/SqcNrMxVZyJ N0+HRbakwUprUMLNJ8z08YK0n+HLPw128WYAHO+yFZhkOLJaZLFydg0gLSMOCCMW6ynZZI1j4yi Nb+zJLv+Lpxyy3AzxfKN1HxhsonRRddVYE2Fgwvzeoiv0mDTlDCx8AdePmCYWqexr3LfeOLoKFs Q4HzG3L3eeN+V8d87/ScF89Ko3f6hfhr8iltp+ X-Received: by 2002:a2e:860b:0:b0:2ee:d5e1:25e3 with SMTP id 38308e7fff4ca-2eed5e12629mr43681281fa.2.1720955476608; Sun, 14 Jul 2024 04:11:16 -0700 (PDT) X-Google-Smtp-Source: AGHT+IG+avKR/e80q37Kbr2Qi2vteaiNQouR+PnjJURdfwEF0fQDvn8xGsUghOGtX7ZuOXGYqi49OA== X-Received: by 2002:a2e:860b:0:b0:2ee:d5e1:25e3 with SMTP id 38308e7fff4ca-2eed5e12629mr43680941fa.2.1720955475218; Sun, 14 Jul 2024 04:11:15 -0700 (PDT) Received: from avogadro.local ([151.95.101.29]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-427a63449fcsm47002785e9.29.2024.07.14.04.11.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 14 Jul 2024 04:11:14 -0700 (PDT) From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: Richard Henderson Subject: [PULL 11/13] target/i386/tcg: save current task state before loading new one Date: Sun, 14 Jul 2024 13:10:41 +0200 Message-ID: <20240714111043.14132-12-pbonzini@redhat.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com> References: <20240714111043.14132-1-pbonzini@redhat.com> MIME-Version: 1.0 Received-SPF: pass client-ip=170.10.129.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=-0.01, RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This is how the steps are ordered in the manual. EFLAGS.NT is overwritten after the fact in the saved image. 
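Put differently, after this reordering the complete old-task image is stored first, and the IRET case then clears NT by rewriting the EFLAGS slot that was already saved. A sketch of the resulting tail, taken from the hunks below:

    if (source == SWITCH_TSS_IRET) {
        old_eflags &= ~NT_MASK;
        if (old_type & 8) {
            /* 32-bit TSS: saved EFLAGS image lives at offset 0x24 */
            access_stl(&old, env->tr.base + 0x24, old_eflags);
        } else {
            /* 16-bit TSS: saved FLAGS image lives at offset 0x10 */
            access_stw(&old, env->tr.base + 0x10, old_eflags);
        }
    }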
Reviewed-by: Richard Henderson Signed-off-by: Paolo Bonzini --- target/i386/tcg/seg_helper.c | 85 +++++++++++++++++++----------------- 1 file changed, 45 insertions(+), 40 deletions(-) diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c index fea08a2ba0f..c0641a79c70 100644 --- a/target/i386/tcg/seg_helper.c +++ b/target/i386/tcg/seg_helper.c @@ -389,6 +389,42 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, access_prepare_mmu(&new, env, tss_base, tss_limit, MMU_DATA_LOAD, mmu_index, retaddr); + /* save the current state in the old TSS */ + old_eflags = cpu_compute_eflags(env); + if (old_type & 8) { + /* 32 bit */ + access_stl(&old, env->tr.base + 0x20, next_eip); + access_stl(&old, env->tr.base + 0x24, old_eflags); + access_stl(&old, env->tr.base + (0x28 + 0 * 4), env->regs[R_EAX]); + access_stl(&old, env->tr.base + (0x28 + 1 * 4), env->regs[R_ECX]); + access_stl(&old, env->tr.base + (0x28 + 2 * 4), env->regs[R_EDX]); + access_stl(&old, env->tr.base + (0x28 + 3 * 4), env->regs[R_EBX]); + access_stl(&old, env->tr.base + (0x28 + 4 * 4), env->regs[R_ESP]); + access_stl(&old, env->tr.base + (0x28 + 5 * 4), env->regs[R_EBP]); + access_stl(&old, env->tr.base + (0x28 + 6 * 4), env->regs[R_ESI]); + access_stl(&old, env->tr.base + (0x28 + 7 * 4), env->regs[R_EDI]); + for (i = 0; i < 6; i++) { + access_stw(&old, env->tr.base + (0x48 + i * 4), + env->segs[i].selector); + } + } else { + /* 16 bit */ + access_stw(&old, env->tr.base + 0x0e, next_eip); + access_stw(&old, env->tr.base + 0x10, old_eflags); + access_stw(&old, env->tr.base + (0x12 + 0 * 2), env->regs[R_EAX]); + access_stw(&old, env->tr.base + (0x12 + 1 * 2), env->regs[R_ECX]); + access_stw(&old, env->tr.base + (0x12 + 2 * 2), env->regs[R_EDX]); + access_stw(&old, env->tr.base + (0x12 + 3 * 2), env->regs[R_EBX]); + access_stw(&old, env->tr.base + (0x12 + 4 * 2), env->regs[R_ESP]); + access_stw(&old, env->tr.base + (0x12 + 5 * 2), env->regs[R_EBP]); + access_stw(&old, env->tr.base + (0x12 + 6 * 2), env->regs[R_ESI]); + access_stw(&old, env->tr.base + (0x12 + 7 * 2), env->regs[R_EDI]); + for (i = 0; i < 4; i++) { + access_stw(&old, env->tr.base + (0x22 + i * 2), + env->segs[i].selector); + } + } + /* read all the registers from the new TSS */ if (type & 8) { /* 32 bit */ @@ -428,49 +464,16 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, if (source == SWITCH_TSS_JMP || source == SWITCH_TSS_IRET) { tss_set_busy(env, env->tr.selector, 0, retaddr); } - old_eflags = cpu_compute_eflags(env); + if (source == SWITCH_TSS_IRET) { old_eflags &= ~NT_MASK; + if (old_type & 8) { + access_stl(&old, env->tr.base + 0x24, old_eflags); + } else { + access_stw(&old, env->tr.base + 0x10, old_eflags); + } } - /* save the current state in the old TSS */ - if (old_type & 8) { - /* 32 bit */ - access_stl(&old, env->tr.base + 0x20, next_eip); - access_stl(&old, env->tr.base + 0x24, old_eflags); - access_stl(&old, env->tr.base + (0x28 + 0 * 4), env->regs[R_EAX]); - access_stl(&old, env->tr.base + (0x28 + 1 * 4), env->regs[R_ECX]); - access_stl(&old, env->tr.base + (0x28 + 2 * 4), env->regs[R_EDX]); - access_stl(&old, env->tr.base + (0x28 + 3 * 4), env->regs[R_EBX]); - access_stl(&old, env->tr.base + (0x28 + 4 * 4), env->regs[R_ESP]); - access_stl(&old, env->tr.base + (0x28 + 5 * 4), env->regs[R_EBP]); - access_stl(&old, env->tr.base + (0x28 + 6 * 4), env->regs[R_ESI]); - access_stl(&old, env->tr.base + (0x28 + 7 * 4), env->regs[R_EDI]); - for (i = 0; i < 6; i++) { - access_stw(&old, env->tr.base + (0x48 + i * 4), - 
env->segs[i].selector); - } - } else { - /* 16 bit */ - access_stw(&old, env->tr.base + 0x0e, next_eip); - access_stw(&old, env->tr.base + 0x10, old_eflags); - access_stw(&old, env->tr.base + (0x12 + 0 * 2), env->regs[R_EAX]); - access_stw(&old, env->tr.base + (0x12 + 1 * 2), env->regs[R_ECX]); - access_stw(&old, env->tr.base + (0x12 + 2 * 2), env->regs[R_EDX]); - access_stw(&old, env->tr.base + (0x12 + 3 * 2), env->regs[R_EBX]); - access_stw(&old, env->tr.base + (0x12 + 4 * 2), env->regs[R_ESP]); - access_stw(&old, env->tr.base + (0x12 + 5 * 2), env->regs[R_EBP]); - access_stw(&old, env->tr.base + (0x12 + 6 * 2), env->regs[R_ESI]); - access_stw(&old, env->tr.base + (0x12 + 7 * 2), env->regs[R_EDI]); - for (i = 0; i < 4; i++) { - access_stw(&old, env->tr.base + (0x22 + i * 2), - env->segs[i].selector); - } - } - - /* now if an exception occurs, it will occurs in the next task - context */ - if (source == SWITCH_TSS_CALL) { /* * Thanks to the probe_access above, we know the first two @@ -486,7 +489,9 @@ static int switch_tss_ra(CPUX86State *env, int tss_selector, } /* set the new CPU state */ - /* from this point, any exception which occurs can give problems */ + + /* now if an exception occurs, it will occur in the next task context */ + env->cr[0] |= CR0_TS_MASK; env->hflags |= HF_TS_MASK; env->tr.selector = tss_selector; From patchwork Sun Jul 14 11:10:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732672 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6FC5CC3DA42 for ; Sun, 14 Jul 2024 11:13:03 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8d-0006KD-4U; Sun, 14 Jul 2024 07:11:31 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8b-0006EN-9F for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:29 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sSx8Z-0002A2-7j for qemu-devel@nongnu.org; Sun, 14 Jul 2024 07:11:29 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1720955486; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=xvBboM0gBLgPXgc0OfI2YETwpo6vg79+rDTOsfSo04k=; b=W0KfOT9eDPOPXjFuAB8eL1nhABd9CPArMHTYA15c62KAYGI7uUh3X2e4AHcVjOxsb3iP1G gPA8VkHAaZmqm2jlODDBmc7rsGIOb146JJr8XVBjQ+qmixMSoGBdkStTMv75k0XqBQqvMS cR9ucV+rWua7Qp5k9HiUo7toGo04Q0M= Received: from mail-lf1-f70.google.com (mail-lf1-f70.google.com [209.85.167.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-56-HXoZi1q-NxSTNbAlLmTySA-1; Sun, 14 Jul 2024 07:11:22 -0400 X-MC-Unique: HXoZi1q-NxSTNbAlLmTySA-1 Received: by mail-lf1-f70.google.com with SMTP id 2adb3069b0e04-52e9b943e6dso3254431e87.2 for ; Sun, 14 Jul 2024 
04:11:22 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720955480; x=1721560280; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xvBboM0gBLgPXgc0OfI2YETwpo6vg79+rDTOsfSo04k=; b=YpBVcL3tBhAwHK2szy4EtXpC8tyJamlQsaRcTh7jtI0X6xWgADiqI17Z06f+XVs8iG 0c6xBDA+QGcew4KRxuLpLG3Q0Jec5hjqioenqbZGV1saY6wSddIcaGw3x4sp88JeSlod dkdnw1I+fE+oEu6etdwVH9+Gh1Y8u34ok7LpDqg0nGzYnFgHI65MwVlW29Y8IegD5aiJ q2NcEBwnyUM7nEo3o9p8dlPFiPckth97i2ngK8vzuJN0OCz1eW0X3E39xc9rv/d9ZDLT DP5cHzuNUPPWCsdbKsxNC1s7BUP5U01Zy0xsF4HErRsINABM+6zGpXwfrf2oocrV7JmV rdfg== X-Gm-Message-State: AOJu0YzaYRj++jc0HAwsZS2PdvgQsFth9yxI7x8oqeIZNKf1TGKOmyfA WJX5sTozpXar0NpBp7/5GZ7u/CFw//PmUacwjqDSDv1HJIEpKjEvsGvbp4x81uUw/ZrWF1M6myz bPfv0u0ckJ0jRmUcCyjJCwSBsvjE0J4XhjdoGjlffFnb7PJpyvVp5LPHfjoA/7x3EtFQcGgpyRH XMCX4WqNKLbj/rhdkmHk0cKfUgzNeBCQVDqpM7 X-Received: by 2002:a05:6512:ad2:b0:52e:bdfc:1d05 with SMTP id 2adb3069b0e04-52ebdfc1ffcmr10321440e87.44.1720955479722; Sun, 14 Jul 2024 04:11:19 -0700 (PDT) X-Google-Smtp-Source: AGHT+IG/0YOQVR0yqj4226D+TkFKAQsMIz1Xsv7xLq/slMcsDTBh/wvGCUrcxMJjoWM5RzGPbNJLsQ== X-Received: by 2002:a05:6512:ad2:b0:52e:bdfc:1d05 with SMTP id 2adb3069b0e04-52ebdfc1ffcmr10321421e87.44.1720955479295; Sun, 14 Jul 2024 04:11:19 -0700 (PDT) Received: from avogadro.local ([151.95.101.29]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-427a5edb41asm47985225e9.36.2024.07.14.04.11.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 14 Jul 2024 04:11:18 -0700 (PDT) From: Paolo Bonzini To: qemu-devel@nongnu.org Cc: Michael Roth , =?utf-8?q?Daniel_P_=2E_Berrang?= =?utf-8?q?=C3=A9?= , kvm@vger.kernel.org Subject: [PULL 12/13] i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT Date: Sun, 14 Jul 2024 13:10:42 +0200 Message-ID: <20240714111043.14132-13-pbonzini@redhat.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com> References: <20240714111043.14132-1-pbonzini@redhat.com> MIME-Version: 1.0 Received-SPF: pass client-ip=170.10.133.124; envelope-from=pbonzini@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H4=-0.01, RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org From: Michael Roth Currently if the 'legacy-vm-type' property of the sev-guest object is 'on', QEMU will attempt to use the newer KVM_SEV_INIT2 kernel interface in conjunction with the newer KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM KVM VM types. This can lead to measurement changes if, for instance, an SEV guest was created on a host that originally had an older kernel that didn't support KVM_SEV_INIT2, but is booted on the same host later on after the host kernel was upgraded. Instead, if legacy-vm-type is 'off', QEMU should fail if the KVM_SEV_INIT2 interface is not provided by the current host kernel. 
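Concretely, the VM-type selection after this patch reduces to roughly the following (condensed from the sev_kvm_type() hunk further down; the error_report() wording and the out label are omitted):

    if (sev_guest->legacy_vm_type == ON_OFF_AUTO_ON ||
        (sev_guest->legacy_vm_type == ON_OFF_AUTO_AUTO &&
         !sev_init2_required(sev_guest))) {
        /* legacy KVM_SEV_INIT path via the default VM type */
        sev_common->kvm_type = KVM_X86_DEFAULT_VM;
    } else {
        /* KVM_SEV_INIT2 path: requires the newer VM types, no silent fallback */
        kvm_type = (sev_guest->policy & SEV_POLICY_ES) ?
                   KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM;
        if (!kvm_is_vm_type_supported(kvm_type)) {
            /* report an error instead of falling back to KVM_X86_DEFAULT_VM */
        }
    }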
Modify the fallback handling accordingly.

In the future, VMSA features and other flags might be added to QEMU which
will require legacy-vm-type to be 'off' because they will rely on the newer
KVM_SEV_INIT2 interface. It may be difficult to convey to users what values
of legacy-vm-type are compatible with which features/options, so as part of
this rework, switch legacy-vm-type to a tri-state OnOffAuto option. 'auto'
in this case will automatically switch to using the newer KVM_SEV_INIT2, but
only if it is required to make use of new VMSA features or other options only
available via KVM_SEV_INIT2. Defining 'auto' in this way avoids inadvertently
breaking compatibility with older kernels, since KVM_SEV_INIT2 would only be
used in cases where users opt into newer features that are only available via
KVM_SEV_INIT2 and newer kernels, and it provides better default behavior than
the legacy-vm-type=off behavior that was previously in place, so make it the
default for 9.1+ machine types.

Cc: Daniel P. Berrangé
Cc: Paolo Bonzini
Cc: kvm@vger.kernel.org
Signed-off-by: Michael Roth
Reviewed-by: Daniel P. Berrangé
Link: https://lore.kernel.org/r/20240710041005.83720-1-michael.roth@amd.com
Signed-off-by: Paolo Bonzini
---
 qapi/qom.json     | 18 ++++++----
 hw/i386/pc.c      |  2 +-
 target/i386/sev.c | 87 +++++++++++++++++++++++++++++++++++++++--------
 3 files changed, 84 insertions(+), 23 deletions(-)

diff --git a/qapi/qom.json b/qapi/qom.json
index 8e75a419c30..7eccd2e14e2 100644
--- a/qapi/qom.json
+++ b/qapi/qom.json
@@ -924,12 +924,16 @@
 # @handle: SEV firmware handle (default: 0)
 #
 # @legacy-vm-type: Use legacy KVM_SEV_INIT KVM interface for creating the VM.
-#     The newer KVM_SEV_INIT2 interface syncs additional vCPU
-#     state when initializing the VMSA structures, which will
-#     result in a different guest measurement. Set this to
-#     maintain compatibility with older QEMU or kernel versions
-#     that rely on legacy KVM_SEV_INIT behavior.
-#     (default: false) (since 9.1)
+#     The newer KVM_SEV_INIT2 interface, from Linux >= 6.10, syncs
+#     additional vCPU state when initializing the VMSA structures,
+#     which will result in a different guest measurement. Set
+#     this to 'on' to force compatibility with older QEMU or kernel
+#     versions that rely on legacy KVM_SEV_INIT behavior. 'auto'
+#     will behave identically to 'on', but will automatically
+#     switch to using KVM_SEV_INIT2 if the user specifies any
+#     additional options that require it. If set to 'off', QEMU
+#     will require KVM_SEV_INIT2 unconditionally.
+#     (default: off) (since 9.1)
 #
 # Since: 2.12
 ##
@@ -939,7 +943,7 @@
             '*session-file': 'str',
             '*policy': 'uint32',
             '*handle': 'uint32',
-            '*legacy-vm-type': 'bool' } }
+            '*legacy-vm-type': 'OnOffAuto' } }
 
 ##
 # @SevSnpGuestProperties:
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 4fbc5774708..c74931d577a 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -83,7 +83,7 @@ GlobalProperty pc_compat_9_0[] = {
     { TYPE_X86_CPU, "x-amd-topoext-features-only", "false" },
     { TYPE_X86_CPU, "x-l1-cache-per-thread", "false" },
     { TYPE_X86_CPU, "guest-phys-bits", "0" },
-    { "sev-guest", "legacy-vm-type", "true" },
+    { "sev-guest", "legacy-vm-type", "on" },
     { TYPE_X86_CPU, "legacy-multi-node", "on" },
 };
 const size_t pc_compat_9_0_len = G_N_ELEMENTS(pc_compat_9_0);
diff --git a/target/i386/sev.c b/target/i386/sev.c
index 2ba5f517228..a1157c0ede6 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -144,7 +144,7 @@ struct SevGuestState {
     uint32_t policy;
     char *dh_cert_file;
     char *session_file;
-    bool legacy_vm_type;
+    OnOffAuto legacy_vm_type;
 };
 
 struct SevSnpGuestState {
@@ -1369,6 +1369,17 @@ sev_vm_state_change(void *opaque, bool running, RunState state)
         }
     }
 
+/*
+ * This helper is to examine sev-guest properties and determine if any options
+ * have been set which rely on the newer KVM_SEV_INIT2 interface and associated
+ * KVM VM types.
+ */
+static bool sev_init2_required(SevGuestState *sev_guest)
+{
+    /* Currently no KVM_SEV_INIT2-specific options are exposed via QEMU */
+    return false;
+}
+
 static int sev_kvm_type(X86ConfidentialGuest *cg)
 {
     SevCommonState *sev_common = SEV_COMMON(cg);
@@ -1379,14 +1390,39 @@ static int sev_kvm_type(X86ConfidentialGuest *cg)
         goto out;
     }
 
-    kvm_type = (sev_guest->policy & SEV_POLICY_ES) ?
-               KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM;
-    if (kvm_is_vm_type_supported(kvm_type) && !sev_guest->legacy_vm_type) {
-        sev_common->kvm_type = kvm_type;
-    } else {
+    /* These are the only cases where legacy VM types can be used. */
+    if (sev_guest->legacy_vm_type == ON_OFF_AUTO_ON ||
+        (sev_guest->legacy_vm_type == ON_OFF_AUTO_AUTO &&
+         !sev_init2_required(sev_guest))) {
         sev_common->kvm_type = KVM_X86_DEFAULT_VM;
+        goto out;
     }
 
+    /*
+     * Newer VM types are required, either explicitly via legacy-vm-type=on, or
+     * implicitly via legacy-vm-type=auto along with additional sev-guest
+     * properties that require the newer VM types.
+     */
+    kvm_type = (sev_guest->policy & SEV_POLICY_ES) ?
+        KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM;
+    if (!kvm_is_vm_type_supported(kvm_type)) {
+        if (sev_guest->legacy_vm_type == ON_OFF_AUTO_AUTO) {
+            error_report("SEV: host kernel does not support requested %s VM type, which is required "
+                         "for the set of options specified. To allow use of the legacy "
+                         "KVM_X86_DEFAULT_VM VM type, please disable any options that are not "
+                         "compatible with the legacy VM type, or upgrade your kernel.",
+                         kvm_type == KVM_X86_SEV_VM ? "KVM_X86_SEV_VM" : "KVM_X86_SEV_ES_VM");
+        } else {
+            error_report("SEV: host kernel does not support requested %s VM type. To allow use of "
+                         "the legacy KVM_X86_DEFAULT_VM VM type, the 'legacy-vm-type' argument "
+                         "must be set to 'on' or 'auto' for the sev-guest object.",
"KVM_X86_SEV_VM" : "KVM_X86_SEV_ES_VM"); + } + + return -1; + } + + sev_common->kvm_type = kvm_type; out: return sev_common->kvm_type; } @@ -1477,14 +1513,24 @@ static int sev_common_kvm_init(ConfidentialGuestSupport *cgs, Error **errp) } trace_kvm_sev_init(); - if (x86_klass->kvm_type(X86_CONFIDENTIAL_GUEST(sev_common)) == KVM_X86_DEFAULT_VM) { + switch (x86_klass->kvm_type(X86_CONFIDENTIAL_GUEST(sev_common))) { + case KVM_X86_DEFAULT_VM: cmd = sev_es_enabled() ? KVM_SEV_ES_INIT : KVM_SEV_INIT; ret = sev_ioctl(sev_common->sev_fd, cmd, NULL, &fw_error); - } else { + break; + case KVM_X86_SEV_VM: + case KVM_X86_SEV_ES_VM: + case KVM_X86_SNP_VM: { struct kvm_sev_init args = { 0 }; ret = sev_ioctl(sev_common->sev_fd, KVM_SEV_INIT2, &args, &fw_error); + break; + } + default: + error_setg(errp, "%s: host kernel does not support the requested SEV configuration.", + __func__); + return -1; } if (ret) { @@ -2074,14 +2120,23 @@ sev_guest_set_session_file(Object *obj, const char *value, Error **errp) SEV_GUEST(obj)->session_file = g_strdup(value); } -static bool sev_guest_get_legacy_vm_type(Object *obj, Error **errp) +static void sev_guest_get_legacy_vm_type(Object *obj, Visitor *v, + const char *name, void *opaque, + Error **errp) { - return SEV_GUEST(obj)->legacy_vm_type; + SevGuestState *sev_guest = SEV_GUEST(obj); + OnOffAuto legacy_vm_type = sev_guest->legacy_vm_type; + + visit_type_OnOffAuto(v, name, &legacy_vm_type, errp); } -static void sev_guest_set_legacy_vm_type(Object *obj, bool value, Error **errp) +static void sev_guest_set_legacy_vm_type(Object *obj, Visitor *v, + const char *name, void *opaque, + Error **errp) { - SEV_GUEST(obj)->legacy_vm_type = value; + SevGuestState *sev_guest = SEV_GUEST(obj); + + visit_type_OnOffAuto(v, name, &sev_guest->legacy_vm_type, errp); } static void @@ -2107,9 +2162,9 @@ sev_guest_class_init(ObjectClass *oc, void *data) sev_guest_set_session_file); object_class_property_set_description(oc, "session-file", "guest owners session parameters (encoded with base64)"); - object_class_property_add_bool(oc, "legacy-vm-type", - sev_guest_get_legacy_vm_type, - sev_guest_set_legacy_vm_type); + object_class_property_add(oc, "legacy-vm-type", "OnOffAuto", + sev_guest_get_legacy_vm_type, + sev_guest_set_legacy_vm_type, NULL, NULL); object_class_property_set_description(oc, "legacy-vm-type", "use legacy VM type to maintain measurement compatibility with older QEMU or kernel versions."); } @@ -2125,6 +2180,8 @@ sev_guest_instance_init(Object *obj) object_property_add_uint32_ptr(obj, "policy", &sev_guest->policy, OBJ_PROP_FLAG_READWRITE); object_apply_compat_props(obj); + + sev_guest->legacy_vm_type = ON_OFF_AUTO_AUTO; } /* guest info specific sev/sev-es */ From patchwork Sun Jul 14 11:10:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Paolo Bonzini X-Patchwork-Id: 13732671 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 88B10C41513 for ; Sun, 14 Jul 2024 11:13:03 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sSx8c-0006Gy-9q; Sun, 14 Jul 2024 07:11:30 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps 
From patchwork Sun Jul 14 11:10:43 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13732671
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: Sergey Dyasli
Subject: [PULL 13/13] Revert "qemu-char: do not operate on sources from finalize callbacks"
Date: Sun, 14 Jul 2024 13:10:43 +0200
Message-ID: <20240714111043.14132-14-pbonzini@redhat.com>
In-Reply-To: <20240714111043.14132-1-pbonzini@redhat.com>
References: <20240714111043.14132-1-pbonzini@redhat.com>
From: Sergey Dyasli

This reverts commit 2b316774f60291f57ca9ecb6a9f0712c532cae34.

After 038b4217884c ("Revert "chardev: use a child source for qio input
source"") we've been observing the "iwp->src == NULL" assertion triggering
periodically during the initial capabilities querying by libvirtd. One
possible backtrace:

Thread 1 (Thread 0x7f16cd4f0700 (LWP 43858)):
 0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
 1  0x00007f16c6c21e65 in __GI_abort () at abort.c:79
 2  0x00007f16c6c21d39 in __assert_fail_base at assert.c:92
 3  0x00007f16c6c46e86 in __GI___assert_fail
    (assertion=assertion@entry=0x562e9bcdaadd "iwp->src == NULL",
    file=file@entry=0x562e9bcdaac8 "../chardev/char-io.c", line=line@entry=99,
    function=function@entry=0x562e9bcdab10 <__PRETTY_FUNCTION__.20549>
    "io_watch_poll_finalize") at assert.c:101
 4  0x0000562e9ba20c2c in io_watch_poll_finalize (source=) at ../chardev/char-io.c:99
 5  io_watch_poll_finalize (source=) at ../chardev/char-io.c:88
 6  0x00007f16c904aae0 in g_source_unref_internal () from /lib64/libglib-2.0.so.0
 7  0x00007f16c904baf9 in g_source_destroy_internal () from /lib64/libglib-2.0.so.0
 8  0x0000562e9ba20db0 in io_remove_watch_poll (source=0x562e9d6720b0) at ../chardev/char-io.c:147
 9  remove_fd_in_watch (chr=chr@entry=0x562e9d5f3800) at ../chardev/char-io.c:153
10  0x0000562e9ba23ffb in update_ioc_handlers (s=0x562e9d5f3800) at ../chardev/char-socket.c:592
11  0x0000562e9ba2072f in qemu_chr_fe_set_handlers_full at ../chardev/char-fe.c:279
12  0x0000562e9ba207a9 in qemu_chr_fe_set_handlers at ../chardev/char-fe.c:304
13  0x0000562e9ba2ca75 in monitor_qmp_setup_handlers_bh (opaque=0x562e9d4c2c60) at ../monitor/qmp.c:509
14  0x0000562e9bb6222e in aio_bh_poll (ctx=ctx@entry=0x562e9d4c2f20) at ../util/async.c:216
15  0x0000562e9bb4de0a in aio_poll (ctx=0x562e9d4c2f20, blocking=blocking@entry=true) at ../util/aio-posix.c:722
16  0x0000562e9b99dfaa in iothread_run (opaque=0x562e9d4c26f0) at ../iothread.c:63
17  0x0000562e9bb505a4 in qemu_thread_start (args=0x562e9d4c7ea0) at ../util/qemu-thread-posix.c:543
18  0x00007f16c70081ca in start_thread (arg=) at pthread_create.c:479
19  0x00007f16c6c398d3 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

io_remove_watch_poll(), which makes sure that iwp->src is NULL, calls
g_source_destroy(), which then finds that iwp->src is not NULL in the
finalize callback. This can only happen if another thread has managed to
trigger the io_watch_poll_prepare() callback in the meantime.

Move iwp->src destruction back to the finalize callback to prevent the
described race, and also remove the stale comment. The deadlock glib bug
was fixed back in 2010 by b35820285668 ("gmain: move finalization of
GSource outside of context lock").
Suggested-by: Paolo Bonzini
Signed-off-by: Sergey Dyasli
Link: https://lore.kernel.org/r/20240712092659.216206-1-sergey.dyasli@nutanix.com
Signed-off-by: Paolo Bonzini
---
 chardev/char-io.c | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/chardev/char-io.c b/chardev/char-io.c
index dab77b112e3..3be17b51ca5 100644
--- a/chardev/char-io.c
+++ b/chardev/char-io.c
@@ -87,16 +87,12 @@ static gboolean io_watch_poll_dispatch(GSource *source, GSourceFunc callback,
 
 static void io_watch_poll_finalize(GSource *source)
 {
-    /*
-     * Due to a glib bug, removing the last reference to a source
-     * inside a finalize callback causes recursive locking (and a
-     * deadlock). This is not a problem inside other callbacks,
-     * including dispatch callbacks, so we call io_remove_watch_poll
-     * to remove this source. At this point, iwp->src must
-     * be NULL, or we would leak it.
-     */
     IOWatchPoll *iwp = io_watch_poll_from_source(source);
-    assert(iwp->src == NULL);
+    if (iwp->src) {
+        g_source_destroy(iwp->src);
+        g_source_unref(iwp->src);
+        iwp->src = NULL;
+    }
 }
 
 static GSourceFuncs io_watch_poll_funcs = {
@@ -139,11 +135,6 @@ static void io_remove_watch_poll(GSource *source)
     IOWatchPoll *iwp;
 
     iwp = io_watch_poll_from_source(source);
-    if (iwp->src) {
-        g_source_destroy(iwp->src);
-        g_source_unref(iwp->src);
-        iwp->src = NULL;
-    }
     g_source_destroy(&iwp->parent);
 }
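To make the shape of the fix easier to see outside of QEMU, here is a minimal, self-contained sketch of the pattern the revert restores: a wrapper GSource that owns a child GSource and tears the child down in its finalize callback, so dropping the last reference cleans up the child regardless of which thread does it. This is only an illustration of the glib mechanics discussed in the commit message above, not QEMU code; the "wrapper"/"child" naming is invented.

/* Minimal illustration (not QEMU code) of destroying an owned child GSource
 * from the parent's finalize callback, as the revert above does for
 * IOWatchPoll::src.
 * Build with: gcc sketch.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>

typedef struct {
    GSource parent;   /* must be the first member of a custom source */
    GSource *child;   /* owned; stands in for the qio input source */
} WrapperSource;

static gboolean wrapper_prepare(GSource *source, gint *timeout)
{
    *timeout = -1;    /* never ready on its own; real code would poll something */
    return FALSE;
}

static gboolean wrapper_dispatch(GSource *source, GSourceFunc cb, gpointer data)
{
    return G_SOURCE_CONTINUE;
}

static void wrapper_finalize(GSource *source)
{
    WrapperSource *ws = (WrapperSource *)source;

    /* Runs when the last reference is dropped; since glib commit b35820285668
     * (2010) this no longer deadlocks, so the child can be cleaned up here. */
    if (ws->child) {
        g_source_destroy(ws->child);
        g_source_unref(ws->child);
        ws->child = NULL;
    }
}

static GSourceFuncs wrapper_funcs = {
    .prepare  = wrapper_prepare,
    .dispatch = wrapper_dispatch,
    .finalize = wrapper_finalize,
};

int main(void)
{
    WrapperSource *ws =
        (WrapperSource *)g_source_new(&wrapper_funcs, sizeof(WrapperSource));

    ws->child = g_idle_source_new();          /* any child source will do */
    g_source_attach(ws->child, NULL);
    g_source_attach(&ws->parent, NULL);

    /* Dropping our references triggers wrapper_finalize(), which destroys
     * the child even though no explicit "remove" helper was called first. */
    g_source_destroy(&ws->parent);
    g_source_unref(&ws->parent);
    return 0;
}

The design point the revert makes is visible here: because finalize owns the child's teardown, callers such as io_remove_watch_poll() no longer need to guarantee the child is already gone, which removes the window another thread could race into.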