From patchwork Wed Sep 4 13:27:26 2024
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 13790859
Subject: [PATCH 1/5] x86/HVM: reduce recursion in linear_{read,write}()
From: Jan Beulich <jbeulich@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Roger Pau Monné
Date: Wed, 4 Sep 2024 15:27:26 +0200
In-Reply-To: <31906cba-8646-4cf9-ab31-1d23654df8d1@suse.com>

Let's make explicit what the compiler may or may not do on our behalf: the
second of the two recursive invocations in each function can simply fall
through rather than re-invoking the function. This also saves us from adding
yet another parameter (or more) to the function, just for the recursive
invocations.

Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
---
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1146,7 +1146,7 @@ static int linear_read(unsigned long add
     pagefault_info_t pfinfo;
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
     unsigned int offset = addr & ~PAGE_MASK;
-    int rc = HVMTRANS_bad_gfn_to_mfn;
+    int rc;
 
     if ( offset + bytes > PAGE_SIZE )
     {
@@ -1154,12 +1154,16 @@ static int linear_read(unsigned long add
 
         /* Split the access at the page boundary. */
         rc = linear_read(addr, part1, p_data, pfec, hvmemul_ctxt);
-        if ( rc == X86EMUL_OKAY )
-            rc = linear_read(addr + part1, bytes - part1, p_data + part1,
-                             pfec, hvmemul_ctxt);
-        return rc;
+        if ( rc != X86EMUL_OKAY )
+            return rc;
+
+        addr += part1;
+        bytes -= part1;
+        p_data += part1;
     }
 
+    rc = HVMTRANS_bad_gfn_to_mfn;
+
     /*
      * If there is an MMIO cache entry for the access then we must be re-issuing
      * an access that was previously handled as MMIO. Thus it is imperative that
@@ -1201,7 +1205,7 @@ static int linear_write(unsigned long ad
     pagefault_info_t pfinfo;
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
     unsigned int offset = addr & ~PAGE_MASK;
-    int rc = HVMTRANS_bad_gfn_to_mfn;
+    int rc;
 
     if ( offset + bytes > PAGE_SIZE )
     {
@@ -1209,12 +1213,16 @@ static int linear_write(unsigned long ad
 
         /* Split the access at the page boundary. */
         rc = linear_write(addr, part1, p_data, pfec, hvmemul_ctxt);
-        if ( rc == X86EMUL_OKAY )
-            rc = linear_write(addr + part1, bytes - part1, p_data + part1,
-                              pfec, hvmemul_ctxt);
-        return rc;
+        if ( rc != X86EMUL_OKAY )
+            return rc;
+
+        addr += part1;
+        bytes -= part1;
+        p_data += part1;
     }
 
+    rc = HVMTRANS_bad_gfn_to_mfn;
+
     /*
      * If there is an MMIO cache entry for the access then we must be re-issuing
      * an access that was previously handled as MMIO. Thus it is imperative that
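For reference, a minimal standalone sketch of the fall-through pattern used
above; read_bytes() and CHUNK are hypothetical stand-ins, not the Xen code
(which operates on PAGE_SIZE and returns X86EMUL_* values):

#include <stdint.h>

#define CHUNK 4096u /* stand-in for PAGE_SIZE */

/*
 * Only the leading part of a split access recurses; the trailing part
 * falls through into the common path after adjusting the parameters.
 */
static int read_bytes(unsigned long addr, unsigned int bytes, uint8_t *buf)
{
    unsigned int offset = addr & (CHUNK - 1);

    if ( offset + bytes > CHUNK )
    {
        unsigned int part1 = CHUNK - offset;
        int rc = read_bytes(addr, part1, buf);

        if ( rc != 0 )
            return rc;

        /* Fall through for the remainder instead of a 2nd recursive call. */
        addr += part1;
        bytes -= part1;
        buf += part1;
    }

    /* ... common handling of the single-chunk range [addr, addr + bytes) ... */
    (void)addr;
    (void)buf;
    return 0;
}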

From patchwork Wed Sep 4 13:29:02 2024
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 13790860
Subject: [PATCH 2/5] x86/HVM: allocate emulation cache entries dynamically
From: Jan Beulich <jbeulich@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Roger Pau Monné
Date: Wed, 4 Sep 2024 15:29:02 +0200
In-Reply-To: <31906cba-8646-4cf9-ab31-1d23654df8d1@suse.com>

Both caches may need higher capacity, and the upper bound will need to be
determined dynamically based on CPUID policy (for AMX at least).

While touching the check in hvmemul_phys_mmio_access() anyway, also tighten
it: to avoid overrunning the internal buffer we need to take the offset into
the buffer into account.

Signed-off-by: Jan Beulich
---
This is a patch taken from the AMX series, which was part of the v3
submission. All I did was strip out the actual AMX bits (from
hvmemul_cache_init()), plus of course change the description. As a result
some local variables there may look unnecessary, but this way there will be
less churn when the AMX bits are added. The next patch pretty strongly
depends on the changed approach (contextually, not so much functionally),
and I'd really like to avoid rebasing that one ahead of this one, and then
this one on top of that.

TBD: For AMX hvmemul_cache_init() will become CPUID policy dependent. We
could of course take the opportunity and also reduce overhead when AVX-512
(and maybe even AVX) is unavailable (in the future: to the guest).

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -26,6 +26,18 @@
 #include
 #include
 
+/*
+ * We may read or write up to m512 or up to a tile row as a number of
+ * device-model transactions.
+ */
+struct hvm_mmio_cache {
+    unsigned long gla;
+    unsigned int size;
+    unsigned int space:31;
+    unsigned int dir:1;
+    uint8_t buffer[] __aligned(sizeof(long));
+};
+
 struct hvmemul_cache
 {
     /* The cache is disabled as long as num_ents > max_ents. */
@@ -935,7 +947,7 @@ static int hvmemul_phys_mmio_access(
     }
 
     /* Accesses must not overflow the cache's buffer. */
-    if ( size > sizeof(cache->buffer) )
+    if ( offset + size > cache->space )
     {
         ASSERT_UNREACHABLE();
         return X86EMUL_UNHANDLEABLE;
@@ -1011,7 +1023,7 @@ static struct hvm_mmio_cache *hvmemul_fi
 
     for ( i = 0; i < hvio->mmio_cache_count; i ++ )
     {
-        cache = &hvio->mmio_cache[i];
+        cache = hvio->mmio_cache[i];
 
         if ( gla == cache->gla &&
              dir == cache->dir )
@@ -1027,10 +1039,11 @@ static struct hvm_mmio_cache *hvmemul_fi
 
     ++hvio->mmio_cache_count;
 
-    cache = &hvio->mmio_cache[i];
-    memset(cache, 0, sizeof (*cache));
+    cache = hvio->mmio_cache[i];
+    memset(cache->buffer, 0, cache->space);
 
     cache->gla = gla;
+    cache->size = 0;
     cache->dir = dir;
 
     return cache;
@@ -2978,16 +2991,21 @@ void hvm_dump_emulation_state(const char
 int hvmemul_cache_init(struct vcpu *v)
 {
     /*
-     * No insn can access more than 16 independent linear addresses (AVX512F
-     * scatters/gathers being the worst). Each such linear range can span a
-     * page boundary, i.e. may require two page walks. Account for each insn
-     * byte individually, for simplicity.
+     * AVX512F scatter/gather insns can access up to 16 independent linear
+     * addresses, up to 8 bytes size. Each such linear range can span a page
+     * boundary, i.e. may require two page walks.
+     */
+    unsigned int nents = 16 * 2 * (CONFIG_PAGING_LEVELS + 1);
+    unsigned int i, max_bytes = 64;
+    struct hvmemul_cache *cache;
+
+    /*
+     * Account for each insn byte individually, both for simplicity and to
+     * leave some slack space.
      */
-    const unsigned int nents = (CONFIG_PAGING_LEVELS + 1) *
-                               (MAX_INST_LEN + 16 * 2);
-    struct hvmemul_cache *cache = xmalloc_flex_struct(struct hvmemul_cache,
-                                                      ents, nents);
+    nents += MAX_INST_LEN * (CONFIG_PAGING_LEVELS + 1);
+    cache = xvmalloc_flex_struct(struct hvmemul_cache, ents, nents);
 
     if ( !cache )
         return -ENOMEM;
@@ -2997,6 +3015,15 @@ int hvmemul_cache_init(struct vcpu *v)
 
     v->arch.hvm.hvm_io.cache = cache;
 
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.hvm_io.mmio_cache); ++i )
+    {
+        v->arch.hvm.hvm_io.mmio_cache[i] =
+            xmalloc_flex_struct(struct hvm_mmio_cache, buffer, max_bytes);
+        if ( !v->arch.hvm.hvm_io.mmio_cache[i] )
+            return -ENOMEM;
+        v->arch.hvm.hvm_io.mmio_cache[i]->space = max_bytes;
+    }
+
     return 0;
 }
--- a/xen/arch/x86/include/asm/hvm/emulate.h
+++ b/xen/arch/x86/include/asm/hvm/emulate.h
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -119,7 +120,11 @@ int hvmemul_do_pio_buffer(uint16_t port,
 int __must_check hvmemul_cache_init(struct vcpu *v);
 static inline void hvmemul_cache_destroy(struct vcpu *v)
 {
-    XFREE(v->arch.hvm.hvm_io.cache);
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.hvm_io.mmio_cache); ++i )
+        XFREE(v->arch.hvm.hvm_io.mmio_cache[i]);
+    XVFREE(v->arch.hvm.hvm_io.cache);
 }
 bool hvmemul_read_cache(const struct vcpu *v, paddr_t gpa, void *buffer,
                         unsigned int size);
--- a/xen/arch/x86/include/asm/hvm/vcpu.h
+++ b/xen/arch/x86/include/asm/hvm/vcpu.h
@@ -22,17 +22,6 @@ struct hvm_vcpu_asid {
     uint32_t asid;
 };
 
-/*
- * We may read or write up to m512 as a number of device-model
- * transactions.
- */
-struct hvm_mmio_cache {
-    unsigned long gla;
-    unsigned int size;
-    uint8_t dir;
-    uint8_t buffer[64] __aligned(sizeof(long));
-};
-
 struct hvm_vcpu_io {
     /*
      * HVM emulation:
@@ -48,7 +37,7 @@ struct hvm_vcpu_io {
      * We may need to handle up to 3 distinct memory accesses per
      * instruction.
      */
-    struct hvm_mmio_cache mmio_cache[3];
+    struct hvm_mmio_cache *mmio_cache[3];
     unsigned int mmio_cache_count;
 
     /* For retries we shouldn't re-fetch the instruction. */
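As an aside, what an xmalloc_flex_struct()-style allocation of such a
flexible-array struct amounts to can be sketched in plain C as follows
(simplified stand-in struct and standard calloc(); Xen of course uses its
own allocators and the real struct hvm_mmio_cache):

#include <stdlib.h>
#include <stddef.h>

/* Simplified stand-in for struct hvm_mmio_cache. */
struct mmio_cache {
    unsigned long gla;
    unsigned int size;
    unsigned int space;      /* capacity of buffer[] */
    unsigned char buffer[];  /* flexible array member */
};

/*
 * One allocation covering the header plus 'bytes' of trailing buffer,
 * with the capacity recorded so bounds checks can make use of it.
 */
static struct mmio_cache *cache_alloc(unsigned int bytes)
{
    struct mmio_cache *c =
        calloc(1, offsetof(struct mmio_cache, buffer) + bytes);

    if ( c )
        c->space = bytes;
    return c;
}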

From patchwork Wed Sep 4 13:29:40 2024
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 13790861
Subject: [PATCH 3/5] x86/HVM: correct read/write split at page boundaries
From: Jan Beulich <jbeulich@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Roger Pau Monné, Manuel Andreas
Date: Wed, 4 Sep 2024 15:29:40 +0200
In-Reply-To: <31906cba-8646-4cf9-ab31-1d23654df8d1@suse.com>

The MMIO cache is intended to have one entry used per independent memory
access that an insn does. This, in particular, is supposed to ignore any
page boundary crossing. Therefore when looking up a cache entry, the
access's starting (linear) address is relevant, not the one possibly
advanced past a page boundary.

In order for the same offset-into-buffer variable to be usable in
hvmemul_phys_mmio_access() for both the caller's buffer and the cache
entry's, it is further necessary to pass the un-adjusted caller buffer in.

Fixes: 2d527ba310dc ("x86/hvm: split all linear reads and writes at page boundary")
Reported-by: Manuel Andreas
Signed-off-by: Jan Beulich
---
This way problematic overlaps are only reduced (to ones starting at the
same address), not eliminated: assumptions in hvmemul_phys_mmio_access() go
further; if a subsequent access is larger than an earlier one, but the
splitting results in a chunk crossing the end "boundary" of the earlier
access, an assertion will still trigger. Explicit memory accesses (ones
encoded in an insn by explicit or implicit memory operands) match the
assumption afaict (i.e. all those accesses are of uniform size, and hence
they either fully overlap or are mapped to distinct cache entries). Insns
accessing descriptor tables, otoh, don't fulfill these expectations: the
selector read (if coming from memory) will always be smaller than the
descriptor being read, and if both (insanely) start at the same linear
address (in turn mapping MMIO), said assertion will kick in. (The same
would be true for an insn trying to access itself as data, as long as
certain size "restrictions" between insn and memory operand are met. Except
that linear_read() disallows insn fetches from MMIO.)

To deal with such cases, I expect we will need to further qualify (tag)
cache entries, such that reads/writes won't use insn fetch entries, and
implicit-supervisor-mode accesses won't use entries of ordinary accesses.
(Page table accesses don't need considering here for now, as our page
walking code demands page tables to be mappable, implying they're in guest
RAM; such accesses also don't take the path here.) Thoughts anyone, before
I get to making another patch?

Considering the insn fetch aspect mentioned above, I'm having trouble
following why the cache has 3 entries. With insn fetches permitted,
descriptor table accesses where the accessed bit needs setting may also
fail because of the cache's limited capacity, due to the way the accesses
are done: the read and write (cmpxchg) are independent accesses from the
cache's perspective, and hence we'd need another entry there. If, otoh, the
3 entries are there to account for precisely this (which seems unlikely,
with commit e101123463d2 ["x86/hvm: track large memory mapped accesses by
buffer offset"] not saying anything at all), then we should be fine in this
regard. If we were to permit insn fetches, which way to overcome this
(possibly by allowing the write to re-use the earlier read's entry in this
special situation) would remain to be determined.

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -31,8 +31,9 @@
  * device-model transactions.
  */
 struct hvm_mmio_cache {
-    unsigned long gla;
-    unsigned int size;
+    unsigned long gla;    /* Start of original access (e.g. insn operand) */
+    unsigned int skip;    /* Offset to start of MMIO */
+    unsigned int size;    /* Populated space, including @skip */
     unsigned int space:31;
     unsigned int dir:1;
     uint8_t buffer[] __aligned(sizeof(long));
@@ -953,6 +954,13 @@ static int hvmemul_phys_mmio_access(
         return X86EMUL_UNHANDLEABLE;
     }
 
+    /* Accesses must not be to the unused leading space. */
+    if ( offset < cache->skip )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
     /*
      * hvmemul_do_io() cannot handle non-power-of-2 accesses or
      * accesses larger than sizeof(long), so choose the highest power
@@ -1010,13 +1018,15 @@ static int hvmemul_phys_mmio_access(
 
 /*
  * Multi-cycle MMIO handling is based upon the assumption that emulation
- * of the same instruction will not access the same MMIO region more
- * than once. Hence we can deal with re-emulation (for secondary or
- * subsequent cycles) by looking up the result or previous I/O in a
- * cache indexed by linear MMIO address.
+ * of the same instruction will not access the exact same MMIO region
+ * more than once in exactly the same way (if it does, the accesses will
+ * be "folded"). Hence we can deal with re-emulation (for secondary or
+ * subsequent cycles) by looking up the result of previous I/O in a cache
+ * indexed by linear address and access type.
 */
 static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
-    struct hvm_vcpu_io *hvio, unsigned long gla, uint8_t dir, bool create)
+    struct hvm_vcpu_io *hvio, unsigned long gla, uint8_t dir,
+    unsigned int skip)
 {
     unsigned int i;
     struct hvm_mmio_cache *cache;
@@ -1030,7 +1040,11 @@ static struct hvm_mmio_cache *hvmemul_fi
             return cache;
     }
 
-    if ( !create )
+    /*
+     * Bail if a new entry shouldn't be allocated, utilizing that ->space has
+     * the same value for all entries.
+     */
+    if ( skip >= hvio->mmio_cache[0]->space )
         return NULL;
 
     i = hvio->mmio_cache_count;
@@ -1043,7 +1057,8 @@ static struct hvm_mmio_cache *hvmemul_fi
     memset(cache->buffer, 0, cache->space);
 
     cache->gla = gla;
-    cache->size = 0;
+    cache->skip = skip;
+    cache->size = skip;
     cache->dir = dir;
 
     return cache;
@@ -1064,12 +1079,14 @@ static void latch_linear_to_phys(struct
 
 static int hvmemul_linear_mmio_access(
     unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
-    uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool known_gpfn)
+    uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt,
+    unsigned long start, bool known_gpfn)
 {
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
-    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(hvio, gla, dir, true);
-    unsigned int chunk, buffer_offset = 0;
+    unsigned int chunk, buffer_offset = gla - start;
+    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(hvio, start, dir,
+                                                           buffer_offset);
     paddr_t gpa;
     unsigned long one_rep = 1;
     int rc;
@@ -1117,19 +1134,19 @@ static int hvmemul_linear_mmio_access(
 static inline int hvmemul_linear_mmio_read(
     unsigned long gla, unsigned int size, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt,
-    bool translate)
+    unsigned long start, bool translate)
 {
     return hvmemul_linear_mmio_access(gla, size, IOREQ_READ, buffer,
-                                      pfec, hvmemul_ctxt, translate);
+                                      pfec, hvmemul_ctxt, start, translate);
 }
 
 static inline int hvmemul_linear_mmio_write(
     unsigned long gla, unsigned int size, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt,
-    bool translate)
+    unsigned long start, bool translate)
 {
     return hvmemul_linear_mmio_access(gla, size, IOREQ_WRITE, buffer,
-                                      pfec, hvmemul_ctxt, translate);
+                                      pfec, hvmemul_ctxt, start, translate);
 }
 
 static bool known_gla(unsigned long addr, unsigned int bytes, uint32_t pfec)
@@ -1158,7 +1175,10 @@ static int linear_read(unsigned long add
 {
     pagefault_info_t pfinfo;
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
+    void *buffer = p_data;
+    unsigned long start = addr;
     unsigned int offset = addr & ~PAGE_MASK;
+    const struct hvm_mmio_cache *cache;
     int rc;
 
     if ( offset + bytes > PAGE_SIZE )
@@ -1182,8 +1202,17 @@ static int linear_read(unsigned long add
      * an access that was previously handled as MMIO. Thus it is imperative that
      * we handle this access in the same way to guarantee completion and hence
      * clean up any interim state.
+     *
+     * Care must be taken, however, to correctly deal with crossing RAM/MMIO or
+     * MMIO/RAM boundaries. While we want to use a single cache entry (tagged
+     * by the starting linear address), we need to continue issuing (i.e. also
+     * upon replay) the RAM access for anything that's ahead of or past MMIO,
+     * i.e. in RAM.
     */
-    if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_READ, false) )
+    cache = hvmemul_find_mmio_cache(hvio, start, IOREQ_READ, ~0);
+    if ( !cache ||
+         addr + bytes <= start + cache->skip ||
+         addr >= start + cache->size )
         rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
 
     switch ( rc )
@@ -1199,8 +1228,8 @@ static int linear_read(unsigned long add
         if ( pfec & PFEC_insn_fetch )
             return X86EMUL_UNHANDLEABLE;
 
-        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
-                                        hvmemul_ctxt,
+        return hvmemul_linear_mmio_read(addr, bytes, buffer, pfec,
+                                        hvmemul_ctxt, start,
                                         known_gla(addr, bytes, pfec));
 
     case HVMTRANS_gfn_paged_out:
@@ -1217,7 +1246,10 @@ static int linear_write(unsigned long ad
 {
     pagefault_info_t pfinfo;
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
+    void *buffer = p_data;
+    unsigned long start = addr;
     unsigned int offset = addr & ~PAGE_MASK;
+    const struct hvm_mmio_cache *cache;
     int rc;
 
     if ( offset + bytes > PAGE_SIZE )
@@ -1236,13 +1268,11 @@ static int linear_write(unsigned long ad
 
     rc = HVMTRANS_bad_gfn_to_mfn;
 
-    /*
-     * If there is an MMIO cache entry for the access then we must be re-issuing
-     * an access that was previously handled as MMIO. Thus it is imperative that
-     * we handle this access in the same way to guarantee completion and hence
-     * clean up any interim state.
-     */
-    if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_WRITE, false) )
+    /* See commentary in linear_read(). */
+    cache = hvmemul_find_mmio_cache(hvio, start, IOREQ_WRITE, ~0);
+    if ( !cache ||
+         addr + bytes <= start + cache->skip ||
+         addr >= start + cache->size )
         rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
 
     switch ( rc )
@@ -1255,8 +1285,8 @@ static int linear_write(unsigned long ad
         return X86EMUL_EXCEPTION;
 
     case HVMTRANS_bad_gfn_to_mfn:
-        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
-                                         hvmemul_ctxt,
+        return hvmemul_linear_mmio_write(addr, bytes, buffer, pfec,
+                                         hvmemul_ctxt, start,
                                          known_gla(addr, bytes, pfec));
 
     case HVMTRANS_gfn_paged_out:
@@ -1643,7 +1673,7 @@ static int cf_check hvmemul_cmpxchg(
     {
         /* Fix this in case the guest is really relying on r-m-w atomicity. */
         return hvmemul_linear_mmio_write(addr, bytes, p_new, pfec,
-                                         hvmemul_ctxt,
+                                         hvmemul_ctxt, addr,
                                          hvio->mmio_access.write_access &&
                                          hvio->mmio_gla == (addr & PAGE_MASK));
     }
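The new replay condition in linear_read()/linear_write() can be read as an
interval-overlap test; a sketch, with parameter names mirroring the patch
(illustration only, not the Xen functions):

#include <stdbool.h>

/*
 * The RAM copy is (re)issued exactly when this predicate is false, i.e.
 * when [addr, addr + bytes) does not overlap the cached MMIO window
 * [start + skip, start + size) of the original access.
 */
static bool hits_cached_mmio(unsigned long addr, unsigned int bytes,
                             unsigned long start, unsigned int skip,
                             unsigned int size)
{
    return addr + bytes > start + skip && addr < start + size;
}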

From patchwork Wed Sep 4 13:29:59 2024
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 13790862
Subject: [PATCH 4/5] x86/HVM: slightly improve CMPXCHG16B emulation
From: Jan Beulich <jbeulich@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Roger Pau Monné
Date: Wed, 4 Sep 2024 15:29:59 +0200
In-Reply-To: <31906cba-8646-4cf9-ab31-1d23654df8d1@suse.com>

Using hvmemul_linear_mmio_write() directly (as fallback when mapping the
memory operand isn't possible) won't work properly when the access crosses
a RAM/MMIO boundary. Use linear_write() instead, which splits at such
boundaries as necessary.

Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
---
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1645,10 +1645,8 @@ static int cf_check hvmemul_cmpxchg(
 {
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
-    struct vcpu *curr = current;
     unsigned long addr;
     uint32_t pfec = PFEC_page_present | PFEC_write_access;
-    struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io;
     int rc;
     void *mapping = NULL;
 
@@ -1672,10 +1670,7 @@ static int cf_check hvmemul_cmpxchg(
     if ( !mapping )
     {
         /* Fix this in case the guest is really relying on r-m-w atomicity. */
-        return hvmemul_linear_mmio_write(addr, bytes, p_new, pfec,
-                                         hvmemul_ctxt, addr,
-                                         hvio->mmio_access.write_access &&
-                                         hvio->mmio_gla == (addr & PAGE_MASK));
+        return linear_write(addr, bytes, p_new, pfec, hvmemul_ctxt);
     }
 
     switch ( bytes )

From patchwork Wed Sep 4 13:30:24 2024
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 13790863
Subject: [PATCH 5/5] x86/HVM: drop redundant access splitting
From: Jan Beulich <jbeulich@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Roger Pau Monné
Date: Wed, 4 Sep 2024 15:30:24 +0200
In-Reply-To: <31906cba-8646-4cf9-ab31-1d23654df8d1@suse.com>

With all paths into hvmemul_linear_mmio_access() coming through
linear_{read,write}(), there's no longer any need to split accesses at page
boundaries there. Leave an assertion, though.

Signed-off-by: Jan Beulich
---
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1084,7 +1084,7 @@ static int hvmemul_linear_mmio_access(
 {
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
-    unsigned int chunk, buffer_offset = gla - start;
+    unsigned int buffer_offset = gla - start;
     struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(hvio, start, dir,
                                                            buffer_offset);
     paddr_t gpa;
@@ -1094,13 +1094,13 @@ static int hvmemul_linear_mmio_access(
     if ( cache == NULL )
         return X86EMUL_UNHANDLEABLE;
 
-    chunk = min_t(unsigned int, size, PAGE_SIZE - offset);
+    ASSERT(size <= PAGE_SIZE - offset);
 
     if ( known_gpfn )
         gpa = pfn_to_paddr(hvio->mmio_gpfn) | offset;
     else
     {
-        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
+        rc = hvmemul_linear_to_phys(gla, &gpa, size, &one_rep, pfec,
                                     hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
             return rc;
@@ -1108,27 +1108,8 @@ static int hvmemul_linear_mmio_access(
         latch_linear_to_phys(hvio, gla, gpa, dir == IOREQ_WRITE);
     }
 
-    for ( ;; )
-    {
-        rc = hvmemul_phys_mmio_access(cache, gpa, chunk, dir, buffer, buffer_offset);
-        if ( rc != X86EMUL_OKAY )
-            break;
-
-        gla += chunk;
-        buffer_offset += chunk;
-        size -= chunk;
-
-        if ( size == 0 )
-            break;
-
-        chunk = min_t(unsigned int, size, PAGE_SIZE);
-        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
-                                    hvmemul_ctxt);
-        if ( rc != X86EMUL_OKAY )
-            return rc;
-    }
-
-    return rc;
+    return hvmemul_phys_mmio_access(cache, gpa, size, dir, buffer,
+                                    buffer_offset);
 }
 
 static inline int hvmemul_linear_mmio_read(
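A sketch of the invariant the new ASSERT() encodes, assuming callers split
at the first page boundary and no access exceeds a page to begin with
(hypothetical helpers, not the Xen functions):

#include <assert.h>

#define PAGE_SIZE 4096u

static void handle_one_page(unsigned long gla, unsigned int size)
{
    unsigned int offset = gla & (PAGE_SIZE - 1);

    assert(size <= PAGE_SIZE - offset); /* what the patch asserts */
    /* ... single-page MMIO handling ... */
}

static void split_then_handle(unsigned long gla, unsigned int size)
{
    unsigned int offset = gla & (PAGE_SIZE - 1);

    assert(size <= PAGE_SIZE); /* assumption on the incoming access */
    if ( offset + size > PAGE_SIZE )
    {
        unsigned int part1 = PAGE_SIZE - offset;

        handle_one_page(gla, part1);
        gla += part1;
        size -= part1;
    }
    handle_one_page(gla, size); /* remainder starts page-aligned */
}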