[5/5] x86/HVM: drop redundant access splitting

Message ID bcaab3fd-2dca-4504-ad4b-830bc8dcf923@suse.com
State Superseded
Series x86/HVM: emulation (MMIO) improvements

Commit Message

Jan Beulich Sept. 4, 2024, 1:30 p.m. UTC
With all paths into hvmemul_linear_mmio_access() coming through
linear_{read,write}(), which already split accesses at page boundaries,
there is no longer any need to do such splitting in this function as
well. Leave an assertion, though.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
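
For context, the caller-side splitting the description relies on looks
roughly like the sketch below. This is a simplified illustration rather
than the literal Xen code: the signature is paraphrased, error handling
is trimmed, and the actual single-page read path is elided.

    /*
     * Sketch of the page-boundary splitting done by linear_read()
     * (linear_write() is symmetric).  Simplified; not the literal code.
     */
    static int linear_read(unsigned long addr, unsigned int bytes,
                           void *p_data, uint32_t pfec,
                           struct hvm_emulate_ctxt *hvmemul_ctxt)
    {
        unsigned int offset = addr & ~PAGE_MASK;

        if ( offset + bytes > PAGE_SIZE )
        {
            unsigned int part1 = PAGE_SIZE - offset;
            int rc;

            /* Split the access at the page boundary and recurse. */
            rc = linear_read(addr, part1, p_data, pfec, hvmemul_ctxt);
            if ( rc == X86EMUL_OKAY )
                rc = linear_read(addr + part1, bytes - part1,
                                 p_data + part1, pfec, hvmemul_ctxt);
            return rc;
        }

        /*
         * Single-page access: the actual read happens here.  Any MMIO
         * handling reaching hvmemul_linear_mmio_access() from this
         * point is guaranteed to stay within one page.
         */
        return X86EMUL_OKAY;
    }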

Comments

Andrew Cooper Sept. 6, 2024, 6:06 p.m. UTC | #1
On 04/09/2024 2:30 pm, Jan Beulich wrote:
> @@ -1094,13 +1094,13 @@ static int hvmemul_linear_mmio_access(
>      if ( cache == NULL )
>          return X86EMUL_UNHANDLEABLE;
>  
> -    chunk = min_t(unsigned int, size, PAGE_SIZE - offset);
> +    ASSERT(size <= PAGE_SIZE - offset);

Do we really want a plain assert, or should we go with

    if ( size > PAGE_SIZE - offset )
    {
        /* Callers should have arranged not to cross a page boundary */
        ASSERT_UNREACHABLE();
        return X86EMUL_UNHANDLEABLE;
    }

This is hardly a fastpath, and it's rather safer.

~Andrew
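
The suggestion above is the usual "assert, but still fail safely"
idiom: a debug build catches the broken caller loudly, while a release
build, where assertions compile out, still refuses the access rather
than running past the page. A minimal standalone illustration, using
plain C assert() as a stand-in for Xen's ASSERT_UNREACHABLE() and
hypothetical EMUL_* codes in place of the real X86EMUL_* ones:

    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* Hypothetical stand-ins for the real X86EMUL_* return codes. */
    #define EMUL_OKAY          0
    #define EMUL_UNHANDLEABLE  1

    static int check_within_page(unsigned int offset, unsigned int size)
    {
        if ( size > PAGE_SIZE - offset )
        {
            /* Callers should have arranged not to cross a page boundary. */
            assert(!"unreachable");   /* fires in debug builds only */
            return EMUL_UNHANDLEABLE; /* still guards NDEBUG builds */
        }
        return EMUL_OKAY;
    }

    int main(void)
    {
        /* An 8-byte access at offset 4092 would cross the page boundary:
         * a debug build aborts here, an NDEBUG build prints 1. */
        printf("%d\n", check_within_page(4092, 8));
        printf("%d\n", check_within_page(4088, 8)); /* fits: prints 0 */
        return 0;
    }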
Jan Beulich Sept. 9, 2024, 11:54 a.m. UTC | #2
On 06.09.2024 20:06, Andrew Cooper wrote:
> On 04/09/2024 2:30 pm, Jan Beulich wrote:
>> @@ -1094,13 +1094,13 @@ static int hvmemul_linear_mmio_access(
>>      if ( cache == NULL )
>>          return X86EMUL_UNHANDLEABLE;
>>  
>> -    chunk = min_t(unsigned int, size, PAGE_SIZE - offset);
>> +    ASSERT(size <= PAGE_SIZE - offset);
> 
> Do we really want a plain assert, or should we go with
> 
>     if ( size > PAGE_SIZE - offset )
>     {
>         /* Callers should have arranged not to cross a page boundary */
>         ASSERT_UNREACHABLE();
>         return X86EMUL_UNHANDLEABLE;
>     }
> 
> This is hardly a fastpath, and it's rather safer.

I can switch, sure, yet to be honest even the plain assertion already
felt a little like going too far, considering how obviously all callers
guarantee this. The only reason I decided to add one is the remaining
concern that, at some point, single memory operands might exceed
PAGE_SIZE. Yet nothing comes anywhere near that right now: whole AMX
tiles are "only" 1k (16 rows of 64 bytes each, well below the 4k page
size), and tile rows / columns are restricted even further. Of course,
if and when we add XSAVE/XRSTOR emulation ...

Jan

Patch

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1084,7 +1084,7 @@ static int hvmemul_linear_mmio_access(
 {
     struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
-    unsigned int chunk, buffer_offset = gla - start;
+    unsigned int buffer_offset = gla - start;
     struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(hvio, start, dir,
                                                            buffer_offset);
     paddr_t gpa;
@@ -1094,13 +1094,13 @@ static int hvmemul_linear_mmio_access(
     if ( cache == NULL )
         return X86EMUL_UNHANDLEABLE;
 
-    chunk = min_t(unsigned int, size, PAGE_SIZE - offset);
+    ASSERT(size <= PAGE_SIZE - offset);
 
     if ( known_gpfn )
         gpa = pfn_to_paddr(hvio->mmio_gpfn) | offset;
     else
     {
-        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
+        rc = hvmemul_linear_to_phys(gla, &gpa, size, &one_rep, pfec,
                                     hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
             return rc;
@@ -1108,27 +1108,8 @@ static int hvmemul_linear_mmio_access(
         latch_linear_to_phys(hvio, gla, gpa, dir == IOREQ_WRITE);
     }
 
-    for ( ;; )
-    {
-        rc = hvmemul_phys_mmio_access(cache, gpa, chunk, dir, buffer, buffer_offset);
-        if ( rc != X86EMUL_OKAY )
-            break;
-
-        gla += chunk;
-        buffer_offset += chunk;
-        size -= chunk;
-
-        if ( size == 0 )
-            break;
-
-        chunk = min_t(unsigned int, size, PAGE_SIZE);
-        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
-                                    hvmemul_ctxt);
-        if ( rc != X86EMUL_OKAY )
-            return rc;
-    }
-
-    return rc;
+    return hvmemul_phys_mmio_access(cache, gpa, size, dir, buffer,
+                                    buffer_offset);
 }
 
 static inline int hvmemul_linear_mmio_read(
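
For reference, with the hunks above applied hvmemul_linear_mmio_access()
ends up looking roughly as follows. The body is reconstructed from the
diff; the parameter list and the declarations of one_rep and rc are
assumptions based on how they are used here and may not match the tree
verbatim.

    static int hvmemul_linear_mmio_access(
        unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt,
        unsigned long start, bool known_gpfn)
    {
        struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
        unsigned long offset = gla & ~PAGE_MASK;
        unsigned int buffer_offset = gla - start;
        struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(hvio, start, dir,
                                                               buffer_offset);
        paddr_t gpa;
        unsigned long one_rep = 1;
        int rc;

        if ( cache == NULL )
            return X86EMUL_UNHANDLEABLE;

        /* Callers (linear_{read,write}()) no longer hand us page-crossing
         * accesses; the splitting loop that used to live below is gone. */
        ASSERT(size <= PAGE_SIZE - offset);

        if ( known_gpfn )
            gpa = pfn_to_paddr(hvio->mmio_gpfn) | offset;
        else
        {
            rc = hvmemul_linear_to_phys(gla, &gpa, size, &one_rep, pfec,
                                        hvmemul_ctxt);
            if ( rc != X86EMUL_OKAY )
                return rc;

            latch_linear_to_phys(hvio, gla, gpa, dir == IOREQ_WRITE);
        }

        return hvmemul_phys_mmio_access(cache, gpa, size, dir, buffer,
                                        buffer_offset);
    }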