
[2/5] x86/HVM: allocate emulation cache entries dynamically

Message ID e35606db-9050-485a-84fb-168f101b5d1c@suse.com (mailing list archive)
State Superseded
Series x86/HVM: emulation (MMIO) improvements

Commit Message

Jan Beulich Sept. 4, 2024, 1:29 p.m. UTC
Both caches may need higher capacity, and the upper bound will need to
be determined dynamically based on CPUID policy (for AMX at least).

While touching the check in hvmemul_phys_mmio_access() anyway, also
tighten it: To avoid overrunning the internal buffer we need to take the
offset into the buffer into account.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This is a patch taken from the AMX series, which was part of the v3
submission. All I did is strip out the actual AMX bits (from
hvmemul_cache_init()), plus of course change the description. As a
result some local variables there may look unnecessary, but this way
it's going to be less churn when the AMX bits are added. The next patch
pretty strongly depends on the changed approach (contextually, not so
much functionally), and I'd really like to avoid rebasing that one ahead
of this one, and then this one on top of that.

TBD: For AMX hvmemul_cache_init() will become CPUID policy dependent. We
     could of course take the opportunity and also reduce overhead when
     AVX-512 (and maybe even AVX) is unavailable (in the future: to the
     guest).

Comments

Andrew Cooper Sept. 6, 2024, 7:20 p.m. UTC | #1
On 04/09/2024 2:29 pm, Jan Beulich wrote:
> Both caches may need higher capacity, and the upper bound will need to
> be determined dynamically based on CPUID policy (for AMX at least).

Is this to cope with TILE{LOAD,STORE}, or something else?

It's not exactly clear, even when looking at prior AMX series.

> While touching the check in hvmemul_phys_mmio_access() anyway, also
> tighten it: To avoid overrunning the internal buffer we need to take the
> offset into the buffer into account.

Does this really want to be mixed with a prep patch ?

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> This is a patch taken from the AMX series, which was part of the v3
> submission. All I did is strip out the actual AMX bits (from
> hvmemul_cache_init()), plus of course change the description. As a
> result some local variables there may look unnecessary, but this way
> it's going to be less churn when the AMX bits are added. The next patch
> pretty strongly depends on the changed approach (contextually, not so
> much functionally), and I'd really like to avoid rebasing that one ahead
> of this one, and then this one on top of that.

Fine by me.

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -26,6 +26,18 @@
>  #include <asm/iocap.h>
>  #include <asm/vm_event.h>
>  
> +/*
> + * We may read or write up to m512 or up to a tile row as a number of
> + * device-model transactions.
> + */
> +struct hvm_mmio_cache {
> +    unsigned long gla;
> +    unsigned int size;
> +    unsigned int space:31;
> +    unsigned int dir:1;
> +    uint8_t buffer[] __aligned(sizeof(long));

I know this is a minor tangent, but you are turning a regular struct
into a flexible one.

Could we introduce __counted_by() and start using it here?

At the toolchain level, it lets the compiler understand the real size of
the object, so e.g. the sanitisers can spot out-of-bounds accesses
through the flexible member.

But, even in the short term, having

    /* TODO */
    # define __counted_by(member)

in compiler.h still leaves us with better code, because

    struct hvm_mmio_cache {
        unsigned long gla;
        unsigned int size;
        unsigned int space:31;
        unsigned int dir:1;
        uint8_t buffer[] __aligned(sizeof(long)) __counted_by(size);
    };

is explicitly clear in a case where the "space" field creates some
ambiguity.
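
(As an aside on what the stub could eventually expand to: recent Clang
implements a counted_by attribute for flexible array members, and GCC is
gaining equivalent support. A minimal, purely illustrative guard, not
Xen's actual compiler.h arrangement, might look like:

    #if defined(__has_attribute)
    # if __has_attribute(counted_by)
    #  define __counted_by(member) __attribute__((counted_by(member)))
    # endif
    #endif
    #ifndef __counted_by
    /* No toolchain support (yet): expand to nothing, as suggested above. */
    # define __counted_by(member)
    #endif

so existing toolchains compile the annotation away while newer ones gain
the bounds information.)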

> @@ -2978,16 +2991,21 @@ void hvm_dump_emulation_state(const char
>  int hvmemul_cache_init(struct vcpu *v)
>  {
>      /*
> -     * No insn can access more than 16 independent linear addresses (AVX512F
> -     * scatters/gathers being the worst). Each such linear range can span a
> -     * page boundary, i.e. may require two page walks. Account for each insn
> -     * byte individually, for simplicity.
> +     * AVX512F scatter/gather insns can access up to 16 independent linear
> +     * addresses, up to 8 bytes size. Each such linear range can span a page
> +     * boundary, i.e. may require two page walks.
> +     */
> +    unsigned int nents = 16 * 2 * (CONFIG_PAGING_LEVELS + 1);
> +    unsigned int i, max_bytes = 64;
> +    struct hvmemul_cache *cache;
> +
> +    /*
> +     * Account for each insn byte individually, both for simplicity and to
> +     * leave some slack space.
>       */

Hang on.  Do we seriously use a separate cache entry for each
instruction byte ?

~Andrew
Jan Beulich Sept. 9, 2024, 11:19 a.m. UTC | #2
On 06.09.2024 21:20, Andrew Cooper wrote:
> On 04/09/2024 2:29 pm, Jan Beulich wrote:
>> Both caches may need higher capacity, and the upper bound will need to
>> be determined dynamically based on CPUID policy (for AMX at least).
> 
> Is this to cope with TILE{LOAD,STORE}, or something else?

These two, yes.

> It's not exactly clear, even when looking at prior AMX series.

I've added mention of them.

>> While touching the check in hvmemul_phys_mmio_access() anyway, also
>> tighten it: To avoid overrunning the internal buffer we need to take the
>> offset into the buffer into account.
> 
> Does this really want to be mixed with a prep patch ?

Taking the offset into account becomes more important with subsequent
patches. If needed, I can certainly split this out into its own prereq
patch.
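
To put made-up numbers on the tightened check: with a 64-byte buffer, a
chunk of size 8 landing at buffer offset 60 passes the old test, yet
would write bytes 60..67, i.e. past the end, while the new test refuses
it:

    /* old check: 8 > 64 is false, the overrun goes unnoticed */
    if ( size > sizeof(cache->buffer) )

    /* new check: 60 + 8 > 64, the access is rejected */
    if ( offset + size > cache->space )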

>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -26,6 +26,18 @@
>>  #include <asm/iocap.h>
>>  #include <asm/vm_event.h>
>>  
>> +/*
>> + * We may read or write up to m512 or up to a tile row as a number of
>> + * device-model transactions.
>> + */
>> +struct hvm_mmio_cache {
>> +    unsigned long gla;
>> +    unsigned int size;
>> +    unsigned int space:31;
>> +    unsigned int dir:1;
>> +    uint8_t buffer[] __aligned(sizeof(long));
> 
> I know this is a minor tangent, but you are turning a regular struct
> into a flexible one.
> 
> Could we introduce __counted_by() and start using it here?
> 
> At the toolchain level, it lets the compiler understand the real size of
> the object, so e.g. the sanitisers can spot out-of-bounds accesses
> through the flexible member.
> 
> But, even in the short term, having
> 
>     /* TODO */
>     # define __counted_by(member)
> 
> in compiler.h still leaves us with better code, because
> 
>     struct hvm_mmio_cache {
>         unsigned long gla;
>         unsigned int size;
>         unsigned int space:31;
>         unsigned int dir:1;
>         uint8_t buffer[] __aligned(sizeof(long)) __counted_by(size);
>     };
> 
> is explicitly clear in a case where the "space" field creates some
> ambiguity.

Which raises a question here: Is it really "size" that you mean, not
"space"? It's the latter that describes the capacity after all.

As to __counted_by() (or counted_by()) introduction: While it seems
pretty orthogonal, I could of course add a prereq patch to introduce it.
Yet - even if I had it expand to nothing for now, what do you expect it
to expand to going forward? I've just gone through docs trying to find
something to match, yet with no success.

>> @@ -2978,16 +2991,21 @@ void hvm_dump_emulation_state(const char
>>  int hvmemul_cache_init(struct vcpu *v)
>>  {
>>      /*
>> -     * No insn can access more than 16 independent linear addresses (AVX512F
>> -     * scatters/gathers being the worst). Each such linear range can span a
>> -     * page boundary, i.e. may require two page walks. Account for each insn
>> -     * byte individually, for simplicity.
>> +     * AVX512F scatter/gather insns can access up to 16 independent linear
>> +     * addresses, up to 8 bytes size. Each such linear range can span a page
>> +     * boundary, i.e. may require two page walks.
>> +     */
>> +    unsigned int nents = 16 * 2 * (CONFIG_PAGING_LEVELS + 1);
>> +    unsigned int i, max_bytes = 64;
>> +    struct hvmemul_cache *cache;
>> +
>> +    /*
>> +     * Account for each insn byte individually, both for simplicity and to
>> +     * leave some slack space.
>>       */
> 
> Hang on.  Do we seriously use a separate cache entry for each
> instruction byte ?

We don't _use_ one, but we do set one up; that was already the case
prior to this patch, and the comment merely moves (plus is extended to
mention the slack space aspect). Also note that this is relevant only
for the other (read) cache, not the (MMIO) cache this series as a whole
is about.

Jan
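
For reference, the read cache sizing in hvmemul_cache_init() below works
out as follows with the usual 4 paging levels and MAX_INST_LEN of 15
(both values are assumed here rather than spelled out in the patch):

    16 * 2 * (CONFIG_PAGING_LEVELS + 1)        = 16 * 2 * 5 = 160
    MAX_INST_LEN * (CONFIG_PAGING_LEVELS + 1)  =     15 * 5 =  75
                                                     total  = 235 entries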

Patch

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -26,6 +26,18 @@ 
 #include <asm/iocap.h>
 #include <asm/vm_event.h>
 
+/*
+ * We may read or write up to m512 or up to a tile row as a number of
+ * device-model transactions.
+ */
+struct hvm_mmio_cache {
+    unsigned long gla;
+    unsigned int size;
+    unsigned int space:31;
+    unsigned int dir:1;
+    uint8_t buffer[] __aligned(sizeof(long));
+};
+
 struct hvmemul_cache
 {
     /* The cache is disabled as long as num_ents > max_ents. */
@@ -935,7 +947,7 @@  static int hvmemul_phys_mmio_access(
     }
 
     /* Accesses must not overflow the cache's buffer. */
-    if ( size > sizeof(cache->buffer) )
+    if ( offset + size > cache->space )
     {
         ASSERT_UNREACHABLE();
         return X86EMUL_UNHANDLEABLE;
@@ -1011,7 +1023,7 @@  static struct hvm_mmio_cache *hvmemul_fi
 
     for ( i = 0; i < hvio->mmio_cache_count; i ++ )
     {
-        cache = &hvio->mmio_cache[i];
+        cache = hvio->mmio_cache[i];
 
         if ( gla == cache->gla &&
              dir == cache->dir )
@@ -1027,10 +1039,11 @@  static struct hvm_mmio_cache *hvmemul_fi
 
     ++hvio->mmio_cache_count;
 
-    cache = &hvio->mmio_cache[i];
-    memset(cache, 0, sizeof (*cache));
+    cache = hvio->mmio_cache[i];
+    memset(cache->buffer, 0, cache->space);
 
     cache->gla = gla;
+    cache->size = 0;
     cache->dir = dir;
 
     return cache;
@@ -2978,16 +2991,21 @@  void hvm_dump_emulation_state(const char
 int hvmemul_cache_init(struct vcpu *v)
 {
     /*
-     * No insn can access more than 16 independent linear addresses (AVX512F
-     * scatters/gathers being the worst). Each such linear range can span a
-     * page boundary, i.e. may require two page walks. Account for each insn
-     * byte individually, for simplicity.
+     * AVX512F scatter/gather insns can access up to 16 independent linear
+     * addresses, up to 8 bytes size. Each such linear range can span a page
+     * boundary, i.e. may require two page walks.
+     */
+    unsigned int nents = 16 * 2 * (CONFIG_PAGING_LEVELS + 1);
+    unsigned int i, max_bytes = 64;
+    struct hvmemul_cache *cache;
+
+    /*
+     * Account for each insn byte individually, both for simplicity and to
+     * leave some slack space.
      */
-    const unsigned int nents = (CONFIG_PAGING_LEVELS + 1) *
-                               (MAX_INST_LEN + 16 * 2);
-    struct hvmemul_cache *cache = xmalloc_flex_struct(struct hvmemul_cache,
-                                                      ents, nents);
+    nents += MAX_INST_LEN * (CONFIG_PAGING_LEVELS + 1);
 
+    cache = xvmalloc_flex_struct(struct hvmemul_cache, ents, nents);
     if ( !cache )
         return -ENOMEM;
 
@@ -2997,6 +3015,15 @@  int hvmemul_cache_init(struct vcpu *v)
 
     v->arch.hvm.hvm_io.cache = cache;
 
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.hvm_io.mmio_cache); ++i )
+    {
+        v->arch.hvm.hvm_io.mmio_cache[i] =
+            xmalloc_flex_struct(struct hvm_mmio_cache, buffer, max_bytes);
+        if ( !v->arch.hvm.hvm_io.mmio_cache[i] )
+            return -ENOMEM;
+        v->arch.hvm.hvm_io.mmio_cache[i]->space = max_bytes;
+    }
+
     return 0;
 }
 
--- a/xen/arch/x86/include/asm/hvm/emulate.h
+++ b/xen/arch/x86/include/asm/hvm/emulate.h
@@ -15,6 +15,7 @@ 
 #include <xen/err.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
+#include <xen/xvmalloc.h>
 #include <asm/hvm/hvm.h>
 #include <asm/x86_emulate.h>
 
@@ -119,7 +120,11 @@  int hvmemul_do_pio_buffer(uint16_t port,
 int __must_check hvmemul_cache_init(struct vcpu *v);
 static inline void hvmemul_cache_destroy(struct vcpu *v)
 {
-    XFREE(v->arch.hvm.hvm_io.cache);
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.hvm_io.mmio_cache); ++i )
+        XFREE(v->arch.hvm.hvm_io.mmio_cache[i]);
+    XVFREE(v->arch.hvm.hvm_io.cache);
 }
 bool hvmemul_read_cache(const struct vcpu *v, paddr_t gpa,
                         void *buffer, unsigned int size);
--- a/xen/arch/x86/include/asm/hvm/vcpu.h
+++ b/xen/arch/x86/include/asm/hvm/vcpu.h
@@ -22,17 +22,6 @@  struct hvm_vcpu_asid {
     uint32_t asid;
 };
 
-/*
- * We may read or write up to m512 as a number of device-model
- * transactions.
- */
-struct hvm_mmio_cache {
-    unsigned long gla;
-    unsigned int size;
-    uint8_t dir;
-    uint8_t buffer[64] __aligned(sizeof(long));
-};
-
 struct hvm_vcpu_io {
     /*
      * HVM emulation:
@@ -48,7 +37,7 @@  struct hvm_vcpu_io {
      * We may need to handle up to 3 distinct memory accesses per
      * instruction.
      */
-    struct hvm_mmio_cache mmio_cache[3];
+    struct hvm_mmio_cache *mmio_cache[3];
     unsigned int mmio_cache_count;
 
     /* For retries we shouldn't re-fetch the instruction. */