[RFC,v2,2/2] utils: Add prefetch for Thunderx platform

Message ID 1471348968-4614-3-git-send-email-vijay.kilari@gmail.com (mailing list archive)
State New, archived

Commit Message

Vijay Kilari Aug. 16, 2016, 12:02 p.m. UTC
From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>

The ThunderX pass 2 chip requires explicit
prefetch instructions to provide prefetch hints.

To speed up live migration on the ThunderX platform,
a prefetch instruction is added to the zero-buffer
check function.

The results below show the live-migration time
improvement from the prefetch instruction with 1K
and 4K page sizes. A VM with 4 VCPUs and 8GB RAM
is migrated.

1K page size, no prefetch

Comments

Richard Henderson Aug. 16, 2016, 6:02 p.m. UTC | #1
On 08/16/2016 05:02 AM, vijay.kilari@gmail.com wrote:
> +static inline void prefetch_vector_loop(const VECTYPE *p, int index)
> +{
> +#if defined(__aarch64__)
> +    if (is_thunderx_pass2_cpu()) {
> +        /* Prefetch 4 cache lines ahead from index */
> +        VEC_PREFETCH(p, index + (BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR * 4));
> +    }
> +#endif
> +}

Oh come now.  This is even worse than before.  A function call protecting a 
mere prefetch within the main body of an inner loop?

Did you not understand what I was asking for?


r~
Vijay Kilari Aug. 16, 2016, 11:45 p.m. UTC | #2
On Tue, Aug 16, 2016 at 11:32 PM, Richard Henderson <rth@twiddle.net> wrote:
> On 08/16/2016 05:02 AM, vijay.kilari@gmail.com wrote:
>>
>> +static inline void prefetch_vector_loop(const VECTYPE *p, int index)
>> +{
>> +#if defined(__aarch64__)
>> +    if (is_thunderx_pass2_cpu()) {
>> +        /* Prefetch 4 cache lines ahead from index */
>> +        VEC_PREFETCH(p, index + (BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR
>> * 4));
>> +    }
>> +#endif
>> +}
>
>
> Oh come now.  This is even worse than before.  A function call protecting a
> mere prefetch within the main body of an inner loop?
>
> Did you not understand what I was asking for?

No. Could you please detail the problem?

>
>
> r~
Richard Henderson Aug. 17, 2016, 3:34 p.m. UTC | #3
On 08/16/2016 04:45 PM, Vijay Kilari wrote:
> On Tue, Aug 16, 2016 at 11:32 PM, Richard Henderson <rth@twiddle.net> wrote:
>> On 08/16/2016 05:02 AM, vijay.kilari@gmail.com wrote:
>>>
>>> +static inline void prefetch_vector_loop(const VECTYPE *p, int index)
>>> +{
>>> +#if defined(__aarch64__)
>>> +    if (is_thunderx_pass2_cpu()) {
>>> +        /* Prefetch 4 cache lines ahead from index */
>>> +        VEC_PREFETCH(p, index + (BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR
>>> * 4));
>>> +    }
>>> +#endif
>>> +}
>>
>>
>> Oh come now.  This is even worse than before.  A function call protecting a
>> mere prefetch within the main body of an inner loop?
>>
>> Did you not understand what I was asking for?
>
> No. Could you please detail the problem?

The thunderx check, *if it even needs to exist at all*, must happen outside the 
loop.  Preferably not more than once, at startup time.

I strongly suspect that you do not need any check at all.  That even for cpus 
which automatically detect the streaming loop, adding a prefetch will not hurt.

You should repeat your same benchmark, with and without the prefetch, on (1) an 
A57 or suchlike, and (2) an x86 of some variety.


r~
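What Henderson asks for can be sketched roughly as follows. This is a hypothetical illustration, not part of the posted patch: the CPU capability is resolved once at startup into a flag, and the branch is hoisted out of the hot loop so no per-iteration function call or CPU-ID check remains.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Resolved once at startup, e.g. from a one-time CPU-ID probe;
 * hardcoded here for illustration. */
static bool use_prefetch;

static void init_prefetch(void)
{
#if defined(__aarch64__)
    use_prefetch = true;
#else
    use_prefetch = false;
#endif
}

/* Count zero words in a buffer.  The prefetch decision is a single
 * branch outside the loop, selecting between two loop bodies. */
static size_t count_zero_words(const uint64_t *p, size_t nwords)
{
    size_t zeros = 0;
    size_t i;

    if (use_prefetch) {
        for (i = 0; i < nwords; i++) {
            /* Prefetch well ahead; guarded to stay within the buffer. */
            if (i + 32 < nwords) {
                __builtin_prefetch(&p[i + 32], 0, 0);
            }
            zeros += (p[i] == 0);
        }
    } else {
        for (i = 0; i < nwords; i++) {
            zeros += (p[i] == 0);
        }
    }
    return zeros;
}
```

On CPUs whose hardware prefetcher already detects the streaming pattern, the extra prefetch is expected to be harmless, which is why the benchmarks Henderson requests (A57, x86) matter.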
Patch

1K page size, no prefetch
=========================
Migration status: completed
total time: 13012 milliseconds
downtime: 10 milliseconds
setup: 15 milliseconds
transferred ram: 268131 kbytes
throughput: 168.84 mbps
remaining ram: 0 kbytes
total ram: 8519872 kbytes
duplicate: 8338072 pages
skipped: 0 pages
normal: 193335 pages
normal bytes: 193335 kbytes
dirty sync count: 4

1K page size with prefetch
=========================
Migration status: completed
total time: 7493 milliseconds
downtime: 71 milliseconds
setup: 16 milliseconds
transferred ram: 269666 kbytes
throughput: 294.88 mbps
remaining ram: 0 kbytes
total ram: 8519872 kbytes
duplicate: 8340596 pages
skipped: 0 pages
normal: 194837 pages
normal bytes: 194837 kbytes
dirty sync count: 3

4K page size with no prefetch
=============================
Migration status: completed
total time: 10456 milliseconds
downtime: 49 milliseconds
setup: 5 milliseconds
transferred ram: 231726 kbytes
throughput: 181.59 mbps
remaining ram: 0 kbytes
total ram: 8519872 kbytes
duplicate: 2079914 pages
skipped: 0 pages
normal: 53257 pages
normal bytes: 213028 kbytes
dirty sync count: 3

4K page size with prefetch
==========================
Migration status: completed
total time: 3937 milliseconds
downtime: 23 milliseconds
setup: 5 milliseconds
transferred ram: 229283 kbytes
throughput: 477.19 mbps
remaining ram: 0 kbytes
total ram: 8519872 kbytes
duplicate: 2079775 pages
skipped: 0 pages
normal: 52648 pages
normal bytes: 210592 kbytes
dirty sync count: 3

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
---
 util/cutils.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/util/cutils.c b/util/cutils.c
index 7505fda..342d1e3 100644
--- a/util/cutils.c
+++ b/util/cutils.c
@@ -186,11 +186,14 @@  int qemu_fdatasync(int fd)
 #define VEC_OR(v1, v2) (_mm_or_si128(v1, v2))
 #elif defined(__aarch64__)
 #include "arm_neon.h"
+#include "qemu/aarch64-cpuid.h"
 #define VECTYPE        uint64x2_t
 #define ALL_EQ(v1, v2) \
         ((vgetq_lane_u64(v1, 0) == vgetq_lane_u64(v2, 0)) && \
          (vgetq_lane_u64(v1, 1) == vgetq_lane_u64(v2, 1)))
 #define VEC_OR(v1, v2) ((v1) | (v2))
+#define VEC_PREFETCH(base, index) \
+        __builtin_prefetch(&base[index], 0, 0);
 #else
 #define VECTYPE        unsigned long
 #define SPLAT(p)       (*(p) * (~0UL / 255))
@@ -200,6 +203,29 @@  int qemu_fdatasync(int fd)
 
 #define BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR 8
 
+static inline void prefetch_vector(const VECTYPE *p, int index)
+{
+#if defined(__aarch64__)
+    get_aarch64_cpu_id();
+    if (is_thunderx_pass2_cpu()) {
+        /* Prefetch first 3 cache lines */
+        VEC_PREFETCH(p, index + BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR);
+        VEC_PREFETCH(p, index + (BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR * 2));
+        VEC_PREFETCH(p, index + (BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR * 3));
+    }
+#endif
+}
+
+static inline void prefetch_vector_loop(const VECTYPE *p, int index)
+{
+#if defined(__aarch64__)
+    if (is_thunderx_pass2_cpu()) {
+        /* Prefetch 4 cache lines ahead from index */
+        VEC_PREFETCH(p, index + (BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR * 4));
+    }
+#endif
+}
+
 static bool
 can_use_buffer_find_nonzero_offset_inner(const void *buf, size_t len)
 {
@@ -246,9 +272,14 @@  static size_t buffer_find_nonzero_offset_inner(const void *buf, size_t len)
         }
     }
 
+    prefetch_vector(p, 0);
+
     for (i = BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR;
          i < len / sizeof(VECTYPE);
          i += BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR) {
+
+        prefetch_vector_loop(p, i);
+
         VECTYPE tmp0 = VEC_OR(p[i + 0], p[i + 1]);
         VECTYPE tmp1 = VEC_OR(p[i + 2], p[i + 3]);
         VECTYPE tmp2 = VEC_OR(p[i + 4], p[i + 5]);
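For reference, the `VEC_PREFETCH` macro in the patch wraps GCC's `__builtin_prefetch(addr, rw, locality)`: `rw = 0` means the data will only be read, and `locality = 0` means the data has no temporal locality (streamed once, so the cache need not retain it). A minimal standalone sketch of the same usage pattern, independent of the patch:

```c
#include <stddef.h>
#include <stdint.h>

/* Sum a buffer that is streamed once, prefetching a few cache lines
 * ahead.  rw = 0: read-only access; locality = 0: streaming data that
 * need not stay cached (the same hints the VEC_PREFETCH macro uses). */
static uint64_t sum_streaming(const uint64_t *p, size_t n)
{
    uint64_t sum = 0;

    for (size_t i = 0; i < n; i++) {
        /* Guarded so the prefetch address stays within the buffer. */
        if (i + 16 < n) {
            __builtin_prefetch(&p[i + 16], 0, 0);
        }
        sum += p[i];
    }
    return sum;
}
```

The prefetch distance (16 words here, 4 unrolled iterations in the patch) is a tuning parameter: far enough ahead to hide memory latency, not so far that lines are evicted before use.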