
xen/list: Remove prefetching

Message ID: 20200114203545.8897-1-andrew.cooper3@citrix.com
State: New, archived
Series: xen/list: Remove prefetching

Commit Message

Andrew Cooper Jan. 14, 2020, 8:35 p.m. UTC
Xen inherited its list infrastructure from Linux.  One area where it has
fallen behind is prefetching, which, as it turns out, is a performance
penalty in most cases.

Prefetch of NULL on x86 is now widely measured to have glacial performance
properties, and will unconditionally hit on every hlist use due to the
termination condition.
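
For illustration, the prefetch in the pre-patch hlist iterator (see the
diff below) sits inside the loop condition, so every traversal of a
non-empty hlist ends with pos->next being NULL and hence a prefetch of
NULL:

  #define hlist_for_each(pos, head)                                       \
      for (pos = (head)->first; pos && ({ prefetch(pos->next); 1; });     \
           pos = pos->next)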

Cross-port the following Linux patches:

  75d65a425c (2011) "hlist: remove software prefetching in hlist iterators"
  e66eed651f (2011) "list: remove prefetching from regular list iterators"
  c0d15cc7ee (2013) "linked-list: Remove __list_for_each"

to Xen, which results in the following net diffstat on x86:

  add/remove: 0/1 grow/shrink: 27/83 up/down: 576/-1648 (-1072)

(The code additions come from a few now-inlined functions, and slightly
different basic block padding.)

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
---
 xen/include/xen/list.h | 46 +++++++++++++---------------------------------
 1 file changed, 13 insertions(+), 33 deletions(-)

Comments

Julien Grall Jan. 14, 2020, 8:58 p.m. UTC | #1
On 14/01/2020 20:35, Andrew Cooper wrote:
> Xen inherited its list infrastructure from Linux.  One area where it has
> fallen behind is prefetching, which, as it turns out, is a performance
> penalty in most cases.
> 
> Prefetch of NULL on x86 is now widely measured to have glacial performance
> properties, and will unconditionally hit on every hlist use due to the
> termination condition.
> 
> Cross-port the following Linux patches:
> 
>    75d65a425c (2011) "hlist: remove software prefetching in hlist iterators"
>    e66eed651f (2011) "list: remove prefetching from regular list iterators"
>    c0d15cc7ee (2013) "linked-list: Remove __list_for_each"
> 
> to Xen, which results in the following net diffstat on x86:
> 
>    add/remove: 0/1 grow/shrink: 27/83 up/down: 576/-1648 (-1072)
> 
> (The code additions come from a few now-inlined functions, and slightly
> different basic block padding.)
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <julien@xen.org>

Cheers,
Roger Pau Monné Jan. 15, 2020, 10:39 a.m. UTC | #2
On Tue, Jan 14, 2020 at 08:35:45PM +0000, Andrew Cooper wrote:
> Xen inherited its list infrastructure from Linux.  One area where it has
> fallen behind is prefetching, which, as it turns out, is a performance
> penalty in most cases.
> 
> Prefetch of NULL on x86 is now widely measured to have glacial performance
> properties, and will unconditionally hit on every hlist use due to the
> termination condition.
> 
> Cross-port the following Linux patches:
> 
>   75d65a425c (2011) "hlist: remove software prefetching in hlist iterators"
>   e66eed651f (2011) "list: remove prefetching from regular list iterators"
>   c0d15cc7ee (2013) "linked-list: Remove __list_for_each"
> 
> to Xen, which results in the following net diffstat on x86:
> 
>   add/remove: 0/1 grow/shrink: 27/83 up/down: 576/-1648 (-1072)
> 
> (The code additions come from a few now-inlined functions, and slightly
> different basic block padding.)
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Has this gone through some XenRT performance testing to assert there
are not regressions performance wise?

Thanks, Roger.
Jan Beulich Jan. 15, 2020, 11:17 a.m. UTC | #3
On 14.01.2020 21:35, Andrew Cooper wrote:
> Xen inherited its list infrastructure from Linux.  One area where it has
> fallen behind is prefetching, which, as it turns out, is a performance
> penalty in most cases.
> 
> Prefetch of NULL on x86 is now widely measured to have glacial performance
> properties, and will unconditionally hit on every hlist use due to the
> termination condition.
> 
> Cross-port the following Linux patches:
> 
>   75d65a425c (2011) "hlist: remove software prefetching in hlist iterators"
>   e66eed651f (2011) "list: remove prefetching from regular list iterators"
>   c0d15cc7ee (2013) "linked-list: Remove __list_for_each"

Just as an observation (not an objection), the 2nd of these says
"normally the downsides are bigger than the upsides", without making
it unambiguously clear what these supposed downsides are. I can
accept prefetches through NULL to be harmful. I can also accept
prefetches on single entry lists to not be very useful. But does
this also render them useless on long lists with not overly much
cache churn done by the body of the iteration loop? Wouldn't it
at least be worthwhile to have list_for_each_prefetch() retaining
prior behavior, and use it in places where prefetching can be
deemed to help?
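
Such a hypothetical list_for_each_prefetch() would presumably just
retain the pre-patch loop body, i.e.:

  #define list_for_each_prefetch(pos, head)                             \
      for (pos = (head)->next; prefetch(pos->next), pos != (head);      \
           pos = pos->next)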

Jan
Andrew Cooper Jan. 15, 2020, 11:25 a.m. UTC | #4
On 15/01/2020 10:39, Roger Pau Monné wrote:
> On Tue, Jan 14, 2020 at 08:35:45PM +0000, Andrew Cooper wrote:
>> Xen inherited its list infrastructure from Linux.  One area where it has
>> fallen behind is prefetching, which, as it turns out, is a performance
>> penalty in most cases.
>>
>> Prefetch of NULL on x86 is now widely measured to have glacial performance
>> properties, and will unconditionally hit on every hlist use due to the
>> termination condition.
>>
>> Cross-port the following Linux patches:
>>
>>   75d65a425c (2011) "hlist: remove software prefetching in hlist iterators"
>>   e66eed651f (2011) "list: remove prefetching from regular list iterators"
>>   c0d15cc7ee (2013) "linked-list: Remove __list_for_each"
>>
>> to Xen, which results in the following net diffstat on x86:
>>
>>   add/remove: 0/1 grow/shrink: 27/83 up/down: 576/-1648 (-1072)
>>
>> (The code additions come from a few now-inlined functions, and slightly
>> different basic block padding.)
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> Has this gone through some XenRT performance testing to assert there
> are not regressions performance wise?

No.  The Linux measurements are still valid observations.

~Andrew
Andrew Cooper Jan. 15, 2020, 12:40 p.m. UTC | #5
On 15/01/2020 11:17, Jan Beulich wrote:
> On 14.01.2020 21:35, Andrew Cooper wrote:
>> Xen inherited its list infrastructure from Linux.  One area where it has
>> fallen behind is prefetching, which, as it turns out, is a performance
>> penalty in most cases.
>>
>> Prefetch of NULL on x86 is now widely measured to have glacial performance
>> properties, and will unconditionally hit on every hlist use due to the
>> termination condition.
>>
>> Cross-port the following Linux patches:
>>
>>   75d65a425c (2011) "hlist: remove software prefetching in hlist iterators"
>>   e66eed651f (2011) "list: remove prefetching from regular list iterators"
>>   c0d15cc7ee (2013) "linked-list: Remove __list_for_each"
> Just as an observation (not an objection), the 2nd of these says
> "normally the downsides are bigger than the upsides", which makes
> it unbelievably clear what these supposed downsides are. I can
> accept prefetches through NULL to be harmful. I can also accept
> prefetches on single entry lists to not be very useful. But does
> this also render them useless on long lists with not overly much
> cache churn done by the body of the iteration loop?

Yes.

Prefetch is only useful when you're making an access which none of the
hardware prefetchers can predict, and when the costs (an extra
instruction, L1 cache perturbation, and tying up the pagewalker for a
while) are outweighed by the perf improvement from not stalling against
the access.

A programmer cannot figure this out by just looking at the C.  The
details are micro-architectural, and based on rare and unpredictable
data access patterns.  (Incorrectly) tying up the pagewalker early can
be far more detrimental to performance than letting forward speculation
pull the data in the next time micro-architectural resources are
available to do so.

> Wouldn't it
> at least be worthwhile to have list_for_each_prefetch() retaining
> prior behavior, and use it in places where prefetching can be
> deemed to help?

No, I don't think so.  The repetitive pattern of a loop is easy for
hardware to spot.

The cases where prefetching helps in practice are the one-off, totally
unpredictable accesses which are suddenly going to block all other
instructions in flight, *and* where you are not going to incur a TLB
miss in the short term.

This is why I made the prefetch() suggestion for your svm_load_segs()
code.  The memory operand is used once per context switch, so it is very
likely to have fallen out of the cache and TLB, and VMLOAD is
microcoded, so it is a stalling black box as far as forward speculation
goes.  As the code leading up to it operates in hot TLB mappings, the
pagewalker is free to complete the fill ahead of time.
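
As a minimal sketch of that pattern (the function and the VMLOAD wrapper
below are illustrative, not the actual svm_load_segs() code):

  #include <xen/prefetch.h>

  void load_host_state(struct vmcb_struct *vmcb)  /* illustrative only */
  {
      /*
       * One-off access: the VMCB is touched once per context switch, so
       * it has likely fallen out of the cache and TLB.  Starting the
       * pagewalk now lets it complete while the code below runs in hot
       * mappings.
       */
      prefetch(vmcb);

      /* ... other context switch work in hot TLB mappings ... */

      svm_vmload(vmcb);  /* microcoded; opaque to forward speculation */
  }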

There are cases where prefetch() really makes a difference, but they are
rare and the hardware vendors have already optimised the common data
access patterns in programs.

It is also highly telling that in nearly a decade, Linux still hasn't
found a case warranting the re-introduction of prefetches on the loop
entry metadata.

Of course, if someone does find a case, we can reconsider, but I doubt
it will ever come up, and misuse of such a list iterator can easily do
more harm than good.

~Andrew

Patch

diff --git a/xen/include/xen/list.h b/xen/include/xen/list.h
index 1387abb211..dc5a8c461b 100644
--- a/xen/include/xen/list.h
+++ b/xen/include/xen/list.h
@@ -42,9 +42,6 @@  struct list_head {
 #define LIST_HEAD_READ_MOSTLY(name) \
     struct list_head __read_mostly name = LIST_HEAD_INIT(name)
 
-/* Do not move this ahead of the struct list_head definition! */
-#include <xen/prefetch.h>
-
 static inline void INIT_LIST_HEAD(struct list_head *list)
 {
     list->next = list;
@@ -455,20 +452,6 @@  static inline void list_splice_init(struct list_head *list,
  * @head:    the head for your list.
  */
 #define list_for_each(pos, head)                                        \
-    for (pos = (head)->next; prefetch(pos->next), pos != (head);        \
-         pos = pos->next)
-
-/**
- * __list_for_each - iterate over a list
- * @pos:    the &struct list_head to use as a loop cursor.
- * @head:   the head for your list.
- *
- * This variant differs from list_for_each() in that it's the
- * simplest possible list iteration code, no prefetching is done.
- * Use this for code that knows the list to be very short (empty
- * or 1 entry) most of the time.
- */
-#define __list_for_each(pos, head)                              \
     for (pos = (head)->next; pos != (head); pos = pos->next)
 
 /**
@@ -477,8 +460,7 @@  static inline void list_splice_init(struct list_head *list,
  * @head:   the head for your list.
  */
 #define list_for_each_prev(pos, head)                                   \
-    for (pos = (head)->prev; prefetch(pos->prev), pos != (head);        \
-         pos = pos->prev)
+    for (pos = (head)->prev; pos != (head); pos = pos->prev)
 
 /**
  * list_for_each_safe - iterate over a list safe against removal of list entry
@@ -509,7 +491,7 @@  static inline void list_splice_init(struct list_head *list,
  */
 #define list_for_each_entry(pos, head, member)                          \
     for (pos = list_entry((head)->next, typeof(*pos), member);          \
-         prefetch(pos->member.next), &pos->member != (head);            \
+         &pos->member != (head);                                        \
          pos = list_entry(pos->member.next, typeof(*pos), member))
 
 /**
@@ -520,7 +502,7 @@  static inline void list_splice_init(struct list_head *list,
  */
 #define list_for_each_entry_reverse(pos, head, member)                  \
     for (pos = list_entry((head)->prev, typeof(*pos), member);          \
-         prefetch(pos->member.prev), &pos->member != (head);            \
+         &pos->member != (head);                                        \
          pos = list_entry(pos->member.prev, typeof(*pos), member))
 
 /**
@@ -547,7 +529,7 @@  static inline void list_splice_init(struct list_head *list,
  */
 #define list_for_each_entry_continue(pos, head, member)                 \
     for (pos = list_entry(pos->member.next, typeof(*pos), member);      \
-         prefetch(pos->member.next), &pos->member != (head);            \
+         &pos->member != (head);                                        \
          pos = list_entry(pos->member.next, typeof(*pos), member))
 
 /**
@@ -560,7 +542,7 @@  static inline void list_splice_init(struct list_head *list,
  * Iterate over list of given type, continuing from current position.
  */
 #define list_for_each_entry_from(pos, head, member)                     \
-    for (; prefetch(pos->member.next), &pos->member != (head);          \
+    for (; &pos->member != (head);                                      \
          pos = list_entry(pos->member.next, typeof(*pos), member))
 
 /**
@@ -635,7 +617,7 @@  static inline void list_splice_init(struct list_head *list,
  */
 #define list_for_each_rcu(pos, head)                            \
     for (pos = (head)->next;                                    \
-         prefetch(rcu_dereference(pos)->next), pos != (head);   \
+         rcu_dereference(pos) != (head);                        \
          pos = pos->next)
 
 #define __list_for_each_rcu(pos, head)          \
@@ -672,8 +654,7 @@  static inline void list_splice_init(struct list_head *list,
  */
 #define list_for_each_entry_rcu(pos, head, member)                      \
     for (pos = list_entry((head)->next, typeof(*pos), member);          \
-         prefetch(rcu_dereference(pos)->member.next),                   \
-         &pos->member != (head);                                        \
+         &rcu_dereference(pos)->member != (head);                       \
          pos = list_entry(pos->member.next, typeof(*pos), member))
 
 /**
@@ -689,7 +670,7 @@  static inline void list_splice_init(struct list_head *list,
  */
 #define list_for_each_continue_rcu(pos, head)                           \
     for ((pos) = (pos)->next;                                           \
-         prefetch(rcu_dereference((pos))->next), (pos) != (head);       \
+         rcu_dereference(pos) != (head);                                \
          (pos) = (pos)->next)
 
 /*
@@ -918,8 +899,7 @@  static inline void hlist_add_after_rcu(struct hlist_node *prev,
 #define hlist_entry(ptr, type, member) container_of(ptr,type,member)
 
 #define hlist_for_each(pos, head)                                       \
-    for (pos = (head)->first; pos && ({ prefetch(pos->next); 1; });     \
-         pos = pos->next)
+    for (pos = (head)->first; pos; pos = pos->next)
 
 #define hlist_for_each_safe(pos, n, head)                       \
     for (pos = (head)->first; pos && ({ n = pos->next; 1; });   \
@@ -934,7 +914,7 @@  static inline void hlist_add_after_rcu(struct hlist_node *prev,
  */
 #define hlist_for_each_entry(tpos, pos, head, member)                   \
     for (pos = (head)->first;                                           \
-         pos && ({ prefetch(pos->next); 1;}) &&                         \
+         pos &&                                                         \
          ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;});       \
          pos = pos->next)
 
@@ -947,7 +927,7 @@  static inline void hlist_add_after_rcu(struct hlist_node *prev,
  */
 #define hlist_for_each_entry_continue(tpos, pos, member)                \
     for (pos = (pos)->next;                                             \
-         pos && ({ prefetch(pos->next); 1;}) &&                         \
+         pos &&                                                         \
          ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;});       \
          pos = pos->next)
 
@@ -959,7 +939,7 @@  static inline void hlist_add_after_rcu(struct hlist_node *prev,
  * @member:    the name of the hlist_node within the struct.
  */
 #define hlist_for_each_entry_from(tpos, pos, member)                    \
-    for (; pos && ({ prefetch(pos->next); 1;}) &&                       \
+    for (; pos &&                                                       \
          ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;});       \
          pos = pos->next)
 
@@ -992,7 +972,7 @@  static inline void hlist_add_after_rcu(struct hlist_node *prev,
  */
 #define hlist_for_each_entry_rcu(tpos, pos, head, member)               \
      for (pos = (head)->first;                                          \
-          rcu_dereference(pos) && ({ prefetch(pos->next); 1;}) &&       \
+          rcu_dereference(pos) &&                                       \
           ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;});      \
           pos = pos->next)
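
For reference, callers of these iterators are unaffected; a hypothetical
post-patch traversal reads exactly as before, just without the hidden
prefetch in the loop condition:

  struct item { int val; struct list_head list; };

  struct item *pos;

  list_for_each_entry ( pos, &some_list, list )
      consume(pos);   /* no prefetch(pos->list.next) is issued any more */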