@@ -81,7 +81,17 @@
...
struct percpu_ref {
- atomic_long_t count;
...
+ atomic_long_wrap_t count;
The way it works (before and after our patch) is that the count is
updated non-atomically. This means that, before all the percpu refs are
summed, the value can be off in either direction, but by no more than
the actual "true" value of the counter. To prevent the counter from
prematurely reaching zero, a bias (defined in lib/percpu-refcount.c) is
used to offset the range from [MIN,MAX] to [1,MAX]+[MIN,-1] (with "zero"
in the middle, as far from 0 as possible).
https://github.com/ereshetova/linux-stable/commit/af44298668d12bf79f48e14396568e9f29ca4bef#diff-be7e4fe901ed6a9d5292276fef233468R34
The problem, then, is that if the atomic is protected it cannot wrap
(and zero is already offset next to the "wrap-barrier", so it is practically