compiler bug gcc4.6/4.7 with ACCESS_ONCE and workarounds

Message ID CA+55aFyEigSL64RJH8AO86gFBBB84+dgk7eEUPx=CaLwJMcO_w@mail.gmail.com (mailing list archive)
State New, archived

Commit Message

Linus Torvalds Nov. 10, 2014, 9:07 p.m. UTC
On Mon, Nov 10, 2014 at 12:18 PM, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
> Now: I can reproduce the below miscompile on gcc 4.6 and gcc 4.7;
> gcc 4.5 seems OK, gcc 4.8 is fixed.  This makes blacklisting
> a bit hard, especially since it is not limited to s390, but
> covers all architectures.
> In essence, ACCESS_ONCE will not work reliably on aggregate
> types with gcc 4.6 and gcc 4.7.
> In Linux we seem to use ACCESS_ONCE mostly on scalar types;
> the code below is an example where we don't - and break.

Hmm. I think we should see how painful it would be to make it a rule
that ACCESS_ONCE() only works on scalar types.

Even in the actual code you show as an example, the "fix" is really to
use the "unsigned long val" member of the union for the ACCESS_ONCE().
And that seems to be true in many other cases too.

So before blacklisting any compilers, let's first see if

 (a) we can actually make it a real rule that we only use ACCESS_ONCE on scalars
 (b) we can somehow enforce this with a compiler warning/error for mis-uses

For example, the attached patch works for some cases, but shows how we
use ACCESS_ONCE() on pointers to pte_t's etc, so it doesn't come even
close to compiling the whole kernel. But I wonder how painful that
would be to change.. The places where it complains are actually
somewhat debatable to begin with, like:

 - handle_pte_fault(.. pte_t *pte ..):

        entry = ACCESS_ONCE(*pte);

and the thing is, "pte" is actually possibly an 8-byte entity on
x86-32, and that ACCESS_ONCE() fundamentally will be two 32-bit
reads.
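
For illustration, a minimal user-space sketch (names invented here) of
why such an 8-byte read tears into two loads on a 32-bit target:

#include <stdint.h>

/* stand-in for an 8-byte pte as seen on x86-32 with PAE */
volatile uint64_t fake_pte;

uint64_t read_entry(void)
{
	/*
	 * On a 32-bit target this single C-level read is compiled into
	 * two 32-bit loads. If another CPU updates fake_pte between
	 * them, we observe a torn half-old/half-new value - volatile
	 * does not help with that.
	 */
	return fake_pte;
}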

So there is a very valid argument for saying "well, you shouldn't do
that, then", and that we might be better off cleaning up our
ACCESS_ONCE() uses, than to just blindly blacklist compilers.

NOTE! I'm not at all advocating the attached patch. I'm sending it out
white-space damaged on purpose, it's more of a "hey, something like
this might be the direction we want to go in", with the spinlock.h
part of the patch also acting as an example of the kind of changes the
"ACCESS_ONCE() only works on scalars" rule would require.

So I do agree with Heiko that we generally don't want to work around
compiler bugs if we can avoid it. But sometimes the compiler bugs do
end up saying "you're doing something very fragile". Maybe we should
try to be less fragile here.

And in your example, the whole

        old = ACCESS_ONCE(*ic);

*could* just be a

        old.val = ACCESS_ONCE(ic->val);

the same way the x86 spinlock.h changes below.

I did *not* try to see how many other cases we have. It's possible
that your "don't use ACCESS_ONCE, use a barrier() instead" ends up
being a valid workaround. For the pte case, that may well be the
solution, for example (because what we really care about is not so
much "it's an atomic access" but "it's still the same that we
originally assumed").  Sometimes we want ACCESS_ONCE() because we
really want an atomic value (and we just don't care if it's old or
new), but sometimes it's really because we don't want the compiler to
re-load it and possibly see two different values - one that we check,
and one that we actually use (and then a barrier() would generally be
perfectly sufficient)
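
To sketch that second case (hypothetical code, with barrier() spelled
out the way the kernel defines it):

#define barrier() asm volatile("" ::: "memory") /* compiler barrier */

extern unsigned int state;      /* updated concurrently elsewhere */
void use(unsigned int v);

void consumer(void)
{
	unsigned int snap = state;  /* one plain load into a local */
	barrier();  /* memory clobber: the compiler must assume 'state'
	             * has changed, so it cannot re-derive 'snap' by
	             * simply re-reading 'state' later */
	if (snap == 1)
		use(snap);  /* checks and uses the very same value */
}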

Adding some more people to the discussion just to see if anybody else
has comments about ACCESS_ONCE() on aggregate types.

(Btw, it's not just aggregate types, even non-aggregate types like
"long long" are not necessarily safe, to give the same 64-bit on
x86-32 example. So adding an assert that it's smaller or equal in size
to a "long" might also not be unreasonable)

                   Linus

---

@@ -162,16 +162,14 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)

 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-       struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
-
-       return tmp.tail != tmp.head;
+       struct arch_spinlock tmp = { .head_tail = ACCESS_ONCE(lock->head_tail) };
+       return tmp.tickets.tail != tmp.tickets.head;
 }

 static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
-       struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
-
-       return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
+       struct arch_spinlock tmp = { .head_tail = ACCESS_ONCE(lock->head_tail) };
+       return (__ticket_t)(tmp.tickets.tail - tmp.tickets.head) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended arch_spin_is_contended

Comments

Paul E. McKenney Nov. 11, 2014, 12:37 a.m. UTC | #1
On Mon, Nov 10, 2014 at 01:07:33PM -0800, Linus Torvalds wrote:
> On Mon, Nov 10, 2014 at 12:18 PM, Christian Borntraeger
> <borntraeger@de.ibm.com> wrote:
> >
> > Now: I can reproduce the below miscompile on gcc 4.6 and gcc 4.7;
> > gcc 4.5 seems OK, gcc 4.8 is fixed.  This makes blacklisting
> > a bit hard, especially since it is not limited to s390, but
> > covers all architectures.
> > In essence, ACCESS_ONCE will not work reliably on aggregate
> > types with gcc 4.6 and gcc 4.7.
> > In Linux we seem to use ACCESS_ONCE mostly on scalar types;
> > the code below is an example where we don't - and break.
> 
> Hmm. I think we should see how painful it would be to make it a rule
> that ACCESS_ONCE() only works on scalar types.

For whatever it is worth, I have been assuming that ACCESS_ONCE() was
only ever supposed to work on scalar types.  And one of the reasons
that I have been giving the pre-EV56 Alpha guys a hard time is because
I would like us to be able to continue using ACCESS_ONCE() on 8-bit and
16-bit quantities as well.

> Even in the actual code you show as an example, the "fix" is really to
> use the "unsigned long val" member of the union for the ACCESS_ONCE().
> And that seems to be true in many other cases too.
> 
> So before blacklisting any compilers, let's first see if
> 
>  (a) we can actually make it a real rule that we only use ACCESS_ONCE on scalars
>  (b) we can somehow enforce this with a compiler warning/error for mis-uses
> 
> For example, the attached patch works for some cases, but shows how we
> use ACCESS_ONCE() on pointers to pte_t's etc, so it doesn't come even
> close to compiling the whole kernel. But I wonder how painful that
> would be to change.. The places where it complains are actually
> somewhat debatable to begin with, like:
> 
>  - handle_pte_fault(.. pte_t *pte ..):
> 
>         entry = ACCESS_ONCE(*pte);
> 
> and the thing is, "pte" is actually possibly an 8-byte entity on
> x86-32, and that ACCESS_ONCE() fundamentally will be two 32-bit
> reads.
> 
> So there is a very valid argument for saying "well, you shouldn't do
> that, then", and that we might be better off cleaning up our
> ACCESS_ONCE() uses, than to just blindly blacklist compilers.
> 
> NOTE! I'm not at all advocating the attached patch. I'm sending it out
> white-space damaged on purpose, it's more of a "hey, something like
> this might be the direction we want to go in", with the spinlock.h
> part of the patch also acting as an example of the kind of changes the
> "ACCESS_ONCE() only works on scalars" rule would require.
> 
> So I do agree with Heiko that we generally don't want to work around
> compiler bugs if we can avoid it. But sometimes the compiler bugs do
> end up saying "you're doing something very fragile". Maybe we should
> try to be less fragile here.
> 
> And in your example, the whole
> 
>         old = ACCESS_ONCE(*ic);
> 
> *could* just be a
> 
>         old.val = ACCESS_ONCE(ic->val);
> 
> the same way the x86 spinlock.h changes below.
> 
> I did *not* try to see how many other cases we have. It's possible
> that your "don't use ACCESS_ONCE, use a barrier() instead" ends up
> being a valid workaround. For the pte case, that may well be the
> solution, for example (because what we really care about is not so
> much "it's an atomic access" but "it's still the same that we
> originally assumed").  Sometimes we want ACCESS_ONCE() because we
> really want an atomic value (and we just don't care if it's old or
> new), but sometimes it's really because we don't want the compiler to
> re-load it and possibly see two different values - one that we check,
> and one that we actually use (and then a barrier() would generally be
> perfectly sufficient)
> 
> Adding some more people to the discussion just to see if anybody else
> has comments about ACCESS_ONCE() on aggregate types.
> 
> (Btw, it's not just aggregate types, even non-aggregate types like
> "long long" are not necessarily safe, to give the same 64-bit on
> x86-32 example. So adding an assert that it's smaller or equal in size
> to a "long" might also not be unreasonable)

Good point on "long long" on 32-bit systems.

Another reason to avoid trying to do anything that even smells atomic on
non-machine-sized/aligned variables is that the compiler guys' current
reaction to this sort of situation is "No problem!!!  The compiler can
just invent a lock to guard all such accesses!"  I don't think that we
want to go there.

>                    Linus
> 
> ---
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index d5ad7b1118fc..63e82f1dfc1a 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -378,7 +378,11 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect);
>   * use is to mediate communication between process-level code and irq/NMI
>   * handlers, all running on the same CPU.
>   */
> -#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
> +#define get_scalar_volatile_pointer(x) ({ \
> +       typeof(x) *__p = &(x); \
> +       volatile typeof(x) *__vp = __p; \
> +       (void)(long)*__p; __vp; })
> +#define ACCESS_ONCE(x) (*get_scalar_volatile_pointer(x))

I know you said that this was to be experimental, but it happily loads
from a "long long" on 32-bit x86 running gcc version 4.6.3, and does it
32 bits at a time.  How about something like the following instead?

#define get_scalar_volatile_pointer(x) ({ \
	typeof(x) *__p = &(x); \
	BUILD_BUG_ON(sizeof(x) != sizeof(char) && \
		     sizeof(x) != sizeof(short) && \
		     sizeof(x) != sizeof(int) && \
		     sizeof(x) != sizeof(long)); \
	volatile typeof(x) *__vp = __p; \
	(void)(long)*__p; __vp; })
#define ACCESS_ONCE(x) (*get_scalar_volatile_pointer(x))
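
For illustration (my example, not part of the patch), the stricter
version accepts machine-word scalars and breaks the build for
everything else:

struct pair { int a, b; };

void example(unsigned long *lp, long long *llp, struct pair *pp)
{
	unsigned long v = ACCESS_ONCE(*lp);  /* fine: sizeof(long) */
	long long w = ACCESS_ONCE(*llp);     /* BUILD_BUG_ON fires on 32-bit,
	                                        where no scalar size matches */
	struct pair p = ACCESS_ONCE(*pp);    /* always fails: (void)(long)*__p
	                                        rejects aggregate types */
	(void)v; (void)w; (void)p;
}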

							Thanx, Paul

> 
>  /* Ignore/forbid kprobes attach on very low level functions marked by this attribute: */
>  #ifdef CONFIG_KPROBES
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 9295016485c9..b7e6825af5e3 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -105,7 +105,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
>  {
>         arch_spinlock_t old, new;
> 
> -       old.tickets = ACCESS_ONCE(lock->tickets);
> +       old.head_tail = ACCESS_ONCE(lock->head_tail);
>         if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
>                 return 0;
> 
> @@ -162,16 +162,14 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
> 
>  static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
> -       struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
> -
> -       return tmp.tail != tmp.head;
> +       struct arch_spinlock tmp = { .head_tail = ACCESS_ONCE(lock->head_tail) };
> +       return tmp.tickets.tail != tmp.tickets.head;
>  }
> 
>  static inline int arch_spin_is_contended(arch_spinlock_t *lock)
>  {
> -       struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
> -
> -       return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
> +       struct arch_spinlock tmp = { .head_tail = ACCESS_ONCE(lock->head_tail) };
> +       return (__ticket_t)(tmp.tickets.tail - tmp.tickets.head) > TICKET_LOCK_INC;
>  }
>  #define arch_spin_is_contended arch_spin_is_contended
> 

Christian Borntraeger Nov. 11, 2014, 9:16 p.m. UTC | #2
On 10.11.2014 22:07, Linus Torvalds wrote:
> On Mon, Nov 10, 2014 at 12:18 PM, Christian Borntraeger
> <borntraeger@de.ibm.com> wrote:
>>
>> Now: I can reproduce the below miscompile on gcc 4.6 and gcc 4.7;
>> gcc 4.5 seems OK, gcc 4.8 is fixed.  This makes blacklisting
>> a bit hard, especially since it is not limited to s390, but
>> covers all architectures.
>> In essence, ACCESS_ONCE will not work reliably on aggregate
>> types with gcc 4.6 and gcc 4.7.
>> In Linux we seem to use ACCESS_ONCE mostly on scalar types;
>> the code below is an example where we don't - and break.
> 
> Hmm. I think we should see how painful it would be to make it a rule
> that ACCESS_ONCE() only works on scalar types.
> 
> Even in the actual code you show as an example, the "fix" is really to
> use the "unsigned long val" member of the union for the ACCESS_ONCE().
> And that seems to be true in many other cases too.

Yes, using the .val member like in
-               new = old = ACCESS_ONCE(*ic);
+               new.val = old.val = ACCESS_ONCE(ic->val);

does solve the problem as well. In fact, gcc does create the same binary
code on my 4.7.2.

Are you ok with the patch as is in kvm/next for the time being or shall
we revert that and replace it with the .val scheme?

We can also do the cleanup later on if we manage to get your initial patch
into a shape that works out.

Christian

Linus Torvalds Nov. 12, 2014, 12:33 a.m. UTC | #3
On Tue, Nov 11, 2014 at 1:16 PM, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
> Are you ok with the patch as is in kvm/next for the time being or shall
> we revert that and replace it with the .val scheme?

Is that the one that was quoted at the beginning of the thread, that
uses barrier()?

I guess as a workaround it is fine, as long as we don't lose sight of
trying to eventually do a better job.

                     Linus
Linus Torvalds Nov. 12, 2014, 12:36 a.m. UTC | #4
On Tue, Nov 11, 2014 at 4:33 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> I guess as a workaround it is fine, as long as we don't lose sight of
> trying to eventually do a better job.

Oh, and when it comes to the actual gcc bug - do you have any reason
to believe that it's somehow triggered more easily by something
particular in the arch/s390/kvm/gaccess.c code?

IOW, why does this problem not hit the x86 spinlocks that also use
volatile pointers to aggregate types? Or does it?

                       Linus
Christian Borntraeger Nov. 12, 2014, 8:05 a.m. UTC | #5
On 12.11.2014 01:36, Linus Torvalds wrote:
> On Tue, Nov 11, 2014 at 4:33 PM, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
>>
>> I guess as a workaround it is fine, as long as we don't lose sight of
>> trying to eventually do a better job.
> 
> Oh, and when it comes to the actual gcc bug - do you have any reason
> to believe that it's somehow triggered more easily by something
> particular in the arch/s390/kvm/gaccess.c code?

Yes, there are reasons. First of all, the bug is that SRA removes the volatile tag, but that alone does not mean that things break. As long as the operation is simple enough, things will mostly be OK. If things are not simple, like in gaccess, it gets more complicated. Let's look at the ipte lock. The lock itself consists of 3 parts: k (1 bit: locked), kh (31-bit counter for the host) and kg (32-bit counter for the millicode when doing specific guest instructions). There are 3 valid states:
1. k=0, kh=0, kg=0
2. k=1, kh!=0, kg=0
3. k=1, kh=0, kg!=0

So the host code must check that the guest counter is zero; it can then set the k bit and increase the host counter (for unlock it has to decrement kh and, if that becomes zero, also clear the k bit).
That means we have multiple accesses to subcomponents. As the host counter is bits 1-31 (IBM speak; in Linux speak, bits 32-62), gcc wants to use the "load thirty one bits" instruction.
So far so good. The ticket lock is also not a trivial set/clear bit.
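
Roughly, the layout and the host-side lock step I am describing look
like this (a sketch only; the cmpxchg-based helper is my shorthand,
not the exact gaccess code):

union ipte_control {
	unsigned long val;		/* scalar view of the whole lock */
	struct {
		unsigned long k  : 1;	/* locked bit */
		unsigned long kh : 31;	/* host counter */
		unsigned long kg : 32;	/* guest/millicode counter */
	};
};

static int ipte_lock_host_once(union ipte_control *ic)
{
	union ipte_control old, new;

	new.val = old.val = ACCESS_ONCE(ic->val); /* one 64-bit snapshot */
	if (old.kg != 0)        /* guest counter must be zero */
		return 0;
	new.k = 1;              /* set the lock bit ... */
	new.kh++;               /* ... and bump the host counter */
	return cmpxchg(&ic->val, old.val, new.val) == old.val;
}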

Now: in gcc, memory accesses on s390 are modeled to have the same cost as register accesses (TARGET_MEMORY_MOVE_COST == 1, TARGET_REGISTER_MOVE_COST == 1).
So for gcc, re-loading part of the lock from memory costs the same as loading it from a register. That probably triggered the bug.

Christian




> 
> IOW, why does this problem not hit the x86 spinlocks that also use
> volatile pointers to aggregate types? Or does it?

I think we would have noticed if it hit. But there are certainly cases where this bug also triggers on x86; see the initial report of
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
That bug is certainly different (instead of transforming one load into multiple loads, it combines multiple writes into one), but it shows that a volatile marker can be removed.
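
The general shape of that write-combining problem looks something like
this (illustrative only, not the actual test case from the PR):

struct bits {
	unsigned a : 8;
	unsigned b : 8;
};
volatile struct bits v;

void two_stores(void)
{
	v.a = 1;  /* each volatile access must be performed separately... */
	v.b = 2;  /* ...but a buggy pass may merge both into one wider store */
}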


Martin Schwidefsky Nov. 12, 2014, 9:28 a.m. UTC | #6
On Tue, 11 Nov 2014 16:36:06 -0800
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Tue, Nov 11, 2014 at 4:33 PM, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > I guess as a workaround it is fine, as long as we don't lose sight of
> > trying to eventually do a better job.
> 
> Oh, and when it comes to the actual gcc bug - do you have any reason
> to believe that it's somehow triggered more easily by something
> particular in the arch/s390/kvm/gaccess.c code?
> 
> IOW, why does this problem not hit the x86 spinlocks that also use
> volatile pointers to aggregate types? Or does it?

This looks similar to what we had on s390:

	old.tickets = ACCESS_ONCE(lock->tickets)

In theory x86 should be affected as well. On s390 we have lots of
instructions that operate on memory, and the cost model of gcc makes
the compiler more inclined to access memory multiple times. My
guess would be that once the value is cached in a register, the
cost model for x86 will usually make sure that the value is not
read a second time. But there is no guarantee.
Christian Borntraeger Nov. 20, 2014, 11:39 a.m. UTC | #7
On 10.11.2014 22:07, Linus Torvalds wrote:
[...]
> So before blacklisting any compilers, let's first see if
> 
>  (a) we can actually make it a real rule that we only use ACCESS_ONCE on scalars
>  (b) we can somehow enforce this with a compiler warning/error for mis-uses
> 
> For example, the attached patch works for some cases, but shows how we
> use ACCESS_ONCE() on pointers to pte_t's etc, so it doesn't come even
> close to compiling the whole kernel. But I wonder how painful that
> would be to change.. The places where it complains are actually
> somewhat debatable to begin with, like:
> 
>  - handle_pte_fault(.. pte_t *pte ..):
> 
>         entry = ACCESS_ONCE(*pte);
> 
> and the thing is, "pte" is actually possibly an 8-byte entity on
> x86-32, and that ACCESS_ONCE() fundamentally will be two 32-bit
> reads.
> 
> So there is a very valid argument for saying "well, you shouldn't do
> that, then", and that we might be better off cleaning up our
> ACCESS_ONCE() uses, than to just blindly blacklist compilers.
> 
> NOTE! I'm not at all advocating the attached patch. I'm sending it out
> white-space damaged on purpose, it's more of a "hey, something like
> this might be the direction we want to go in", with the spinlock.h
> part of the patch also acting as an example of the kind of changes the
> "ACCESS_ONCE() only works on scalars" rule would require.

So I tried to see if I could come up with some results on how often this problem happens...

[...]


> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index d5ad7b1118fc..63e82f1dfc1a 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -378,7 +378,11 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect);
>   * use is to mediate communication between process-level code and irq/NMI
>   * handlers, all running on the same CPU.
>   */
> -#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
> +#define get_scalar_volatile_pointer(x) ({ \
> +       typeof(x) *__p = &(x); \
> +       volatile typeof(x) *__vp = __p; \
> +       (void)(long)*__p; __vp; })
> +#define ACCESS_ONCE(x) (*get_scalar_volatile_pointer(x))

...and just took this patch. On s390 it is pretty much clean with allyesconfig.
In fact, with the siif lock changed, only the pte/pmd cases you mentioned trigger a compile error:

mm/memory.c: In function 'handle_pte_fault':
mm/memory.c:3203:2: error: aggregate value used where an integer was expected
  entry = ACCESS_ONCE(*pte);

mm/rmap.c: In function 'mm_find_pmd':
mm/rmap.c:584:2: error: aggregate value used where an integer was expected
  pmde = ACCESS_ONCE(*pmd);


Here a barrier() might be a good solution as well, I guess.
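
Something like this untested sketch, I mean (pte_snapshot is a made-up
helper name; barrier() is the usual kernel compiler barrier):

static pte_t pte_snapshot(pte_t *pte)
{
	pte_t entry = *pte;  /* plain, possibly non-atomic copy */
	barrier();           /* later checks must use this copy and cannot
	                      * be turned into fresh re-loads; what we need
	                      * here is "same value we validated", not
	                      * atomicity */
	return entry;
}
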
On x86 allyesconfig it's almost the same:
- we need your spinlock changes (well, something different to make it compile)
- we need to fix pmd and pte
- we have gup_get_pte in arch/x86/mm/gup.c getting a ptep

So it looks like we could make a change to ACCESS_ONCE. Would something like

CONFIG_ARCH_SCALAR_ACCESS_ONCE be a good start?

This would boil down to
Patch1: Provide stricter ACCESS_ONCE if CONFIG_ARCH_SCALAR_ACCESS_ONCE is set + documentation update + comments
Patch2: Change mm/* to barriers
Patch3: Change x86 locks
Patch4: Change x86 gup
Patch5: Enable CONFIG_ARCH_SCALAR_ACCESS_ONCE for s390x and x86

Makes sense?

Christian

Linus Torvalds Nov. 20, 2014, 8:30 p.m. UTC | #8
On Thu, Nov 20, 2014 at 3:39 AM, Christian Borntraeger
<borntraeger@de.ibm.com> wrote:
>
> So It looks like we could make a change to ACCESS_ONCE. Would something like
>
> CONFIG_ARCH_SCALAR_ACCESS_ONCE be a good start?

No, if it's just a handful of places to be fixed, let's not add config
options for broken cases.

> This would boil down to
> Patch1: Provide stricter ACCESS_ONCE if CONFIG_ARCH_SCALAR_ACCESS_ONCE is set + documentation update + comments
> Patch2: Change mm/* to barriers
> Patch3: Change x86 locks
> Patch4: Change x86 gup
> Patch5: Enable CONFIG_ARCH_SCALAR_ACCESS_ONCE for s390x and x86

Just do patches 2-4 first, and then patch 1 unconditionally.

Obviously you'd need to spread the word on linux-arch to see how bad
it is for other cases, but if other architectures are at all like x86
and s390, and just require a few trivial patches, let's not make this
some config option.

                   Linus

Patch

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index d5ad7b1118fc..63e82f1dfc1a 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -378,7 +378,11 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect);
  * use is to mediate communication between process-level code and irq/NMI
  * handlers, all running on the same CPU.
  */
-#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
+#define get_scalar_volatile_pointer(x) ({ \
+       typeof(x) *__p = &(x); \
+       volatile typeof(x) *__vp = __p; \
+       (void)(long)*__p; __vp; })
+#define ACCESS_ONCE(x) (*get_scalar_volatile_pointer(x))

 /* Ignore/forbid kprobes attach on very low level functions marked by this attribute: */
 #ifdef CONFIG_KPROBES
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 9295016485c9..b7e6825af5e3 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -105,7 +105,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
        arch_spinlock_t old, new;

-       old.tickets = ACCESS_ONCE(lock->tickets);
+       old.head_tail = ACCESS_ONCE(lock->head_tail);
        if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
                return 0;