[v3,1/2] xen/link: Introduce .bss.percpu.page_aligned

Message ID 20190729173843.21586-2-andrew.cooper3@citrix.com (mailing list archive)
State New, archived
Series x86/xpti: Don't leak TSS-adjacent percpu data via Meltdown

Commit Message

Andrew Cooper July 29, 2019, 5:38 p.m. UTC
Future changes are going to need to page align some percpu data.

Shuffle the exact link order of items within the BSS to give
.bss.percpu.page_aligned appropriate alignment, even on CPU0, which uses
.bss.percpu itself.

Insert explicit alignment such that the result is safe even with objects
shorter than a page in length.  The POINTER_ALIGN for __bss_end is to cover
the lack of SMP_CACHE_BYTES alignment, as the loops which zero the BSS use
pointer-sized stores on all architectures.
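
For reference, a minimal sketch of such a clearing loop (illustrative only;
the actual loops live in arch-specific startup code and differ in detail):

    extern char __bss_start[], __bss_end[];

    static void clear_bss(void)
    {
        unsigned long *p = (unsigned long *)__bss_start;

        /*
         * Pointer-sized stores: relies on __bss_end having at least
         * POINTER_ALIGN alignment, so the loop terminates exactly at the
         * end of the section.
         */
        while ( p < (unsigned long *)__bss_end )
            *p++ = 0;
    }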

In addition, we need to be able to specify an alignment attribute to
__DEFINE_PER_CPU().  Rework it so the caller passes in all attributes, and
adjust DEFINE_PER_CPU{,_READ_MOSTLY}() to match.  This has the added bonus
that it is now possible to grep for .bss.percpu and find all the users.

Finally, introduce DEFINE_PER_CPU_PAGE_ALIGNED() which uses both section and
alignment attributes.
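
For illustration, a sketch of how the new macro composes (based on the
definitions in this patch; the init_tss user comes from the subsequent
patch, visible in the sample build below):

    DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, init_tss);

    /* ... which expands, via __DEFINE_PER_CPU(), to approximately: */
    __section(".bss.percpu.page_aligned") __aligned(PAGE_SIZE)
    __typeof__(struct tss_struct) per_cpu__init_tss;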

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Julien Grall <julien.grall@arm.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>

A sample build including the subsequent patch is now:

ffff82d08092d000 B zero_page
ffff82d08092e000 B per_cpu__init_tss
ffff82d08092e000 B __per_cpu_start
ffff82d08092f000 B per_cpu__cpupool
ffff82d08092f008 b per_cpu__continue_info
ffff82d08092f010 b per_cpu__grant_rwlock

which demonstrates the correct alignment of data in .bss.percpu even when
following a non-page-sized object in .bss.percpu.page_aligned.

v3:
 * Insert explicit alignment.
 * Reduce __bss_end's alignment to just POINTER_ALIGN.

v2:
 * Rework __DEFINE_PER_CPU() to allow for further attributes to be passed.
 * Specify __aligned(PAGE_SIZE) as part of DEFINE_PER_CPU_PAGE_ALIGNED().
---
 xen/arch/arm/xen.lds.S       |  7 +++++--
 xen/arch/x86/xen.lds.S       |  7 +++++--
 xen/include/asm-arm/percpu.h |  6 ++----
 xen/include/asm-x86/percpu.h |  6 ++----
 xen/include/xen/percpu.h     | 10 ++++++++--
 5 files changed, 22 insertions(+), 14 deletions(-)

Comments

Jan Beulich July 30, 2019, 8:42 a.m. UTC | #1
On 29.07.2019 19:38, Andrew Cooper wrote:
> --- a/xen/arch/x86/xen.lds.S
> +++ b/xen/arch/x86/xen.lds.S
> @@ -293,14 +293,17 @@ SECTIONS
>         __bss_start = .;
>         *(.bss.stack_aligned)
>         *(.bss.page_aligned*)
> -       *(.bss)
> -       . = ALIGN(SMP_CACHE_BYTES);
> +       . = ALIGN(PAGE_SIZE);
>         __per_cpu_start = .;
> +       *(.bss.percpu.page_aligned)
> +       . = ALIGN(PAGE_SIZE);

But this goes too far: What we want is for the TSS to occupy a full
page, not for whatever random other page-aligned object ends up
last here (should any ever appear). If you want to effect this
via linker script (rather than C arrangements), then just like the
CPU0 stack I think we want a dedicated section containing _just_
the TSS. (Having said that I realize that .bss.stack_aligned isn't
really a good name for a dedicated section. It's just that we
obviously want the stack to occupy its entire STACK_SIZE range. Of
course we could take the position that
.bss.percpu.page_aligned too _is_ a dedicated section despite its
name, but if you mean it to be like that then it would be nice for
the description to actually say so.)

Jan
Andrew Cooper July 30, 2019, 9:35 a.m. UTC | #2
On 30/07/2019 09:42, Jan Beulich wrote:
> On 29.07.2019 19:38, Andrew Cooper wrote:
>> --- a/xen/arch/x86/xen.lds.S
>> +++ b/xen/arch/x86/xen.lds.S
>> @@ -293,14 +293,17 @@ SECTIONS
>>         __bss_start = .;
>>         *(.bss.stack_aligned)
>>         *(.bss.page_aligned*)
>> -       *(.bss)
>> -       . = ALIGN(SMP_CACHE_BYTES);
>> +       . = ALIGN(PAGE_SIZE);
>>         __per_cpu_start = .;
>> +       *(.bss.percpu.page_aligned)
>> +       . = ALIGN(PAGE_SIZE);
> But this goes too far: What we want is for the TSS to occupy a full
> page, not for whatever random other page-aligned object ends up
> last here (should any ever appear).

Come again?  This is ridiculous.

Objects in a section following foo.page_aligned should never end up in
the tail of the final page of foo.page_aligned, because then they are in
the wrong section.

A short TSS is a pain to deal with, but even you said you didn't like
the xen_tss idea when you suggested it.

The name of the section is very deliberately not TSS specific, because
there is plenty of other cleanup which will end up with objects in this
section.

~Andrew
Jan Beulich July 30, 2019, 9:49 a.m. UTC | #3
On 30.07.2019 11:35, Andrew Cooper wrote:
> On 30/07/2019 09:42, Jan Beulich wrote:
>> On 29.07.2019 19:38, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/xen.lds.S
>>> +++ b/xen/arch/x86/xen.lds.S
>>> @@ -293,14 +293,17 @@ SECTIONS
>>>          __bss_start = .;
>>>          *(.bss.stack_aligned)
>>>          *(.bss.page_aligned*)
>>> -       *(.bss)
>>> -       . = ALIGN(SMP_CACHE_BYTES);
>>> +       . = ALIGN(PAGE_SIZE);
>>>          __per_cpu_start = .;
>>> +       *(.bss.percpu.page_aligned)
>>> +       . = ALIGN(PAGE_SIZE);
>> But this goes too far: What we want is for the TSS to occupy a full
>> page, not for whatever random other page-aligned object ends up
>> last here (should any ever appear).
> 
> Come again?  This is ridiculous.
> 
> Objects in a section following foo.page_aligned should never end up in
> the tail of the final page of foo.page_aligned, because then they are in
> the wrong section.

How do you derive "wrong section"? Sections have an alignment and a
size. The latter doesn't need to be a multiple of the former. The
section ends wherever its size says it ends. Using this property is
actually desirable in other cases, to limit waste of space.
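
[Editorial illustration: a hypothetical object demonstrating the property
described above.  The containing section becomes page-aligned, but its size
is only 100 bytes, so without an explicit ALIGN() in the linker script the
next output section may begin within the same page:

    char small_obj[100]
        __attribute__((__section__(".bss.page_aligned"), __aligned__(4096)));
]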

> A short TSS is a pain to deal with, but even you said you didn't like
> the xen_tss idea when you suggested it.
> 
> The name of the section is very deliberately not TSS specific, because
> there is plenty of other cleanup which will end up with objects in this
> section.

Well, if they're all strictly a multiple of PAGE_SIZE, then writing
down a respective requirement might be acceptable. But even then I
wouldn't be overly happy going that route.

Jan
Andrew Cooper July 30, 2019, 4:36 p.m. UTC | #4
On 30/07/2019 10:49, Jan Beulich wrote:
> On 30.07.2019 11:35, Andrew Cooper wrote:
>> On 30/07/2019 09:42, Jan Beulich wrote:
>>> On 29.07.2019 19:38, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/xen.lds.S
>>>> +++ b/xen/arch/x86/xen.lds.S
>>>> @@ -293,14 +293,17 @@ SECTIONS
>>>>          __bss_start = .;
>>>>          *(.bss.stack_aligned)
>>>>          *(.bss.page_aligned*)
>>>> -       *(.bss)
>>>> -       . = ALIGN(SMP_CACHE_BYTES);
>>>> +       . = ALIGN(PAGE_SIZE);
>>>>          __per_cpu_start = .;
>>>> +       *(.bss.percpu.page_aligned)
>>>> +       . = ALIGN(PAGE_SIZE);
>>> But this goes too far: What we want is for the TSS to occupy a full
>>> page, not for whatever random other page-aligned object ends up
>>> last here (should any ever appear).
>> Come again?  This is ridiculous.
>>
>> Objects in a section following foo.page_aligned should never end up in
>> the tail of the final page of foo.page_aligned, because then they are in
>> the wrong section.
> How do you derive "wrong section"? Sections have an alignment and a
> size. The latter doesn't need to be a multiple of the former. The
> section ends wherever its size says it ends. Using this property is
> actually desirable in other cases, to limit waste of space.

The principle of least surprise, for a section with page_aligned in its
name, is that the section starts and ends on a page boundary.  This is
what people expect when they see a name like that.

It really doesn't matter what is technically possible.  What does matter
is people's reasonable expectations.

>> A short TSS is a pain to deal with, but even you said you didn't like
>> the xen_tss idea when you suggested it.
>>
>> The name of the section is very deliberately not TSS specific, because
>> there is plenty of other cleanup which will end up with objects in this
>> section.
> Well, if they're all strictly a multiple of PAGE_SIZE, then writing
> down a respective requirement might be acceptable. But even then I
> wouldn't be overly happy going that route.

This reply, like most others in this thread, is actively unhelpful, and
I give up.

I can't read your mind.  Neither can anyone else, and no one has the time
to divine what you want.

If you don't come up with something more helpful than "I don't like it
this way", then I'm going to commit this series in its current form, and
you can adjust it to your own taste, in your own time.

This goes for other series as well, including ones submitted by others. 
It is absolutely critical that review feedback identifies, in a clear
manner, how you expect the issue to be resolved.

For any non-trivial piece of feedback, if it can't be phrased in the
form "I would this patch ok if you alter $X to $Y", then it is probably
wants rethinking.  Whatever the feedback actually is, that gives a
concrete $X which is the perceived problem, and a concrete $Y which is
either a clear discussion point, or a clear direction to work towards.

~Andrew
Jan Beulich July 31, 2019, 9:28 a.m. UTC | #5
On 30.07.2019 18:36, Andrew Cooper wrote:
> On 30/07/2019 10:49, Jan Beulich wrote:
>> On 30.07.2019 11:35, Andrew Cooper wrote:
>>> On 30/07/2019 09:42, Jan Beulich wrote:
>>>> On 29.07.2019 19:38, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/xen.lds.S
>>>>> +++ b/xen/arch/x86/xen.lds.S
>>>>> @@ -293,14 +293,17 @@ SECTIONS
>>>>>           __bss_start = .;
>>>>>           *(.bss.stack_aligned)
>>>>>           *(.bss.page_aligned*)
>>>>> -       *(.bss)
>>>>> -       . = ALIGN(SMP_CACHE_BYTES);
>>>>> +       . = ALIGN(PAGE_SIZE);
>>>>>           __per_cpu_start = .;
>>>>> +       *(.bss.percpu.page_aligned)
>>>>> +       . = ALIGN(PAGE_SIZE);
>>>> But this goes too far: What we want is for the TSS to occupy a full
>>>> page, not for whatever random other page-aligned object ends up
>>>> last here (should any ever appear).
>>> Come again?  This is ridiculous.
>>>
>>> Objects in a section following foo.page_aligned should never end up in
>>> the tail of the final page of foo.page_aligned, because then they are in
>>> the wrong section.
>> How do you derive "wrong section"? Sections have an alignment and a
>> size. The latter doesn't need to be a multiple of the former. The
>> section ends wherever its size says it ends. Using this property is
>> actually desirable in other cases, to limit waste of space.
> 
> The principle of least surprise, for a section with page_aligned in its
> name, is that the section starts and ends on a page boundary.  This is
> what people expect when they see a name like that.

Hmm, when I see "aligned" I think "aligned", and nothing else. What's
more odd - your response here is inconsistent with your earlier one in
https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg02084.html
There you did appear to agree that .bss.page_aligned may end with an
object the size of which is not a multiple of PAGE_SIZE. Furthermore
you even make provisions for this to happen right in the patch
description here.

(As an aside, you also don't seem to have any problems with .text.kexec
being page aligned but not a multiple of PAGE_SIZE in size. Granted this
section doesn't carry "page_aligned" in its name.)

>>> A short TSS is a pain to deal with, but even you said you didn't like
>>> the xen_tss idea when you suggested it.
>>>
>>> The name of the section is very deliberately not TSS specific, because
>>> there is plenty of other cleanup which will end up with objects in this
>>> section.
>> Well, if they're all strictly a multiple of PAGE_SIZE, then writing
>> down a respective requirement might be acceptable. But even then I
>> wouldn't be overly happy going that route.
> 
> This reply, like most others in this thread, is actively unhelpful, and
> I give up.
> 
> I can't read your mind.  Neither can anyone else, and no one has the time
> to divine what you want.
> 
> If you don't come up with something more helpful than "I don't like it
> this way", then I'm going to commit this series in its current form, and
> you can adjust it to your own taste, in your own time.
> 
> This goes for other series as well, including ones submitted by others.
> It is absolutely critical that review feedback identifies, in a clear
> manner, how you expect the issue to be resolved.
> 
> For any non-trivial piece of feedback, if it can't be phrased in the
> form "I would this patch ok if you alter $X to $Y", then it is probably
> wants rethinking.  Whatever the feedback actually is, that gives a
> concrete $X which is the perceived problem, and a concrete $Y which is
> either a clear discussion point, or a clear direction to work towards.

I have to admit that I'm rather surprised to get _this_ as a reply here,
when we've already sketched out the correct alternative. Despite me not
particularly liking it, I don't see anything wrong with

union __aligned(PAGE_SIZE) tss_union {
     struct __packed tss_struct {
         uint32_t :32;
         uint64_t rsp0, rsp1, rsp2;
         uint64_t :64;
         /*
          * Interrupt Stack Table is 1-based so tss->ist[0] corresponds to an IST
          * value of 1 in an Interrupt Descriptor.
          */
         uint64_t ist[7];
         uint64_t :64;
         uint16_t :16, bitmap;
    } tss;
     char pad[PAGE_SIZE];
};

And since it's a technically correct solution (providing both a type
correctly describing the hardware interface and one correctly describing
our own needs) with no better alternative either of us can see, I think
this (or whatever variation of it) is the way to go.
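
[Editorial note: the property the sketch relies on, i.e. that the object
always spans exactly one full page, could be asserted at build time; a
hypothetical check using Xen's BUILD_BUG_ON (placed in any function scope):

    /*
     * The pad member stretches the union to PAGE_SIZE despite the
     * 104-byte hardware TSS layout.
     */
    BUILD_BUG_ON(sizeof(union tss_union) != PAGE_SIZE);
]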

As to the rest of your rant: I disagree that a reviewer has to always
suggest how things are to be done; that's desirable where possible,
but simply pointing out something is wrong will have to do in certain
cases. In the case here, which is a good example imo, the point of my
response is that from simply looking at the resulting code it is
unclear why _either_ of the two ALIGN(PAGE_SIZE) has been inserted.
This carries the risk of someone like me later coming along and deleting
everything that's there without apparent reason (see e.g. 01fe4da624,
albeit istr having done something more extensive at some other time,
but I can't seem to spot it). If anything it should be you
to explain why they need to be there. And while (in a way) you do so
(in the description, the same passage as referenced above), the reason
isn't really as simple as "such that the result is safe even with
objects shorter than a page in length". Instead the reasons actually
differ for both ALIGN()s: In the first case we simply want to avoid
__per_cpu_start living needlessly early. In the latter one you want to
compensate for something that should be done elsewhere (see above).

Jan

Patch

diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 12c107f45d..cc27131d5e 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -201,14 +201,17 @@  SECTIONS
        *(.bss.stack_aligned)
        . = ALIGN(PAGE_SIZE);
        *(.bss.page_aligned)
-       *(.bss)
-       . = ALIGN(SMP_CACHE_BYTES);
+       . = ALIGN(PAGE_SIZE);
        __per_cpu_start = .;
+       *(.bss.percpu.page_aligned)
+       . = ALIGN(PAGE_SIZE);
        *(.bss.percpu)
        . = ALIGN(SMP_CACHE_BYTES);
        *(.bss.percpu.read_mostly)
        . = ALIGN(SMP_CACHE_BYTES);
        __per_cpu_data_end = .;
+       *(.bss)
+       . = ALIGN(POINTER_ALIGN);
        __bss_end = .;
   } :text
   _end = . ;
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index a73139cd29..3bf21975a2 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -293,14 +293,17 @@  SECTIONS
        __bss_start = .;
        *(.bss.stack_aligned)
        *(.bss.page_aligned*)
-       *(.bss)
-       . = ALIGN(SMP_CACHE_BYTES);
+       . = ALIGN(PAGE_SIZE);
        __per_cpu_start = .;
+       *(.bss.percpu.page_aligned)
+       . = ALIGN(PAGE_SIZE);
        *(.bss.percpu)
        . = ALIGN(SMP_CACHE_BYTES);
        *(.bss.percpu.read_mostly)
        . = ALIGN(SMP_CACHE_BYTES);
        __per_cpu_data_end = .;
+       *(.bss)
+       . = ALIGN(POINTER_ALIGN);
        __bss_end = .;
   } :text
   _end = . ;
diff --git a/xen/include/asm-arm/percpu.h b/xen/include/asm-arm/percpu.h
index 9584b830d4..264120b192 100644
--- a/xen/include/asm-arm/percpu.h
+++ b/xen/include/asm-arm/percpu.h
@@ -10,10 +10,8 @@  extern char __per_cpu_start[], __per_cpu_data_end[];
 extern unsigned long __per_cpu_offset[NR_CPUS];
 void percpu_init_areas(void);
 
-/* Separate out the type, so (int[3], foo) works. */
-#define __DEFINE_PER_CPU(type, name, suffix)                    \
-    __section(".bss.percpu" #suffix)                            \
-    __typeof__(type) per_cpu_##name
+#define __DEFINE_PER_CPU(attr, type, name) \
+    attr __typeof__(type) per_cpu_ ## name
 
 #define per_cpu(var, cpu)  \
     (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
diff --git a/xen/include/asm-x86/percpu.h b/xen/include/asm-x86/percpu.h
index ff34dc7897..5b6cef04c4 100644
--- a/xen/include/asm-x86/percpu.h
+++ b/xen/include/asm-x86/percpu.h
@@ -7,10 +7,8 @@  extern unsigned long __per_cpu_offset[NR_CPUS];
 void percpu_init_areas(void);
 #endif
 
-/* Separate out the type, so (int[3], foo) works. */
-#define __DEFINE_PER_CPU(type, name, suffix)                    \
-    __section(".bss.percpu" #suffix)                            \
-    __typeof__(type) per_cpu_##name
+#define __DEFINE_PER_CPU(attr, type, name) \
+    attr __typeof__(type) per_cpu_ ## name
 
 /* var is in discarded region: offset to particular copy we want */
 #define per_cpu(var, cpu)  \
diff --git a/xen/include/xen/percpu.h b/xen/include/xen/percpu.h
index aeec5c19d6..71a31cc361 100644
--- a/xen/include/xen/percpu.h
+++ b/xen/include/xen/percpu.h
@@ -9,9 +9,15 @@ 
  * The _##name concatenation is being used here to prevent 'name' from getting
  * macro expanded, while still allowing a per-architecture symbol name prefix.
  */
-#define DEFINE_PER_CPU(type, name) __DEFINE_PER_CPU(type, _##name, )
+#define DEFINE_PER_CPU(type, name) \
+    __DEFINE_PER_CPU(__section(".bss.percpu"), type, _ ## name)
+
+#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
+    __DEFINE_PER_CPU(__section(".bss.percpu.page_aligned") \
+                     __aligned(PAGE_SIZE), type, _ ## name)
+
 #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \
-	__DEFINE_PER_CPU(type, _##name, .read_mostly)
+    __DEFINE_PER_CPU(__section(".bss.percpu.read_mostly"), type, _ ## name)
 
 #define get_per_cpu_var(var)  (per_cpu__##var)