diff mbox series

[4/4] x86: use POPCNT for hweight<N>() when available

Message ID 5CF0F9DC0200007800233E0A@prv1-mh.provo.novell.com (mailing list archive)
State New, archived
Series bitops: hweight<N>() improvements

Commit Message

Jan Beulich May 31, 2019, 9:54 a.m. UTC
This is faster than using the software implementation, and the insn is
available on all half-way recent hardware. Therefore convert
generic_hweight<N>() to out-of-line functions (without affecting Arm)
and use alternatives patching to replace the function calls.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Note: Using "g" instead of "X" as the dummy constraint in hweight64()
      and hweight32() produces, contrary to what one might expect,
      slightly better code with gcc 8.

Comments

Andrew Cooper May 31, 2019, 8:43 p.m. UTC | #1
On 31/05/2019 02:54, Jan Beulich wrote:
> This is faster than using the software implementation, and the insn is
> available on all half-way recent hardware. Therefore convert
> generic_hweight<N>() to out-of-line functions (without affecting Arm)
> and use alternatives patching to replace the function calls.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

So, I trust you weren't expecting me to just ack this and let it go in?

The principle of the patch (use popcnt when available) is an improvement
which I'm entirely in agreement with, but everything else is a problem.

The long and the short of it is that I'm not going to accept any version
of this which isn't the Linux version.

From a microarchitectural standpoint, the tradeoff between fractional
register scheduling flexibility (which in practice is largely bound
anyway by real function calls in surrounding code) and increased icache
pressure/coldness (from the redundant function copies) falls largely in
favour of the Linux way of doing it: a cold icache line is
disproportionately more expensive than requiring the compiler to order
its registers differently (especially as all non-obsolete processors
these days have zero-cost register renaming internally, for the purpose
of superscalar execution).

If however you can demonstrate that a number of CPU microarchitects are
wrong about what is faster in practice, then I will happily accept a
patch which matches Linux, once you've improved the performance of Linux.

> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>  efi/mkreloc: efi/mkreloc.c
>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>  
> +nocov-y += hweight.o

Irrespective of the exact specifics of how the patch ends up, I don't
think the nocov restriction is a direction we want to take.

Coverage may not be a thing used in production, but when it is used for
development, it needs to not have random holes in the results data.

> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> +

Does this work with Clang?


> --- /dev/null
> +++ b/xen/arch/x86/hweight.c
> @@ -0,0 +1,28 @@
> +#define generic_hweight64 _hweight64
> +#define generic_hweight32 _hweight32
> +#define generic_hweight16 _hweight16
> +#define generic_hweight8  _hweight8
> +
> +#include <xen/compiler.h>
> +
> +#undef inline
> +#define inline always_inline
> +
> +#include <xen/bitops.h>
> +
> +#undef generic_hweight8
> +#undef generic_hweight16
> +#undef generic_hweight32
> +#undef generic_hweight64
> +
> +#define HWEIGHT(n)                                             \
> +typeof(_hweight##n) generic_hweight##n;                        \
> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \

A question to the rest of xen-devel.  Is there anyone else who can
actually work out what this construct is doing?

I'd like to get a feel for how many people can even follow some of our C.

> --- a/xen/include/asm-x86/bitops.h
> +++ b/xen/include/asm-x86/bitops.h
> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>      return r + 1;
>  }
>  
> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"

So (the dangers of false micro-optimisation aside), POPCNT_32 will
probably be better using a redundant %ds prefix.

The reason is that the result needs padding to 5 bytes, as the original
instruction is `call disp32`, meaning that a single byte nop needs
inserting.  The version with a single nop takes two decode ports as
opposed to one, and single byte nops are forced to take an execution
delay for backwards compatibility with DOS.

OTOH, I also bet that no one could observe a difference without using
perf counters and fetch/decode uarch events.

~Andrew
Jan Beulich June 3, 2019, 8:13 a.m. UTC | #2
>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
> On 31/05/2019 02:54, Jan Beulich wrote:
>> This is faster than using the software implementation, and the insn is
>> available on all half-way recent hardware. Therefore convert
>> generic_hweight<N>() to out-of-line functions (without affecting Arm)
>> and use alternatives patching to replace the function calls.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> So, I trust you weren't expecting to just ack this and let it go in?
> 
> The principle of the patch (use popcnt when available) is an improvement
> which I'm entirely in agreement with, but everything else is a problem.
> 
> The long and the short of it is that I'm not going to accept any version
> of this which isn't the Linux version.

You're kidding. We want to move away from assembly wherever we
can, and you demand new assembly code?

> From a microarchitectural standpoint, the tradeoff between fractional
> register scheduling flexibility (which in practice is largely bound
> anyway by real function calls in surrounding code) and increased icache
> pressure/coldness (from the redundant function copies) falls largely in
> favour of the Linux way of doing it, a cold icache line is
> disproportionately more expensive than requiring the compiler to order
> its registers differently (especially as all non-obsolete processors
> these days have zero-cost register renaming internally, for the purpose
> of superscalar execution).

I'm afraid I'm struggling to work out what you're trying to tell
me here: Where's the difference (in this regard) between the
change here and the way Linux does it? Both emit a CALL
insn with registers set up suitably for it, and both patch it with a
POPCNT insn using the registers as demanded by the CALL.

The difference to Linux is what gets called, not how the patching
works (afaict). I'm simply not buying the combination of arguments
and effects of the removal of the use of -ffixed-*.

>> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>>  efi/mkreloc: efi/mkreloc.c
>>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>  
>> +nocov-y += hweight.o
> 
> Irrespective of the exact specifics of how the patch ends up, I don't
> think the nocov restriction is a direction we want to take.
> 
> Coverage may not be a thing used in production, but when it is used for
> development, it needs to not have random holes missing in the results data.

Sure, but then we can't avoid saving/restoring the callee-clobbered
registers in the functions being called. Which in turn means I see no
way of avoiding code duplication (be it in C or assembly) of the
generic_hweight<N>() implementations.

>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>> +
> 
> Does this work with Clang?

I have no idea.

>> --- /dev/null
>> +++ b/xen/arch/x86/hweight.c
>> @@ -0,0 +1,28 @@
>> +#define generic_hweight64 _hweight64
>> +#define generic_hweight32 _hweight32
>> +#define generic_hweight16 _hweight16
>> +#define generic_hweight8  _hweight8
>> +
>> +#include <xen/compiler.h>
>> +
>> +#undef inline
>> +#define inline always_inline
>> +
>> +#include <xen/bitops.h>
>> +
>> +#undef generic_hweight8
>> +#undef generic_hweight16
>> +#undef generic_hweight32
>> +#undef generic_hweight64
>> +
>> +#define HWEIGHT(n)                                             \
>> +typeof(_hweight##n) generic_hweight##n;                        \
>> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
> 
> A question to the rest of xen-devel.  Is there anyone else who can
> actually work out what this construct is doing?
> 
> I'd like to get a feel for how many people can even follow some of our C.

I know you don't like such constructs, but you likely also know
that I don't like the redundancy resulting when not using them.
You've vetoed a change by Roger in this direction recently.
While I did accept this (as the code we have is fine as is as
well), I don't think your personal taste should rule out such
uses. If anything, may I ask for clear guidelines (to be put into
./CODING_STYLE after having reached consensus) which parts
of the C language are fine to use, and which ones aren't?

>> --- a/xen/include/asm-x86/bitops.h
>> +++ b/xen/include/asm-x86/bitops.h
>> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>>      return r + 1;
>>  }
>>  
>> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
>> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
>> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
> 
> So (the dangers of false micro-optimisation aside), POPCNT_32 will
> probably be better using a redundant %ds prefix.

For the use in hweight32() - perhaps. But not for the uses in
hweight{16,8}(), as there original code and replacement fully
match up in lengths.

> The reason is that the result needs padding to 5 bytes, as the original
> instruction is `call disp32`, meaning that a single byte nop needs
> inserting.  The version with a single nop takes two decode ports as
> opposed to one, and single byte nops are forced to take an execution
> delay for backwards compatibility with DOS.
> 
> OTOH, I also bet that no one could observe a difference without using
> perf counters and fetch/decode uarch events.

Plus this is then a more general problem to address, not something
to specifically worry about here. Would you have asked for an
explicit override if the insn was written using a proper mnemonic
(i.e. if we didn't need to cope with incapable binutils)?

Furthermore, if we were to follow Linux, we'd rather use an empty
REX prefix here.

Jan
Jan Beulich June 4, 2019, 6:08 a.m. UTC | #3
>>> On 03.06.19 at 10:13, <JBeulich@suse.com> wrote:
>>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
>> On 31/05/2019 02:54, Jan Beulich wrote:
>>> This is faster than using the software implementation, and the insn is
>>> available on all half-way recent hardware. Therefore convert
>>> generic_hweight<N>() to out-of-line functions (without affecting Arm)
>>> and use alternatives patching to replace the function calls.
>>>
>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> So, I trust you weren't expecting to just ack this and let it go in?
>> 
>> The principle of the patch (use popcnt when available) is an improvement
>> which I'm entirely in agreement with, but everything else is a problem.
>> 
>> The long and the short of it is that I'm not going to accept any version
>> of this which isn't the Linux version.
> 
> You're kidding. We want to move away from assembly wherever we
> can, and you demand new assembly code?
> 
>> From a microarchitectural standpoint, the tradeoff between fractional
>> register scheduling flexibility (which in practice is largely bound
>> anyway by real function calls in surrounding code) and increased icache
>> pressure/coldness (from the redundant function copies) falls largely in
>> favour of the Linux way of doing it, a cold icache line is
>> disproportionately more expensive than requiring the compiler to order
>> its registers differently (especially as all non-obsolete processors
>> these days have zero-cost register renaming internally, for the purpose
>> of superscalar execution).
> 
> I'm afraid I'm struggling heavily as to what you're wanting to tell
> me here: Where's the difference (in this regard) between the
> change here and the way Linux does it? Both emit a CALL
> insn with registers set up suitably for it, and both patch it with a
> POPCNT insn using the registers as demanded by the CALL.

Having thought about this some more, in an attempt to understand
(a) what you mean and (b) how you want things done "your way",
I'm afraid I've only become more confused: Your
reply reminds me heavily of the discussion we had on the BMI2
patching series I had done (and now dropped): There you
complained about me _not_ using fixed registers and hence
potentially causing frequent i-cache-cold lines to be accessed.
While my original plan was to use a similar approach here, I
specifically went the opposite way to avoid similar complaints
of yours. Just to find that you use the (apparently) same
argument again. As a result I can only conclude that I'm now
pretty unclear on what model you would actually approve of.

Jan
Andrew Cooper June 4, 2019, 7:07 p.m. UTC | #4
On 03/06/2019 09:13, Jan Beulich wrote:
>>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
>> On 31/05/2019 02:54, Jan Beulich wrote:
>>> This is faster than using the software implementation, and the insn is
>>> available on all half-way recent hardware. Therefore convert
>>> generic_hweight<N>() to out-of-line functions (without affecting Arm)
>>> and use alternatives patching to replace the function calls.
>>>
>>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> So, I trust you weren't expecting to just ack this and let it go in?
>>
>> The principle of the patch (use popcnt when available) is an improvement
>> which I'm entirely in agreement with, but everything else is a problem.
>>
>> The long and the short of it is that I'm not going to accept any version
>> of this which isn't the Linux version.
> You're kidding. We want to move away from assembly wherever we
> can, and you demand new assembly code?
>
>> From a microarchitectural standpoint, the tradeoff between fractional
>> register scheduling flexibility (which in practice is largely bound
>> anyway by real function calls in surrounding code) and increased icache
>> pressure/coldness (from the redundant function copies) falls largely in
>> favour of the Linux way of doing it, a cold icache line is
>> disproportionately more expensive than requiring the compiler to order
>> its registers differently (especially as all non-obsolete processors
>> these days have zero-cost register renaming internally, for the purpose
>> of superscalar execution).
> I'm afraid I'm struggling heavily as to what you're wanting to tell
> me here: Where's the difference (in this regard) between the
> change here and the way Linux does it? Both emit a CALL
> insn with registers set up suitably for it, and both patch it with a
> POPCNT insn using the registers as demanded by the CALL.
>
> The difference to Linux is what gets called, not how the patching
> works (afaict). I'm simply not buying the combination of arguments
> and effects of the removal of the use of -ffixed-*.

The patch description made it look as if it was still variadic on the
input registers, and throughout the entire patch, I failed to work out
what the -ffixed-r* was actually trying to achieve.

After actually trying the code locally, I see that isn't the case.  I
apologise for my original reaction.

>
>>> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>>>  efi/mkreloc: efi/mkreloc.c
>>>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>>  
>>> +nocov-y += hweight.o
>> Irrespective of the exact specifics of how the patch ends up, I don't
>> think the nocov restriction is a direction we want to take.
>>
>> Coverage may not be a thing used in production, but when it is used for
>> development, it needs to not have random holes missing in the results data.
> Sure, but then we can't avoid saving/restoring the callee clobbered
> registers in the to be called functions.

Why is this of concern?

a) it's the compiler's job to DTRT, and the sum total of GCC's coverage
data appears to be "addq $1, mumble(%rip)"

b) coverage is just one of several things which might add
instrumentation, ubsan being the other example which Xen already supports.

>  Which in turn means I see no
> way of avoiding code duplications (be it in C or assembly) of the
> generic_hweight<N>() implementations.
>
>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>>> +
>> Does this work with Clang?
> I have no idea.

I do - every time I have a suspicion, the answer is invariably no.

clang: error: unknown argument: '-ffixed-rcx'
clang: error: unknown argument: '-ffixed-rdx'
clang: error: unknown argument: '-ffixed-rsi'
clang: error: unknown argument '-ffixed-r8', did you mean '-ffixed-r9'?
clang: error: unknown argument '-ffixed-r10', did you mean '-ffixed-r19'?
clang: error: unknown argument '-ffixed-r11', did you mean '-ffixed-r19'?

As it turns out, the missing -ffixed-r9 is a Hexagon option only, and
isn't applicable for x86.

>
>>> --- /dev/null
>>> +++ b/xen/arch/x86/hweight.c
>>> @@ -0,0 +1,28 @@
>>> +#define generic_hweight64 _hweight64
>>> +#define generic_hweight32 _hweight32
>>> +#define generic_hweight16 _hweight16
>>> +#define generic_hweight8  _hweight8
>>> +
>>> +#include <xen/compiler.h>
>>> +
>>> +#undef inline
>>> +#define inline always_inline
>>> +
>>> +#include <xen/bitops.h>
>>> +
>>> +#undef generic_hweight8
>>> +#undef generic_hweight16
>>> +#undef generic_hweight32
>>> +#undef generic_hweight64
>>> +
>>> +#define HWEIGHT(n)                                             \
>>> +typeof(_hweight##n) generic_hweight##n;                        \
>>> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
>> A question to the rest of xen-devel.  Is there anyone else who can
>> actually work out what this construct is doing?
>>
>> I'd like to get a feel for how many people can even follow some of our C.
> I know you don't like such constructs, but you likely also know
> that I don't like the redundancy resulting when not using them.
> You've vetoed a change by Roger in this direction recently.
> While I did accept this (as the code we have is fine as is as
> well), I don't think your personal taste should rule out such
> uses. If anything, may I ask for clear guidelines (to be put into
> ./CODING_STYLE after having reached consensus) which parts
> of the C language are fine to use, and which ones aren't?

If it were up to me, I'd reject any use of constructs like this.  They
are fundamentally incompatible with easy-to-follow code, and I value
that as a far more important property than the source file being a few
lines shorter.

Particularly in this case, it really is just obfuscation, because the
longhand version can be written shorter than the HWEIGHT macro alone,
let alone its invocations:

unsigned int generic_hweight8 (unsigned int x) { return _hweight8(x);  }
unsigned int generic_hweight16(unsigned int x) { return _hweight16(x); }
unsigned int generic_hweight32(unsigned int x) { return _hweight32(x); }
unsigned int generic_hweight64(uint64_t x)     { return _hweight64(x); }

is a far easier to read result.  That said, the point is moot because
this file still doesn't compile.

>
>>> --- a/xen/include/asm-x86/bitops.h
>>> +++ b/xen/include/asm-x86/bitops.h
>>> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>>>      return r + 1;
>>>  }
>>>  
>>> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
>>> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
>>> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
>> So (the dangers of false micro-optimisation aside), POPCNT_32 will
>> probably be better using a redundant %ds prefix.
> For the use in hweight32() - perhaps. But not for the uses in
> hweight{16,8}(), as there original code and replacement fully
> match up in lengths.

Looking at the compiler generated code, the 32 and 16 bit are identical
other than the final imul.  The 8 bit case is very similar, but can
actually get away with imm8 rather than imm32.

There is one caller each for the 8 and 16 bit version, both of which are
in is_multicast_dest() in the vlapic emulation code.  This particular
example would actually benefit from Linux's example of aliasing of the 8
and 16bit versions to the 32bit version.

The only user which is in any way a fastpath is in __bitmap_weight(),
and I've got another cleanup patch for that.  (I also bet that doing an
x86 specific vectorised version of __bitmap_weight() using POPCNT's
memory operand would be a perf win, but that can wait until some free
time and some proper profiling support appears.)

Given that this is a helper which is unlikely to ever be rewritten, and
there doesn't appear to be an obvious way to get Clang to avoid using
specific registers, I'd still recommend the Linux way, with the 32 and
64bit versions custom written in assembly, and not worry about the
duplication.

>
>> The reason is that the result needs padding to 5 bytes, as the original
>> instruction is `call disp32`, meaning that a single byte nop needs
>> inserting.  The version with a single nop takes two decode ports as
>> opposed to one, and single byte nops are forced to take an execution
>> delay for backwards compatibility with DOS.
>>
>> OTOH, I also bet that no one could observe a difference without using
>> perf counters and fetch/decode uarch events.
> Plus this is then a more general problem to address, not something
> to specifically worry about here. Would you have asked for an
> explicit override if the insn was written using a proper mnemonic
> (i.e. if we didn't need to cope with incapable binutils)?

I probably wouldn't have noticed, but as alluded to earlier, I really
don't expect it matters.

Talking of binutils, when are we going to get around to upping our
minimum version?

~Andrew
Jan Beulich June 5, 2019, 8:52 a.m. UTC | #5
>>> On 04.06.19 at 21:07, <andrew.cooper3@citrix.com> wrote:
> On 03/06/2019 09:13, Jan Beulich wrote:
>>>>> On 31.05.19 at 22:43, <andrew.cooper3@citrix.com> wrote:
>>> On 31/05/2019 02:54, Jan Beulich wrote:
>>>> @@ -245,6 +246,9 @@ boot/mkelf32: boot/mkelf32.c
>>>>  efi/mkreloc: efi/mkreloc.c
>>>>  	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>>>  
>>>> +nocov-y += hweight.o
>>> Irrespective of the exact specifics of how the patch ends up, I don't
>>> think the nocov restriction is a direction we want to take.
>>>
>>> Coverage may not be a thing used in production, but when it is used for
>>> development, it needs to not have random holes missing in the results data.
>> Sure, but then we can't avoid saving/restoring the callee clobbered
>> registers in the to be called functions.
> 
> Why is this of concern?
> 
> a) it's the compiler's job to DTRT, and the sum total of GCC's coverage
> data appears to be "addq $1, mumble(%rip)"
> 
> b) coverage is just one of several things which might add
> instrumentation, ubsan being the other example which Xen already supports.

Hmm, have I been under the wrong impression that, like for -p, a
function call _might_ get inserted at the beginning of functions? Is
it spelled out somewhere what exact code can be inserted? The
command line option descriptions alone don't ... What we need to
avoid are function calls; I believe any other sort of code insertion
ought to be okay.

However - having seen your suggestion to follow Linux and use
assembly implementations: How would not adding coverage
instrumentation here differ from there not being any in assembly
code?

As to ubsan - yes, this should be treated the same way as the
coverage instrumentation. I.e. unless there's a guarantee that
it can't result in emitted function calls, it would need to be
suppressed.

In fact I was considering adding a build step to verify that the
object file has no external dependencies, i.e. in particular
doesn't call any functions.

>>  Which in turn means I see no
>> way of avoiding code duplications (be it in C or assembly) of the
>> generic_hweight<N>() implementations.
>>
>>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>>>> +
>>> Does this work with Clang?
>> I have no idea.
> 
> I do - every time I have a suspicion, the answer is invariably no.
> 
> clang: error: unknown argument: '-ffixed-rcx'
> clang: error: unknown argument: '-ffixed-rdx'
> clang: error: unknown argument: '-ffixed-rsi'
> clang: error: unknown argument '-ffixed-r8', did you mean '-ffixed-r9'?
> clang: error: unknown argument '-ffixed-r10', did you mean '-ffixed-r19'?
> clang: error: unknown argument '-ffixed-r11', did you mean '-ffixed-r19'?
> 
> As it turns out, the missing -ffixed-r9 is a Hexagon option only, and
> isn't applicable for x86.

And -ffixed-r19 hardly can be x86 either. Do they expect the 32-bit
register names? https://clang.llvm.org/docs/UsersManual.html has
no mention of -ffixed-* at all, despite the hints the compiler gives
in the output you've quoted.

I have to admit that I find it pretty frustrating to have to deal with
such issues: If they claim to be gcc compatible, there shouldn't be
a need to second-guess basically every not-entirely-standard
command line option. However much I like being able to have code
compilable with multiple compilers (not least because one may
spot issues another doesn't), I don't think this is a workable basis.
I.e. without a sufficient level of compatibility on their part
I'd like to raise the question of whether we really need to retain
clang compatibility in our code.

>>>> --- /dev/null
>>>> +++ b/xen/arch/x86/hweight.c
>>>> @@ -0,0 +1,28 @@
>>>> +#define generic_hweight64 _hweight64
>>>> +#define generic_hweight32 _hweight32
>>>> +#define generic_hweight16 _hweight16
>>>> +#define generic_hweight8  _hweight8
>>>> +
>>>> +#include <xen/compiler.h>
>>>> +
>>>> +#undef inline
>>>> +#define inline always_inline
>>>> +
>>>> +#include <xen/bitops.h>
>>>> +
>>>> +#undef generic_hweight8
>>>> +#undef generic_hweight16
>>>> +#undef generic_hweight32
>>>> +#undef generic_hweight64
>>>> +
>>>> +#define HWEIGHT(n)                                             \
>>>> +typeof(_hweight##n) generic_hweight##n;                        \
>>>> +unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
>>> A question to the rest of xen-devel.  Is there anyone else who can
>>> actually work out what this construct is doing?
>>>
>>> I'd like to get a feel for how many people can even follow some of our C.
>> I know you don't like such constructs, but you likely also know
>> that I don't like the redundancy resulting when not using them.
>> You've vetoed a change by Roger in this direction recently.
>> While I did accept this (as the code we have is fine as is as
>> well), I don't think your personal taste should rule out such
>> uses. If anything, may I ask for clear guidelines (to be put into
>> ./CODING_STYLE after having reached consensus) which parts
>> of the C language are fine to use, and which ones aren't?
> 
> If it were up to me, I'd reject any use of constructs like this.  They
> are fundamentally incompatible with easy-to-follow code, and I value
> that as a far more important property than the source file being a few
> lines shorter.
> 
> Particularly in this case, it really is just obfuscation, because the
> longhand version can be written shorter than the HWEIGHT macro alone,
> let alone its invocations:
> 
> unsigned int generic_hweight8 (unsigned int x) { return _hweight8(x);  }
> unsigned int generic_hweight16(unsigned int x) { return _hweight16(x); }
> unsigned int generic_hweight32(unsigned int x) { return _hweight32(x); }
> unsigned int generic_hweight64(uint64_t x)     { return _hweight64(x); }
> 
> is a far easier to read result.

Okay, I'll switch. But you realize that this loses a property I had
specifically added: My original variant also ensured _hweight<N>()
and generic_hweight<N>() have matching types.

>  That said, the point is moot because this file still doesn't compile.

With clang you mean? Or with gcc (where it obviously does compile
for me, so I'd appreciate more detail).

>>>> --- a/xen/include/asm-x86/bitops.h
>>>> +++ b/xen/include/asm-x86/bitops.h
>>>> @@ -469,15 +469,35 @@ static inline int fls(unsigned int x)
>>>>      return r + 1;
>>>>  }
>>>>  
>>>> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
>>>> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
>>>> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
>>> So (the dangers of false micro-optimisation aside), POPCNT_32 will
>>> probably be better using a redundant %ds prefix.
>> For the use in hweight32() - perhaps. But not for the uses in
>> hweight{16,8}(), as there original code and replacement fully
>> match up in lengths.
> 
> Looking at the compiler generated code, the 32 and 16 bit are identical
> other than the final imul.  The 8 bit case is very similar, but can
> actually get away with imm8 rather than imm32.

What imul? What imm8 / imm32? Are you talking about
generic_hweight<N>()? The code fragment above is the to-be-
patched-in code, which shouldn't require anything like this.

> There is one caller each for the 8 and 16 bit version, both of which are
> in is_multicast_dest() in the vlapic emulation code.  This particular
> example would actually benefit from Linux's example of aliasing of the 8
> and 16bit versions to the 32bit version.

Possibly. However, for one thing I don't think specific use sites
should influence the decision here. And for another, I don't think
hweight() of any form is needed in those two cases in the first
place. All they want to know is whether more than one bit is set,
which can be achieved by a standard is-power-of-2 check.

> The only user which is in any way a fastpath is in __bitmap_weight(),
> and I've got another cleanup patch for that.  (I also bet that doing an
> x86 specific vectorised version of __bitmap_weight() using POPCNT's
> memory operand would be a perf win, but that can wait until some free
> time and some proper profiling support appears.)
> 
> Given that this is a helper which is unlikely to ever be rewritten, and
> there doesn't appear to be an obvious way to get Clang to avoid using
> specific registers, I'd still recommend the Linux way, with the 32 and
> 64bit versions custom written in assembly, and not worry about the
> duplication.

While you're likely right about the "never be rewritten" part (leaving
aside the logic change in patch 1 of the series, which - had we had
an assembly implementation already - would have meant a rewrite),
I'm still rather puzzled by your suggestion to extend the amount of
assembly code we have, when it can easily be written in C.

Anyway, if I were to clone Linux's code, I wouldn't use it entirely
as is anyway (in particular I'd like to stick to clobbering %rdi at
the call site, rather than preserving it in the functions). Yet you
seem to be demanding that we do.

> Talking of binutils, when are we going to get around to upping our
> minimum version?

I guess as soon as someone cares enough to submit a proposal for
a new minimum version that covers all the environments we care
about being able to build on. Iirc there's some breakage already
anyway for the oldest version we claim to support.

Jan

Patch

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -31,6 +31,7 @@  obj-y += emul-i8254.o
 obj-y += extable.o
 obj-y += flushtlb.o
 obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
+obj-y += hweight.o
 obj-y += hypercall.o
 obj-y += i387.o
 obj-y += i8259.o
@@ -245,6 +246,9 @@  boot/mkelf32: boot/mkelf32.c
 efi/mkreloc: efi/mkreloc.c
 	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
 
+nocov-y += hweight.o
+hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
+
 .PHONY: clean
 clean::
 	rm -f asm-offsets.s *.lds boot/*.o boot/*~ boot/core boot/mkelf32
--- /dev/null
+++ b/xen/arch/x86/hweight.c
@@ -0,0 +1,28 @@ 
+#define generic_hweight64 _hweight64
+#define generic_hweight32 _hweight32
+#define generic_hweight16 _hweight16
+#define generic_hweight8  _hweight8
+
+#include <xen/compiler.h>
+
+#undef inline
+#define inline always_inline
+
+#include <xen/bitops.h>
+
+#undef generic_hweight8
+#undef generic_hweight16
+#undef generic_hweight32
+#undef generic_hweight64
+
+#define HWEIGHT(n)                                             \
+typeof(_hweight##n) generic_hweight##n;                        \
+unsigned int generic_hweight##n(typeof((uint##n##_t)0 + 0U) x) \
+{                                                              \
+    return _hweight##n(x);                                     \
+}
+
+HWEIGHT(64)
+HWEIGHT(32)
+HWEIGHT(16)
+HWEIGHT( 8)
--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -469,15 +469,35 @@  static inline int fls(unsigned int x)
     return r + 1;
 }
 
+/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
+#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
+#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
+
 /**
  * hweightN - returns the hamming weight of a N-bit word
  * @x: the word to weigh
  *
  * The Hamming Weight of a number is the total number of bits set in it.
  */
-#define hweight64(x) generic_hweight64(x)
-#define hweight32(x) generic_hweight32(x)
-#define hweight16(x) generic_hweight16(x)
-#define hweight8(x) generic_hweight8(x)
+#define hweight_(n, x, insn, setup, cout, cin) ({                         \
+    unsigned int res_;                                                    \
+    /*                                                                    \
+     * For the function call the POPCNT input register needs to be marked \
+     * modified as well. Set up a local variable of appropriate type      \
+     * for this purpose.                                                  \
+     */                                                                   \
+    typeof((uint##n##_t)(x) + 0U) val_ = (x);                             \
+    alternative_io(setup "; call generic_hweight" #n,                     \
+                   insn, X86_FEATURE_POPCNT,                              \
+                   ASM_OUTPUT2([res] "=a" (res_), [val] cout (val_)),     \
+                   [src] cin (val_));                                     \
+    res_;                                                                 \
+})
+#define hweight64(x) hweight_(64, x, POPCNT_64, "", "+D", "g")
+#define hweight32(x) hweight_(32, x, POPCNT_32, "", "+D", "g")
+#define hweight16(x) hweight_(16, x, "movzwl %w[src], %[val]; " POPCNT_32, \
+                              "mov %[src], %[val]", "=&D", "rm")
+#define hweight8(x)  hweight_( 8, x, "movzbl %b[src], %[val]; " POPCNT_32, \
+                              "mov %[src], %[val]", "=&D", "rm")
 
 #endif /* _X86_BITOPS_H */