
[0/4] riscv: uaccess: optimizations

Message ID 20240625040500.1788-1-jszhang@kernel.org (mailing list archive)

Message

Jisheng Zhang June 25, 2024, 4:04 a.m. UTC
This series tries to optimize riscv uaccess in the following way:

patch1 implements user_access_begin() and friends, i.e. the unsafe
accessors. Currently, when a function like strncpy_from_user() is
called, the userspace access protection is disabled and re-enabled
for every word read. After patch1, the protection is disabled once at
the beginning of the copy and re-enabled once at the end.
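
As an illustration, a user of these helpers then looks roughly like
this (a simplified sketch; the real lib/strncpy_from_user.c is
word-at-a-time and more careful, and the function name here is made
up):

    static long copy_str_from_user(char *dst, const char __user *src, long max)
    {
            long res = 0;
            char c;

            if (!user_access_begin(src, max))           /* protection disabled once */
                    return -EFAULT;
            while (res < max) {
                    unsafe_get_user(c, src + res, efault);  /* no per-access toggle */
                    dst[res] = c;
                    if (!c)
                            break;
                    res++;
            }
            user_access_end();                          /* protection re-enabled once */
            return res;
    efault:
            user_access_end();
            return -EFAULT;
    }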

patch2 is a simple cleanup

patch3 uses 'asm goto' for put_user()
patch4 uses 'asm goto output' for get_user()

Both patch3 and patch4 rely on the fact that 'asm goto' and
'asm goto output' generate noticeably better code: there is no need
to test an error value, because on a fault the exception handling
jumps straight to the error label.
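
For reference, the kind of store helper patch3 adds looks roughly like
this (a sketch of the pattern only, not the exact patch; the extable
macro name and constraints shown are assumptions):

    #define __put_user_asm_goto(insn, x, ptr, label)                \
    do {                                                            \
            __typeof__(*(ptr)) __x = (x);                           \
            asm goto(                                               \
                    "1:     " insn " %z0, %1\n"                     \
                    _ASM_EXTABLE(1b, %l2)  /* on fault, jump to label */ \
                    : : "rJ" (__x), "m" (*(ptr))                    \
                    : : label);                                     \
    } while (0)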


Jisheng Zhang (4):
  riscv: implement user_access_begin and families
  riscv: uaccess: use input constraints for ptr of __put_user
  riscv: uaccess: use 'asm goto' for put_user()
  riscv: uaccess: use 'asm goto output' for get_user

 arch/riscv/include/asm/uaccess.h | 201 +++++++++++++++++++++++--------
 1 file changed, 149 insertions(+), 52 deletions(-)

Comments

Arnd Bergmann June 25, 2024, 7:21 a.m. UTC | #1
On Tue, Jun 25, 2024, at 06:04, Jisheng Zhang wrote:
> This series tries to optimize riscv uaccess in the following way:
>
> patch1 implements user_access_begin() and friends, i.e. the unsafe
> accessors. Currently, when a function like strncpy_from_user() is
> called, the userspace access protection is disabled and re-enabled
> for every word read. After patch1, the protection is disabled once at
> the beginning of the copy and re-enabled once at the end.
>
> patch2 is a simple cleanup
>
> patch3 uses 'asm goto' for put_user()
> patch4 uses 'asm goto output' for get_user()
>
> Both patch3 and patch4 rely on the fact that 'asm goto' and
> 'asm goto output' generate noticeably better code: there is no need
> to test an error value, because on a fault the exception handling
> jumps straight to the error label.

Nice!

I hope that we can one day make this more straightforward
and share more of the implementation across architectures,
in particular the bits that are just wrappers around
the low-level asm.

We have something like this already on powerpc and x86,
and Linus just did the version for arm64, which I assume
you are using as a template for this. Four architectures
is probably the point at which we should try to consolidate
the code again and move as much as we can into asm-generic.

I think the only bits we really need from an architecture
here are:

- __enable_user_access()/__disable_user_access() in
   the label version
- __raw_get_mem_{1,2,4,8}() and __raw_put_mem_{1,2,4,8}()

and then we can build all the normal interface functions
on those in a way that assumes we have asm goto available,
with the appropriate fallback in __raw_get_mem_*() as
long as we need to support gcc-10 and older.
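
Roughly, the generic wrapper could then look something like this (just
a sketch of the idea; the hook signatures are assumptions, and the
sizeof-based dispatch is omitted):

    #define put_user(x, ptr)                                        \
    ({                                                              \
            __label__ Efault;                                       \
            __typeof__(ptr) __p = (ptr);                            \
            long __err = 0;                                         \
            if (likely(access_ok(__p, sizeof(*__p)))) {             \
                    __enable_user_access();                         \
                    __raw_put_mem_8((x), __p, Efault); /* size dispatch omitted */ \
                    __disable_user_access();                        \
            } else {                                                \
                    __err = -EFAULT;                                \
            }                                                       \
            if (0) {                                                \
    Efault:         __disable_user_access();                        \
                    __err = -EFAULT;                                \
            }                                                       \
            __err;                                                  \
    })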

      Arnd
Linus Torvalds June 25, 2024, 6:12 p.m. UTC | #2
On Tue, 25 Jun 2024 at 00:22, Arnd Bergmann <arnd@arndb.de> wrote:
>
> I think the only bits we really need from an architecture
> here are:
>
> - __enable_user_access()/__disable_user_access() in
>    the label version

KCSAN wants user_access_save/restore() too, but yes.

> - __raw_get_mem_{1,2,4,8}() and __raw_put_mem_{1,2,4,8}()

You still need to split it into user/kernel.

Yes, *often* there is just one address space and they can be one "mem"
accessor thing, but we do have architectures with actually separate
user and kernel address spaces.

But yes, it would be lovely if we did things as "implement the
low-level accessor functions and we'll wrap them for the generic case"
rather than have every architecture have to do the wrapping..

The wrapping isn't just the label-based "unsafe" accesses and the
traditional error number return case, it's also the size-based case
statements ("poor man's generics").
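
For readers less familiar with that idiom, the size-based dispatch
typically looks something like this (generic shape only; the hook
names reuse Arnd's list above and their signatures are assumptions):

    #define __get_user_size(x, ptr, size, label)                    \
            switch (size) {                                         \
            case 1: __raw_get_mem_1((x), (ptr), label); break;      \
            case 2: __raw_get_mem_2((x), (ptr), label); break;      \
            case 4: __raw_get_mem_4((x), (ptr), label); break;      \
            case 8: __raw_get_mem_8((x), (ptr), label); break;      \
            default: BUILD_BUG();                                   \
            }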

The problem, of course, is that the majority of architectures are
based on the legacy interfaces, so it's a lot of annoying low-level
small details. I think there's a reason why nobody did the arm64
"unsafe" ones - the patch didn't end up being that bad, but it's just
fairly grotty code.

But apparently the arm64 patch was simple enough that at least RISC-V
people went "that doesn't look so bad".

Maybe other architectures will also be fairly straightforward when
there's a fairly clear example of how it should be done.

             Linus
Jisheng Zhang June 26, 2024, 1:04 p.m. UTC | #3
On Tue, Jun 25, 2024 at 11:12:16AM -0700, Linus Torvalds wrote:
> On Tue, 25 Jun 2024 at 00:22, Arnd Bergmann <arnd@arndb.de> wrote:
> >
> > I think the only bits we really need from an architecture
> > here are:
> >
> > - __enable_user_access()/__disable_user_access() in
> >    the label version
> 
> KCSAN wants user_access_save/restore() too, but yes.
> 
> > - __raw_get_mem_{1,2,4,8}() and __raw_put_mem_{1,2,4,8}()
> 
> You still need to split it into user/kernel.
> 
> Yes, *often* there is just one address space and they can be one "mem"
> accessor thing, but we do have architectures with actually separate
> user and kernel address spaces.
> 
> But yes, it would be lovely if we did things as "implement the
> low-level accessor functions and we'll wrap them for the generic case"
> rather than have every architecture have to do the wrapping..
> 
> The wrapping isn't just the label-based "unsafe" accesses and the
> traditional error number return case, it's also the size-based case
> statements ("poor man's generics").
> 
> The problem, of course, is that the majority of architectures are
> based on the legacy interfaces, so it's a lot of annoying low-level
> small details. I think there's a reason why nobody did the arm64
> "unsafe" ones - the patch didn't end up being that bad, but it's just
> fairly grotty code.
> 
> But apparently the arm64 patch was simple enough that at least RISC-V
> people went "that doesn't look so bad".
> 
> Maybe other architectures will also be fairly straightforward when
> there's a fairly clear example of how it should be done.

>> We have something like this already on powerpc and x86,
>> and Linus just did the version for arm64, which I assume
>> you are using as a template for this. Four architectures

Yep, this series is inspired by Linus's patch series, and to be honest,
some code is borrowed from his arm64 version ;) When I saw he had
improved arm64, I thought it was a nice improvement we would want for
riscv too, so I tried to do the same here.
Linus Torvalds June 30, 2024, 4:59 p.m. UTC | #4
On Tue, 25 Jun 2024 at 11:12, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> But yes, it would be lovely if we did things as "implement the
> low-level accessor functions and we'll wrap them for the generic case"
> rather than have every architecture have to do the wrapping..

Btw, to do that _well_, we need to expand on the user access functions
a bit more.

In particular, we can already implement "get_user()" on top of
user_access_begin() and friends something like this:

  #define get_user(x,ptr) ({                                    \
        __label__ Efault_read;                                  \
        long __err;                                             \
        __typeof__(ptr) __ptr = (ptr);                          \
        if (likely(user_access_begin(__ptr, sizeof(x)))) {      \
                unsafe_get_user(x, __ptr, Efault_read);         \
                user_access_end();                              \
                __err = 0;                                      \
        } else {                                                \
                if (0) {                                        \
Efault_read:            user_access_end();                      \
                }                                               \
                x = (__typeof__(x))0;                           \
                __err = -EFAULT;                                \
        }                                                       \
        __err; })

and it doesn't generate _horrible_ code. It looks pretty bad, but all
the error handling goes out-of-line, so on x86-64 (without debug
options) it generates code something like this:

        test   %rdi,%rdi
        js     <cap_validate_magic+0x98>
        stac
        lfence
        mov    (%rdi),%ecx
        clac

which is certainly not horrid. But it's not great, because that lfence
ends up really screwing up the pipeline.

The manually coded out-of-line code generates this instead:

        mov    %rax,%rdx
        sar    $0x3f,%rdx
        or     %rdx,%rax
        stac
        movzbl (%rax),%edx
        xor    %eax,%eax
        clac
        ret

because it doesn't do a conditional branch (with the required spectre
mitigation), but instead turns the address check into a data
dependency: a negative (kernel) address becomes "all bits set", which
is guaranteed to cause a page fault.
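
In C, that masking trick is essentially (a sketch, with a made-up
helper name):

    static inline void __user *mask_user_ptr(const void __user *ptr)
    {
            long mask = (long)ptr >> 63;  /* 0 for user pointers, all-ones for kernel ones */

            return (void __user *)((unsigned long)ptr | (unsigned long)mask);
    }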

But we *can* get the user accesses to do the same with a slight
expansion of user_access_begin():

        stac
        mov    %rdi,%rax
        sar    $0x3f,%rax
        or     %rdi,%rax
        mov    (%rax),%eax
        clac

by just introducing a notion of "masked_user_access". The get_user()
macro part would look like this:

        __typeof__(ptr) __ptr;                                  \
        __ptr = masked_user_access_begin(ptr);                  \
        if (1) {                                                \
                unsafe_get_user(x, __ptr, Efault_read);         \
                user_access_end();                              \

and the patch to implement this is attached. I've had it around for a
while, but I don't know how many architectures can do this.

Note this part of the commit message:

  This model only works for dense accesses that start at 'src' and on
  architectures that have a guard region that is guaranteed to fault in
  between the user space and the kernel space area.

which is true on x86-64, but that "guard region" thing might not be
true everywhere.

Will/Catalin - would that

        src = masked_user_access_begin(src);

work on arm64? The code does do something like that with
__uaccess_mask_ptr() already, but at least currently it doesn't do the
"avoid conditional entirely", the masking is just in _addition_ to the
access_ok().

                 Linus
Will Deacon July 5, 2024, 11:25 a.m. UTC | #5
On Sun, Jun 30, 2024 at 09:59:36AM -0700, Linus Torvalds wrote:
> On Tue, 25 Jun 2024 at 11:12, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > But yes, it would be lovely if we did things as "implement the
> > low-level accessor functions and we'll wrap them for the generic case"
> > rather than have every architecture have to do the wrapping..
> 
> Btw, to do that _well_, we need to expand on the user access functions
> a bit more.

[...]

> Will/Catalin - would that
> 
>         src = masked_user_access_begin(src);
> 
> work on arm64? The code does do something like that with
> __uaccess_mask_ptr() already, but at least currently it doesn't do the
> "avoid conditional entirely", the masking is just in _addition_ to the
> access_ok().

I think we'd need to go back to our old __uaccess_mask_ptr()
implementation, where kernel addresses end up being forced to NULL. In
other words, revert 2305b809be93 ("arm64: uaccess: simplify
uaccess_mask_ptr()"). If we then want to drop the access_ok() entirely,
we'd probably want to use an address that lives between the two TTBRs
(i.e. in the "guard region" you mentioned above), just in case somebody
has fscked around with /proc/sys/vm/mmap_min_addr.

Will
Linus Torvalds July 5, 2024, 5:58 p.m. UTC | #6
On Fri, 5 Jul 2024 at 04:25, Will Deacon <will@kernel.org> wrote:
>
> we'd probably want to use an address that lives between the two TTBRs
> (i.e. in the "guard region" you mentioned above), just in case somebody
> has fscked around with /proc/sys/vm/mmap_min_addr.

Yes, I don't want to use a NULL pointer and rely on mmap_min_addr.

For x86-64, we have two "guard regions" that can be used to generate
an address that is guaranteed to fault:

 - the kernel always lives in the "top bit set" part of the address
space (and any address tagging bits don't touch that part), and does
not map the highest virtual address because that's used for error
pointers, so the "all bits set" address always faults

 - the region between valid user addresses and kernel addresses is
also always going to fault, and we don't have them adjacent to each
other (unlike, for example, 32-bit i386, where the kernel address
space is directly adjacent to the top of user addresses)

So on x86-64, the simple solution is to just say "we know if the top
bit is clear, it cannot ever touch kernel code, and if the top bit is
set we have to make the address fault". So just duplicating the top
bit (with an arithmetic shift) and or'ing it with the low bits, we get
exactly what we want.

But my knowledge of arm64 is weak enough that while I am reading
assembly language and I know that instead of the top bit, it's bit55,
I don't know what the actual rules for the translation table registers
are.

If the all-bits-set address is guaranteed to always trap, then arm64
could just use the same thing x86 does (just duplicating bit 55
instead of the sign bit)?

               Linus
Will Deacon July 8, 2024, 1:52 p.m. UTC | #7
On Fri, Jul 05, 2024 at 10:58:29AM -0700, Linus Torvalds wrote:
> On Fri, 5 Jul 2024 at 04:25, Will Deacon <will@kernel.org> wrote:
> >
> > we'd probably want to use an address that lives between the two TTBRs
> > (i.e. in the "guard region" you mentioned above), just in case somebody
> > has fscked around with /proc/sys/vm/mmap_min_addr.
> 
> Yes, I don't want to use a NULL pointer and rely on mmap_min_addr.
> 
> For x86-64, we have two "guard regions" that can be used to generate
> an address that is guaranteed to fault:
> 
>  - the kernel always lives in the "top bit set" part of the address
> space (and any address tagging bits don't touch that part), and does
> not map the highest virtual address because that's used for error
> pointers, so the "all bits set" address always faults
> 
>  - the region between valid user addresses and kernel addresses is
> also always going to fault, and we don't have them adjacent to each
> other (unlike, for example, 32-bit i386, where the kernel address
> space is directly adjacent to the top of user addresses)

I think we're very similar on arm64. The kernel lives at the top (i.e.
virtual address space descends below 0) and is mapped by TTBR1.
Userspace lives at the bottom (i.e. virtual address space ascends from
0) and is mapped by TTBR0. There's an unaddressable gap in the middle
and it's bit 55 of the address which determines user vs kernel (well, it
selects the TTBR to be precise).

The kernel doesn't map the top 8MiB of its VA space, although talking to
Mark, it sounds like we might not be as robust as x86 if there's a stray
dereference of an unaligned error pointer that straddles 0. He can
elaborate, but we probably can't rely on a pointer of all-ones faulting
safely for the same reason.

> So on x86-64, the simple solution is to just say "we know if the top
> bit is clear, it cannot ever touch kernel code, and if the top bit is
> set we have to make the address fault". So just duplicating the top
> bit (with an arithmetic shift) and or'ing it with the low bits, we get
> exactly what we want.
> 
> But my knowledge of arm64 is weak enough that while I am reading
> assembly language and I know that instead of the top bit, it's bit55,
> I don't know what the actual rules for the translation table registers
> are.
> 
> If the all-bits-set address is guaranteed to always trap, then arm64
> could just use the same thing x86 does (just duplicating bit 55
> instead of the sign bit)?

Perhaps we could just force accesses with bit 55 set to the address
'1UL << 55'? That should sit slap bang in the middle of the guard
region between the TTBRs and I think it would resolve any issues we may
have with wrapping. It still means effectively reverting 2305b809be93
("arm64: uaccess: simplify uaccess_mask_ptr()"), though.

Dunno. Mark, Catalin, what do you guys reckon?

Will
Mark Rutland July 8, 2024, 3:21 p.m. UTC | #8
On Fri, Jul 05, 2024 at 10:58:29AM -0700, Linus Torvalds wrote:
> On Fri, 5 Jul 2024 at 04:25, Will Deacon <will@kernel.org> wrote:
> >
> > we'd probably want to use an address that lives between the two TTBRs
> > (i.e. in the "guard region" you mentioned above), just in case somebody
> > has fscked around with /proc/sys/vm/mmap_min_addr.
> 
> Yes, I don't want to use a NULL pointer and rely on mmap_min_addr.
> 
> For x86-64, we have two "guard regions" that can be used to generate
> an address that is guaranteed to fault:
> 
>  - the kernel always lives in the "top bit set" part of the address
> space (and any address tagging bits don't touch that part), and does
> not map the highest virtual address because that's used for error
> pointers, so the "all bits set" address always faults

The same should be true on arm64, though I'm not immediately sure if we
explicitly reserve that VA region -- if we don't, then we should.

>  - the region between valid user addresses and kernel addresses is
> also always going to fault, and we don't have them adjacent to each
> other (unlike, for example, 32-bit i386, where the kernel address
> space is directly adjacent to the top of user addresses)

Today we have a gap between the TTBR0 and TTBR1 VA ranges in all
configurations, but in future (with the new FEAT_D128 page table format)
we will have configurations where there's no gap between the two ranges.

> So on x86-64, the simple solution is to just say "we know if the top
> bit is clear, it cannot ever touch kernel code, and if the top bit is
> set we have to make the address fault". So just duplicating the top
> bit (with an arithmetic shift) and or'ing it with the low bits, we get
> exactly what we want.
> 
> But my knowledge of arm64 is weak enough that while I am reading
> assembly language and I know that instead of the top bit, it's bit55,
> I don't know what the actual rules for the translation table registers
> are.
> 
> If the all-bits-set address is guaranteed to always trap, then arm64
> could just use the same thing x86 does (just duplicating bit 55
> instead of the sign bit)?

I think something of that shape can work (see below). There are a couple
of things that make using all-ones unsafe:

1) Non-faulting parts of a misaligned load/store can occur *before* the
   fault is raised. If you have two pages where one of which is writable
   and the other of which is not writeable (in either order), a store
   which straddles those pages can write to the writeable page before
   raising a fault on the non-writeable page.

   I've seen this behaviour on real HW, and IIUC this is fairly common.

2) Loads/stores which wrap past 0xFFFF_FFFF_FFFF_FFFF access bytes at
   UNKNOWN addresses. An N-byte store at 0xFFFF_FFFF_FFFF_FFFF may write
   to N-1 bytes at an arbitrary address which is not
   0x0000_0000_0000_0000.

   In the latest ARM ARM (K.a), this is described tersely in section
   K1.2.9 "Out of range virtual address".

   That can be found at:

   https://developer.arm.com/documentation/ddi0487/ka/?lang=en

   I'm aware of implementation styles where that address is not zero and
   can be a TTBR1 (kernel) address.

Given that, we'd need to avoid all-ones, but provided we know that the
first access using the pointer will be limited to PAGE_SIZE bytes past
the pointer, we could round down the bad pointer to be somewhere within
the error pointer page, e.g.

	SBFX <mask>, <ptr>, #55, #1
	ORR  <ptr>, <ptr>, <mask>
	BIC <ptr>, <ptr>, <mask>, lsr #(64 - PAGE_SHIFT)

That last `BIC` instructions is "BIt Clear" AKA "AND NOT". When bit 55
is one that will clear the lower bits to round down to a page boundary,
and when bit 55 is zero it will have no effect (as it'll be an AND with
all-ones).
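
In C, that sequence is roughly equivalent to (sketch only):

    unsigned long addr = (unsigned long)ptr;
    unsigned long mask = (unsigned long)((long)(addr << 8) >> 63); /* SBFX: replicate bit 55 */

    addr |= mask;                            /* ORR: TTBR1 pointers become all-ones */
    addr &= ~(mask >> (64 - PAGE_SHIFT));    /* BIC: round down to a page boundary */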

Thanks,
Mark.
Mark Rutland July 8, 2024, 3:30 p.m. UTC | #9
On Mon, Jul 08, 2024 at 02:52:43PM +0100, Will Deacon wrote:
> On Fri, Jul 05, 2024 at 10:58:29AM -0700, Linus Torvalds wrote:
> > On Fri, 5 Jul 2024 at 04:25, Will Deacon <will@kernel.org> wrote:
> > So on x86-64, the simple solution is to just say "we know if the top
> > bit is clear, it cannot ever touch kernel code, and if the top bit is
> > set we have to make the address fault". So just duplicating the top
> > bit (with an arithmetic shift) and or'ing it with the low bits, we get
> > exactly what we want.
> > 
> > But my knowledge of arm64 is weak enough that while I am reading
> > assembly language and I know that instead of the top bit, it's bit55,
> > I don't know what the actual rules for the translation table registers
> > are.
> > 
> > If the all-bits-set address is guaranteed to always trap, then arm64
> > could just use the same thing x86 does (just duplicating bit 55
> > instead of the sign bit)?
> 
> Perhaps we could just force accesses with bit 55 set to the address
> '1UL << 55'? That should sit slap bang in the middle of the guard
> region between the TTBRs

Yep, that'll work until we handle FEAT_D128 where (1UL << 55) will be
the start of the TTBR1 range in some configurations.

> and I think it would resolve any issues we may have with wrapping. It
> still means effectively reverting 2305b809be93 ("arm64: uaccess:
> simplify uaccess_mask_ptr()"), though.

If we do bring that back, it'd be nice if we could do that without the
CSEL+CSDB, as the CSDB is liable to be expensive on some parts (e.g.
it's an alias of DSB on older designs).

> Dunno. Mark, Catalin, what do you guys reckon?

I think there's a slight variant of the x86 approach that might work,
I've posted in my reply at

  https://lore.kernel.org/lkml/ZowD3LQT_KTz2g4X@J2N7QTR9R3/

Mark.
Will Deacon July 23, 2024, 2:16 p.m. UTC | #10
On Mon, Jul 08, 2024 at 04:30:52PM +0100, Mark Rutland wrote:
> On Mon, Jul 08, 2024 at 02:52:43PM +0100, Will Deacon wrote:
> > On Fri, Jul 05, 2024 at 10:58:29AM -0700, Linus Torvalds wrote:
> > > On Fri, 5 Jul 2024 at 04:25, Will Deacon <will@kernel.org> wrote:
> > > So on x86-64, the simple solution is to just say "we know if the top
> > > bit is clear, it cannot ever touch kernel code, and if the top bit is
> > > set we have to make the address fault". So just duplicating the top
> > > bit (with an arithmetic shift) and or'ing it with the low bits, we get
> > > exactly what we want.
> > > 
> > > But my knowledge of arm64 is weak enough that while I am reading
> > > assembly language and I know that instead of the top bit, it's bit55,
> > > I don't know what the actual rules for the translation table registers
> > > are.
> > > 
> > > If the all-bits-set address is guaranteed to always trap, then arm64
> > > could just use the same thing x86 does (just duplicating bit 55
> > > instead of the sign bit)?
> > 
> > Perhaps we could just force accesses with bit 55 set to the address
> > '1UL << 55'? That should sit slap bang in the middle of the guard
> > region between the TTBRs
> 
> Yep, that'll work until we handle FEAT_D128 where (1UL << 55) will be
> the start of the TTBR1 range in some configurations.
> 
> > and I think it would resolve any issues we may have with wrapping. It
> > still means effectively reverting 2305b809be93 ("arm64: uaccess:
> > simplify uaccess_mask_ptr()"), though.
> 
> If we do bring that back, it'd be nice if we could do that without the
> CSEL+CSDB, as the CSDB is liable to be expensive on some parts (e.g.
> it's an alias of DSB on older designs).

DSB?! Are you sure? I thought it was basically a NOP for everybody apart
from a small subset of implementations.

Will
Palmer Dabbelt July 24, 2024, 10:57 p.m. UTC | #11
On Mon, 24 Jun 2024 21:04:56 PDT (-0700), jszhang@kernel.org wrote:
> This series tries to optimize riscv uaccess in the following way:
>
> patch1 implements user_access_begin() and friends, i.e. the unsafe
> accessors. Currently, when a function like strncpy_from_user() is
> called, the userspace access protection is disabled and re-enabled
> for every word read. After patch1, the protection is disabled once at
> the beginning of the copy and re-enabled once at the end.
>
> patch2 is a simple cleanup
>
> patch3 uses 'asm goto' for put_user()
> patch4 uses 'asm goto output' for get_user()
>
> Both patch3 and patch4 rely on the fact that 'asm goto' and
> 'asm goto output' generate noticeably better code: there is no need
> to test an error value, because on a fault the exception handling
> jumps straight to the error label.
>
>
> Jisheng Zhang (4):
>   riscv: implement user_access_begin and families
>   riscv: uaccess: use input constraints for ptr of __put_user
>   riscv: uaccess: use 'asm goto' for put_user()
>   riscv: uaccess: use 'asm goto output' for get_user
>
>  arch/riscv/include/asm/uaccess.h | 201 +++++++++++++++++++++++--------
>  1 file changed, 149 insertions(+), 52 deletions(-)

This generally looks good to me, but there are some failures reported
by the LKP tester and I don't think they're spurious.