[4/6] arm64/io: Provide a WC friendly __iowriteXX_copy()

Message ID 4-v1-38290193eace+5-mlx5_arm_wc_jgg@nvidia.com (mailing list archive)
State Not Applicable
Series Fix mlx5 write combining support on new ARM64 cores

Checks

Context                Check    Description
netdev/tree_selection  success  Guessing tree name failed - patch did not apply

Commit Message

Jason Gunthorpe Feb. 21, 2024, 1:17 a.m. UTC
The kernel provides driver support for using write combining IO memory
through the __iowriteXX_copy() API which is commonly used as an optional
optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.

iomap_copy.c provides a generic implementation as a simple 4/8 byte at a
time copy loop that has worked well with past ARM64 CPUs, giving a high
frequency of large TLPs being successfully formed.

However modern ARM64 CPUs are quite sensitive to how the write combining
CPU HW is operated and a compiler generated loop with intermixed
load/store is not sufficient to frequently generate a large TLP. The CPUs
would like to see the entire TLP generated by consecutive store
instructions from registers. Compilers like gcc tend to intermix loads and
stores and have poor code generation, in part, due to the ARM64 situation
that writeq() does not codegen anything other than "[xN]". However even
with that resolved compilers like clang still do not have good code
generation.

This means on modern ARM64 CPUs the rate at which __iowriteXX_copy()
successfully generates large TLPs is very small (less than 1 in 10,000
tries), to the point that the use of WC is pointless.

Implement __iowrite32/64_copy() specifically for ARM64 and use inline
assembly to build consecutive blocks of STR instructions. Provide direct
support for 64/32/16 large TLP generation in this manner. Optimize for
common constant lengths so that the compiler can directly inline the store
blocks.

This brings the frequency of large TLP generation up to a high level that
is comparable with older CPU generations.

As the __iowriteXX_copy() family of APIs is intended for use with WC
incorporate the DGH hint directly into the function.
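
As an illustration (hypothetical driver code, not part of this patch), a
constant-length doorbell write using this API would look like:

	/* Hypothetical sketch: push a 64-byte descriptor to a WC-mapped
	 * doorbell page.  count == 8 u64s is a compile-time constant, so
	 * the inlined 8x STR block followed by the DGH hint is emitted.
	 */
	static void ring_wc_doorbell(void __iomem *wc_db, const u64 wqe[8])
	{
		__iowrite64_copy(wc_db, wqe, 8);
	}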

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 arch/arm64/include/asm/io.h | 132 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/io.c      |  42 ++++++++++++
 2 files changed, 174 insertions(+)

Comments

Will Deacon Feb. 21, 2024, 7:22 p.m. UTC | #1
On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> +static inline void __const_memcpy_toio_aligned64(volatile u64 __iomem *to,
> +						 const u64 *from, size_t count)
> +{
> +	switch (count) {
> +	case 8:
> +		asm volatile("str %x0, [%8, #8 * 0]\n"
> +			     "str %x1, [%8, #8 * 1]\n"
> +			     "str %x2, [%8, #8 * 2]\n"
> +			     "str %x3, [%8, #8 * 3]\n"
> +			     "str %x4, [%8, #8 * 4]\n"
> +			     "str %x5, [%8, #8 * 5]\n"
> +			     "str %x6, [%8, #8 * 6]\n"
> +			     "str %x7, [%8, #8 * 7]\n"
> +			     :
> +			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> +			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
> +			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
> +		break;
> +	case 4:
> +		asm volatile("str %x0, [%4, #8 * 0]\n"
> +			     "str %x1, [%4, #8 * 1]\n"
> +			     "str %x2, [%4, #8 * 2]\n"
> +			     "str %x3, [%4, #8 * 3]\n"
> +			     :
> +			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> +			       "rZ"(from[3]), "r"(to));
> +		break;
> +	case 2:
> +		asm volatile("str %x0, [%2, #8 * 0]\n"
> +			     "str %x1, [%2, #8 * 1]\n"
> +			     :
> +			     : "rZ"(from[0]), "rZ"(from[1]), "r"(to));
> +		break;
> +	case 1:
> +		__raw_writel(*from, to);

Shouldn't this be __raw_writeq?

Will
Jason Gunthorpe Feb. 21, 2024, 11:28 p.m. UTC | #2
On Wed, Feb 21, 2024 at 07:22:06PM +0000, Will Deacon wrote:
> On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> > +static inline void __const_memcpy_toio_aligned64(volatile u64 __iomem *to,
> > +						 const u64 *from, size_t count)
> > +{
> > +	switch (count) {
> > +	case 8:
> > +		asm volatile("str %x0, [%8, #8 * 0]\n"
> > +			     "str %x1, [%8, #8 * 1]\n"
> > +			     "str %x2, [%8, #8 * 2]\n"
> > +			     "str %x3, [%8, #8 * 3]\n"
> > +			     "str %x4, [%8, #8 * 4]\n"
> > +			     "str %x5, [%8, #8 * 5]\n"
> > +			     "str %x6, [%8, #8 * 6]\n"
> > +			     "str %x7, [%8, #8 * 7]\n"
> > +			     :
> > +			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> > +			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
> > +			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
> > +		break;
> > +	case 4:
> > +		asm volatile("str %x0, [%4, #8 * 0]\n"
> > +			     "str %x1, [%4, #8 * 1]\n"
> > +			     "str %x2, [%4, #8 * 2]\n"
> > +			     "str %x3, [%4, #8 * 3]\n"
> > +			     :
> > +			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> > +			       "rZ"(from[3]), "r"(to));
> > +		break;
> > +	case 2:
> > +		asm volatile("str %x0, [%2, #8 * 0]\n"
> > +			     "str %x1, [%2, #8 * 1]\n"
> > +			     :
> > +			     : "rZ"(from[0]), "rZ"(from[1]), "r"(to));
> > +		break;
> > +	case 1:
> > +		__raw_writel(*from, to);
> 
> Shouldn't this be __raw_writeq?

Yes! Thanks
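
For reference, the corrected case 1 arm of __const_memcpy_toio_aligned64()
then reads:

	case 1:
		__raw_writeq(*from, to);
		break;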

Jason
David Laight Feb. 22, 2024, 10:05 p.m. UTC | #3
From: Jason Gunthorpe
> Sent: 21 February 2024 01:17
> 
> The kernel provides driver support for using write combining IO memory
> through the __iowriteXX_copy() API which is commonly used as an optional
> optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.
> 
...
> Implement __iowrite32/64_copy() specifically for ARM64 and use inline
> assembly to build consecutive blocks of STR instructions. Provide direct
> support for 64/32/16 large TLP generation in this manner. Optimize for
> common constant lengths so that the compiler can directly inline the store
> blocks.
...
> +/*
> + * This generates a memcpy that works on a from/to address which is aligned to
> + * bits. Count is in terms of the number of bits sized quantities to copy. It
> + * optimizes to use the STR groupings when possible so that it is WC friendly.
> + */
> +#define memcpy_toio_aligned(to, from, count, bits)                        \
> +	({                                                                \
> +		volatile u##bits __iomem *_to = to;                       \
> +		const u##bits *_from = from;                              \
> +		size_t _count = count;                                    \
> +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> +                                                                          \
> +		for (; _from < _end_from; _from += 8, _to += 8)           \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> +		if ((_count % 8) >= 4) {                                  \

If (_count & 4) {
> +			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
> +			_from += 4;                                       \
> +			_to += 4;                                         \
> +		}                                                         \
> +		if ((_count % 4) >= 2) {                                  \
Ditto
> +			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
> +			_from += 2;                                       \
> +			_to += 2;                                         \
> +		}                                                         \
> +		if (_count % 2)                                           \
and again
> +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> +	})

But that looks a bit large to be inlined.
Except, perhaps, for small constant lengths.
I'd guess that even with write-combining and posted PCIe writes it
doesn't take much for it to be PCIe limited rather than cpu limited?

Is there a sane way to do the same for reads - they are far worse
than writes.

I solved the problem a few years back on a little ppc by using an on-cpu
DMA controller that could do PCIe master accesses and spinning until
the transfer completed.
But that sort of DMA controller seems uncommon.
We now initiate most of the transfers from the slave (an fpga) - after
writing a suitable/sane dma controller for that end.

	David

Jason Gunthorpe Feb. 22, 2024, 10:36 p.m. UTC | #4
On Thu, Feb 22, 2024 at 10:05:04PM +0000, David Laight wrote:
> From: Jason Gunthorpe
> > Sent: 21 February 2024 01:17
> > 
> > The kernel provides driver support for using write combining IO memory
> > through the __iowriteXX_copy() API which is commonly used as an optional
> > optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.
> > 
> ...
> > Implement __iowrite32/64_copy() specifically for ARM64 and use inline
> > assembly to build consecutive blocks of STR instructions. Provide direct
> > support for 64/32/16 large TLP generation in this manner. Optimize for
> > common constant lengths so that the compiler can directly inline the store
> > blocks.
> ...
> > +/*
> > + * This generates a memcpy that works on a from/to address which is aligned to
> > + * bits. Count is in terms of the number of bits sized quantities to copy. It
> > + * optimizes to use the STR groupings when possible so that it is WC friendly.
> > + */
> > +#define memcpy_toio_aligned(to, from, count, bits)                        \
> > +	({                                                                \
> > +		volatile u##bits __iomem *_to = to;                       \
> > +		const u##bits *_from = from;                              \
> > +		size_t _count = count;                                    \
> > +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > +                                                                          \
> > +		for (; _from < _end_from; _from += 8, _to += 8)           \
> > +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > +		if ((_count % 8) >= 4) {    
> 
> If (_count & 4) {

That would be obfuscating, IMHO. The compiler doesn't need such things
to generate optimal code.

> > +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> > +	})
> 
> But that looks a bit large to be inlined.

You trimmed a lot, this #define is in a C file and it is a template to
generate the 32 and 64 bit out of line functions. Things are done like
this because the 32/64 version are exactly the same logic except just
with different types and sizes.

Jason
David Laight Feb. 23, 2024, 9:07 a.m. UTC | #5
From: Jason Gunthorpe
> Sent: 22 February 2024 22:36
> To: David Laight <David.Laight@ACULAB.COM>
> 
> On Thu, Feb 22, 2024 at 10:05:04PM +0000, David Laight wrote:
> > From: Jason Gunthorpe
> > > Sent: 21 February 2024 01:17
> > >
> > > The kernel provides driver support for using write combining IO memory
> > > through the __iowriteXX_copy() API which is commonly used as an optional
> > > optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.
> > >
> > ...
> > > Implement __iowrite32/64_copy() specifically for ARM64 and use inline
> > > assembly to build consecutive blocks of STR instructions. Provide direct
> > > support for 64/32/16 large TLP generation in this manner. Optimize for
> > > common constant lengths so that the compiler can directly inline the store
> > > blocks.
> > ...
> > > +/*
> > > + * This generates a memcpy that works on a from/to address which is aligned to
> > > + * bits. Count is in terms of the number of bits sized quantities to copy. It
> > > + * optimizes to use the STR groupings when possible so that it is WC friendly.
> > > + */
> > > +#define memcpy_toio_aligned(to, from, count, bits)                        \
> > > +	({                                                                \
> > > +		volatile u##bits __iomem *_to = to;                       \
> > > +		const u##bits *_from = from;                              \
> > > +		size_t _count = count;                                    \
> > > +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > > +                                                                          \
> > > +		for (; _from < _end_from; _from += 8, _to += 8)           \
> > > +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > > +		if ((_count % 8) >= 4) {
> >
> > If (_count & 4) {
> 
> That would be obfuscating, IMHO. The compiler doesn't need such things
> to generate optimal code.

Try it: https://godbolt.org/z/EvvGrTxv3 
And it isn't that obfuscated - no more so than your version.

> > > +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> > > +	})
> >
> > But that looks a bit large to be inlined.
> 
> You trimmed a lot, this #define is in a C file and it is a template to
> generate the 32 and 64 bit out of line functions. Things are done like
> this because the 32/64 version are exactly the same logic except just
> with different types and sizes.

I missed that in a quick read at 11pm :-(

Although I doubt that generating long TLP from byte writes is
really necessary.
IIRC you were merging at most 4 writes.
So better to do a single 32bit write instead.
(Unless you have misaligned source data - unlikely.)

While write-combining to generate long TLP is probably mostly
safe for PCIe targets, there are some that will only handle
TLP for single 32bit data items.
Which might be why the code is explicitly requesting 4 byte copies.
So it may be entirely wrong to write-combine anything except
the generic memcpy_toio().

	David

Niklas Schnelle Feb. 23, 2024, 11:01 a.m. UTC | #6
On Fri, 2024-02-23 at 09:07 +0000, David Laight wrote:
> From: Jason Gunthorpe
> > Sent: 22 February 2024 22:36
> > To: David Laight <David.Laight@ACULAB.COM>
> > 
> > On Thu, Feb 22, 2024 at 10:05:04PM +0000, David Laight wrote:
> > > From: Jason Gunthorpe
> > > > Sent: 21 February 2024 01:17
> > > > 
> > > > The kernel provides driver support for using write combining IO memory
> > > > through the __iowriteXX_copy() API which is commonly used as an optional
> > > > optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.
> > > > 
> > > ...
> > > > Implement __iowrite32/64_copy() specifically for ARM64 and use inline
> > > > assembly to build consecutive blocks of STR instructions. Provide direct
> > > > support for 64/32/16 large TLP generation in this manner. Optimize for
> > > > common constant lengths so that the compiler can directly inline the store
> > > > blocks.
> > > ...
> > > > +/*
> > > > + * This generates a memcpy that works on a from/to address which is aligned to
> > > > + * bits. Count is in terms of the number of bits sized quantities to copy. It
> > > > + * optimizes to use the STR groupings when possible so that it is WC friendly.
> > > > + */
> > > > +#define memcpy_toio_aligned(to, from, count, bits)                        \
> > > > +	({                                                                \
> > > > +		volatile u##bits __iomem *_to = to;                       \
> > > > +		const u##bits *_from = from;                              \
> > > > +		size_t _count = count;                                    \
> > > > +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > > > +                                                                          \
> > > > +		for (; _from < _end_from; _from += 8, _to += 8)           \
> > > > +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > > > +		if ((_count % 8) >= 4) {
> > > 
> > > If (_count & 4) {
> > 
> > That would be obfuscating, IMHO. The compiler doesn't need such things
> > to generate optimal code.
> 
> Try it: https://godbolt.org/z/EvvGrTxv3 
> And it isn't that obfuscated - no more so than your version.

The godbolt link does "n % 8 > 4" instead of "... >= 4" as in Jason's
original code. With ">=" the compiled code matches that for "n & 4".
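
Spelled out (the compiler output itself is not reproduced here), for
unsigned n the two forms test the same bit, whereas ">" is a different
predicate:

	/* (n % 8) is n's low three bits, so (n % 8) >= 4 is true exactly
	 * when bit 2 is set, i.e. (n & 4).  (n % 8) > 4 is only true for
	 * 5, 6 and 7, which is what the godbolt link accidentally compared.
	 */
	unsigned int rem_ge4(unsigned int n) { return (n % 8) >= 4; }
	unsigned int bit4(unsigned int n)    { return (n & 4) != 0; }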
David Laight Feb. 23, 2024, 11:05 a.m. UTC | #7
...
> > > > > +		if ((_count % 8) >= 4) {
> > > >
> > > > If (_count & 4) {
> > >
> > > That would be obfuscating, IMHO. The compiler doesn't need such things
> > > to generate optimal code.
> >
> > Try it: https://godbolt.org/z/EvvGrTxv3
> > And it isn't that obfuscated - no more so than your version.
> 
> The godbolt link does "n % 8 > 4" instead of "... >= 4" as in Jason's
> original code. With ">=" the compiled code matches that for "n & 4".

Bugger :-)

	David

Niklas Schnelle Feb. 23, 2024, 11:38 a.m. UTC | #8
On Fri, 2024-02-23 at 09:07 +0000, David Laight wrote:
> From: Jason Gunthorpe
> > Sent: 22 February 2024 22:36
> > To: David Laight <David.Laight@ACULAB.COM>
> > 
> > On Thu, Feb 22, 2024 at 10:05:04PM +0000, David Laight wrote:
> > > From: Jason Gunthorpe
> > > > Sent: 21 February 2024 01:17
> > > > 
> > > > The kernel provides driver support for using write combining IO memory
> > > > through the __iowriteXX_copy() API which is commonly used as an optional
> > > > optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.
> > > > 
> > > ...
> > > > Implement __iowrite32/64_copy() specifically for ARM64 and use inline
> > > > assembly to build consecutive blocks of STR instructions. Provide direct
> > > > support for 64/32/16 large TLP generation in this manner. Optimize for
> > > > common constant lengths so that the compiler can directly inline the store
> > > > blocks.
> > > ...
> > > > +/*
> > > > + * This generates a memcpy that works on a from/to address which is aligned to
> > > > + * bits. Count is in terms of the number of bits sized quantities to copy. It
> > > > + * optimizes to use the STR groupings when possible so that it is WC friendly.
> > > > + */
> > > > +#define memcpy_toio_aligned(to, from, count, bits)                        \
> > > > +	({                                                                \
> > > > +		volatile u##bits __iomem *_to = to;                       \
> > > > +		const u##bits *_from = from;                              \
> > > > +		size_t _count = count;                                    \
> > > > +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > > > +                                                                          \
> > > > +		for (; _from < _end_from; _from += 8, _to += 8)           \
> > > > +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > > > +		if ((_count % 8) >= 4) {
> > > 
> > > If (_count & 4) {
> > 
> > That would be obfuscating, IMHO. The compiler doesn't need such things
> > to generate optimal code.
> 
> Try it: https://godbolt.org/z/EvvGrTxv3 
> And it isn't that obfuscated - no more so than your version.
> 
> > > > +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> > > > +	})
> > > 
> > > But that looks a bit large to be inlined.
> > 
> > You trimmed a lot, this #define is in a C file and it is a template to
> > generate the 32 and 64 bit out of line functions. Things are done like
> > this because the 32/64 version are exactly the same logic except just
> > with different types and sizes.
> 
> I missed that in a quick read at 11pm :-(
> 
> Although I doubt that generating long TLP from byte writes is
> really necessary.

I might have gotten confused but I think these are not byte writes.
Remember that the count is in terms of the number of bits sized
quantities to copy so "count == 1" is 4/8 bytes here.
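
In other words (dst/src below are placeholders):

	static void example(void __iomem *dst, const void *src)
	{
		/* count is in element units, not bytes: */
		__iowrite32_copy(dst, src, 1);	/* one u32   -> 4 bytes  */
		__iowrite64_copy(dst, src, 8);	/* eight u64 -> 64 bytes */
	}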

> IIRC you were merging at most 4 writes.
> So better to do a single 32bit write instead.
> (Unless you have misaligned source data - unlikely.)
> 
> While write-combining to generate long TLP is probably mostly
> safe for PCIe targets, there are some that will only handle
> TLP for single 32bit data items.
> Which might be why the code is explicitly requesting 4 byte copies.
> So it may be entirely wrong to write-combine anything except
> the generic memcpy_toio().
> 
> 	David

On anything other than s390x this should only do write-combine if the
memory mapping allows it, no? Meaning a driver that can't handle larger
TLPs really shouldn't use ioremap_wc() then.
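
E.g. (a sketch only; the BAR index and offset are made up):

	static void __iomem *map_wc_doorbell(struct pci_dev *pdev)
	{
		/* Write combining is opted into via the mapping itself; a
		 * device that cannot accept large TLPs would use plain
		 * ioremap() instead.
		 */
		return ioremap_wc(pci_resource_start(pdev, 0) + 0x800, PAGE_SIZE);
	}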

On s390x one could argue that our version of __iowriteXX_copy() is
strictly speaking not correct in that zpci_memcpy_toio() doesn't really
use XX bit writes which is why for us memcpy_toio() was actually a
better fit indeed. On the other hand doing 32 bit PCI stores (an s390x
thing) can't combine multiple stores into a single TLP which these
functions are used for and which has much more use cases than forcing a
copy loop with 32/64 bit sized writes which would also be a lot slower
on s390x than an aligned zpci_memcpy_toio().
David Laight Feb. 23, 2024, 12:19 p.m. UTC | #9
From: Niklas Schnelle
> Sent: 23 February 2024 11:38
...
> > Although I doubt that generating long TLP from byte writes is
> > really necessary.
> 
> I might have gotten confused but I think these are not byte writes.
> Remember that the count is in terms of the number of bits sized
> quantities to copy so "count == 1" is 4/8 bytes here.

Something made me think you were generating a byte version
as well as the 32 and 64 bit ones.

...
> > While write-combining to generate long TLP is probably mostly
> > safe for PCIe targets, there are some that will only handle
> > TLP for single 32bit data items.
> > Which might be why the code is explicitly requesting 4 byte copies.
> > So it may be entirely wrong to write-combine anything except
> > the generic memcpy_toio().
> >
> > 	David
> 
> On anything other than s390x this should only do write-combine if the
> memory mapping allows it, no? Meaning a driver that can't handle larger
> TLPs really shouldn't use ioremap_wc() then.

I can't decide whether merged writes could be required for some
target addresses but be problematic on others.
Probably not.

> On s390x one could argue that our version of __iowriteXX_copy() is
> strictly speaking not correct in that zpci_memcpy_toio() doesn't really
> use XX bit writes which is why for us memcpy_toio() was actually a
> better fit indeed. On the other hand doing 32 bit PCI stores (an s390x
> thing) can't combine multiple stores into a single TLP which these
> functions are used for and which has much more use cases than forcing a
> copy loop with 32/64 bit sized writes which would also be a lot slower
> on s390x than an aligned zpci_memcpy_toio().

If I read that correctly 32bit writes don't get merged?
Indeed any code that will benefit from merging can (probably)
do 64bit writes so is even attempting to merge 32bit ones
worth the effort?

Since writes get 'posted' all over the place.
How many writes do you need to do before write-combining makes a difference?
We've logic in our fpga to trace the RX and TX TLP [1].
Although the link is slow; back to back writes are limited by
what happens later in the fpga logic - not the pcie link.

Reads are another matter entirely.
The x86 cpus I've used assign a tag to each cpu core.
So while reads from multiple processes happen in parallel, those
from a single process are definitely synchronous.
The cpu stalls for a few thousand clocks on every read.

Large read TLPs (and overlapped read TLPs) would have a much
bigger effect than large write TLPs.

[1] It is nice to be able to see what is going on without having
to beg/steal/borrow an expensive PCIe analyser and persuade the
hardware to work with it connected.

	David

Jason Gunthorpe Feb. 23, 2024, 12:53 p.m. UTC | #10
On Fri, Feb 23, 2024 at 11:05:29AM +0000, David Laight wrote:
> ...
> > > > > > +		if ((_count % 8) >= 4) {
> > > > >
> > > > > If (_count & 4) {
> > > >
> > > > That would be obfuscating, IMHO. The compiler doesn't need such things
> > > > to generate optimal code.
> > >
> > > Try it: https://godbolt.org/z/EvvGrTxv3
> > > And it isn't that obfuscated - no more so than your version.
> > 
> > The godbolt link does "n % 8 > 4" instead of "... >= 4" as in Jason's
> > original code. With ">=" the compiled code matches that for "n & 4".
> 
> Bugger :-)

Yes, I already fine tuned things to get good codegen.

Jason
Jason Gunthorpe Feb. 23, 2024, 12:58 p.m. UTC | #11
On Fri, Feb 23, 2024 at 12:38:18PM +0100, Niklas Schnelle wrote:
> > Although I doubt that generating long TLP from byte writes is
> > really necessary.
> 
> I might have gotten confused but I think these are not byte writes.
> Remember that the count is in terms of the number of bits sized
> quantities to copy so "count == 1" is 4/8 bytes here.

Right.

There seem to be two callers of this API in the kernel, one is calling
with a constant size and wants a large TLP

Another seems to want memcpy_toio() with a guaranteed 32/64 bit store.

> > IIRC you were merging at most 4 writes.
> > So better to do a single 32bit write instead.
> > (Unless you have misaligned source data - unlikely.)
> > 
> > While write-combining to generate long TLP is probably mostly
> > safe for PCIe targets, there are some that will only handle
> > TLP for single 32bit data items.
> > Which might be why the code is explicitly requesting 4 byte copies.
> > So it may be entirely wrong to write-combine anything except
> > the generic memcpy_toio().
> 
> On anything other than s390x this should only do write-combine if the
> memory mapping allows it, no? Meaning a driver that can't handle larger
> TLPs really shouldn't use ioremap_wc() then.

Right.

> On s390x one could argue that our version of __iowriteXX_copy() is
> strictly speaking not correct in that zpci_memcpy_toio() doesn't really
> use XX bit writes which is why for us memcpy_toio() was actually a
> better fit indeed. On the other hand doing 32 bit PCI stores (an s390x
> thing) can't combine multiple stores into a single TLP which these
> functions are used for and which has much more use cases than forcing a
> copy loop with 32/64 bit sized writes which would also be a lot slower
> on s390x than an aligned zpci_memcpy_toio().

mlx5 will definitely not work right if __iowrite64_copy() results in
anything smaller than 32/64 bit PCIe TLPs.

Jason
Jason Gunthorpe Feb. 23, 2024, 1:03 p.m. UTC | #12
On Fri, Feb 23, 2024 at 12:19:24PM +0000, David Laight wrote:

> Since writes get 'posted' all over the place.
> How many writes do you need to do before write-combining makes a
> difference?

The issue is that the HW can optimize if the entire transaction is
presented in one TLP, if it has to reassemble the transaction it takes
a big slow path hit.

Jason
David Laight Feb. 23, 2024, 1:52 p.m. UTC | #13
From: Jason Gunthorpe
> Sent: 23 February 2024 13:03
> 
> On Fri, Feb 23, 2024 at 12:19:24PM +0000, David Laight wrote:
> 
> > Since writes get 'posted' all over the place.
> > How many writes do you need to do before write-combining makes a
> > difference?
> 
> The issue is that the HW can optimize if the entire transaction is
> presented in one TLP, if it has to reassemble the transaction it takes
> a big slow path hit.

Ah, so you aren't optimising to reduce the number of TLP for
(effectively) a write to a memory buffer, but have a pcie slave
that really wants to see (for example) the writes for a ring buffer
entry in a single TLP?

So you really want something that (should) generate a 16 (or 32)
byte TLP? Rather than abusing the function that is expected to
generate multiple 8 byte TLP to generate larger TLP.

I'm guessing that on arm64 the ldp/stp instructions will generate
a single 16 byte TLP regardless of write combining?
They would definitely help memcpy_fromio().
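
Something along these lines (just a sketch; whether it reliably produces
a single 16-byte TLP is exactly the question):

	/* Hypothetical 16-byte MMIO store using STP; not part of this patch. */
	static inline void write128(volatile void __iomem *to, u64 lo, u64 hi)
	{
		asm volatile("stp %x0, %x1, [%2]"
			     : : "r"(lo), "r"(hi), "r"(to));
	}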

Are they enough for arm64?
Getting big TLPs on x86 is probably harder.
(Unless you use AVX512 registers and aligned accesses.)

It is rather a shame that there isn't an efficient way to get
access to a couple of large SIMD registers.
(eg save on stack and have the fpu code know where they are for
a lazy fpu switch.)
There is quite a bit of code that would benefit, but kernel_fpu_begin()
is just too expensive.

	David

Jason Gunthorpe Feb. 23, 2024, 2:44 p.m. UTC | #14
On Fri, Feb 23, 2024 at 01:52:37PM +0000, David Laight wrote:
> > > Since writes get 'posted' all over the place.
> > > How many writes do you need to do before write-combining makes a
> > > difference?
> > 
> > The issue is that the HW can optimize if the entire transaction is
> > presented in one TLP, if it has to reassemble the transaction it takes
> > a big slow path hit.
> 
> Ah, so you aren't optimising to reduce the number of TLP for
> (effectively) a write to a memory buffer, but have a pcie slave
> that really wants to see (for example) the writes for a ring buffer
> entry in a single TLP?
> 
> So you really want something that (should) generate a 16 (or 32)
> byte TLP? Rather than abusing the function that is expected to
> generate multiple 8 byte TLP to generate larger TLP.

__iowriteXX_copy() was originally created by Pathscale (an RDMA device
company) to support RDMA drivers doing exactly this workload. It is
not an abuse.

> It is rather a shame that there isn't an efficient way to get
> access to a couple of large SIMD registers.

Yes, userspace uses SIMD to make this work a lot better and run faster.

Jason
Niklas Schnelle Feb. 23, 2024, 4:35 p.m. UTC | #15
On Fri, 2024-02-23 at 08:58 -0400, Jason Gunthorpe wrote:
> On Fri, Feb 23, 2024 at 12:38:18PM +0100, Niklas Schnelle wrote:
> > > Although I doubt that generating long TLP from byte writes is
> > > really necessary.
> > 
> > I might have gotten confused but I think these are not byte writes.
> > Remember that the count is in terms of the number of bits sized
> > quantities to copy so "count == 1" is 4/8 bytes here.
> 
> Right.
> 
> There seem to be two callers of this API in the kernel, one is calling
> with a constant size and wants a large TLP
> 
> > Another seems to want memcpy_toio() with a guaranteed 32/64 bit store.

I don't really understand how that works together with the order not
being guaranteed. Do they use normal ioremap() and then require 32/64
bit TLPs and don't care about the order? But then the generic and ARM
variants do things in order so who knows if they actually rely on that.

> 
> > > IIRC you were merging at most 4 writes.
> > > So better to do a single 32bit write instead.
> > > (Unless you have misaligned source data - unlikely.)
> > > 
> > > While write-combining to generate long TLP is probably mostly
> > > safe for PCIe targets, there are some that will only handle
> > > TLP for single 32bit data items.
> > > Which might be why the code is explicitly requesting 4 byte copies.
> > > So it may be entirely wrong to write-combine anything except
> > > the generic memcpy_toio().
> > 
> > On anything other than s390x this should only do write-combine if the
> > memory mapping allows it, no? Meaning a driver that can't handle larger
> > TLPs really shouldn't use ioremap_wc() then.
> 
> Right.
> 
> > On s390x one could argue that our version of __iowriteXX_copy() is
> > strictly speaking not correct in that zpci_memcpy_toio() doesn't really
> > use XX bit writes which is why for us memcpy_toio() was actually a
> > better fit indeed. On the other hand doing 32 bit PCI stores (an s390x
> > thing) can't combine multiple stores into a single TLP which these
> > functions are used for and which has much more use cases than forcing a
> > copy loop with 32/64 bit sized writes which would also be a lot slower
> > on s390x than an aligned zpci_memcpy_toio().
> 
> mlx5 will definitely not work right if __iowrite64_copy() results in
> anything smaller than 32/64 bit PCIe TLPs.
> 
> Jason

Yes and we do actually have mlx5 on s390x so this is my priority.
Jason Gunthorpe Feb. 23, 2024, 5:05 p.m. UTC | #16
On Fri, Feb 23, 2024 at 05:35:42PM +0100, Niklas Schnelle wrote:
> On Fri, 2024-02-23 at 08:58 -0400, Jason Gunthorpe wrote:
> > On Fri, Feb 23, 2024 at 12:38:18PM +0100, Niklas Schnelle wrote:
> > > > Although I doubt that generating long TLP from byte writes is
> > > > really necessary.
> > > 
> > > I might have gotten confused but I think these are not byte writes.
> > > Remember that the count is in terms of the number of bits sized
> > > quantities to copy so "count == 1" is 4/8 bytes here.
> > 
> > Right.
> > 
> > There seem to be two callers of this API in the kernel, one is calling
> > with a constant size and wants a large TLP
> > 
> > Another seems to want memcpy_toio() with a guaranteed 32/64 bit store.
> 
> I don't really understand how that works together with the order not
> being guaranteed. Do they use normal ioremap() and then require 32/64
> bit TLPs and don't care about the order?

Yes, I assume so. From my impression the cases looked like they were
copying to MMIO memory so order probably doesn't matter.

Jason
Catalin Marinas Feb. 27, 2024, 10:37 a.m. UTC | #17
On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> +/*
> + * This generates a memcpy that works on a from/to address which is aligned to
> + * bits. Count is in terms of the number of bits sized quantities to copy. It
> + * optimizes to use the STR groupings when possible so that it is WC friendly.
> + */
> +#define memcpy_toio_aligned(to, from, count, bits)                        \
> +	({                                                                \
> +		volatile u##bits __iomem *_to = to;                       \
> +		const u##bits *_from = from;                              \
> +		size_t _count = count;                                    \
> +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> +                                                                          \
> +		for (; _from < _end_from; _from += 8, _to += 8)           \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> +		if ((_count % 8) >= 4) {                                  \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
> +			_from += 4;                                       \
> +			_to += 4;                                         \
> +		}                                                         \
> +		if ((_count % 4) >= 2) {                                  \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
> +			_from += 2;                                       \
> +			_to += 2;                                         \
> +		}                                                         \
> +		if (_count % 2)                                           \
> +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> +	})

Do we actually need all this if count is not constant? If it's not
performance critical anywhere, I'd rather copy the generic
implementation, it's easier to read.

Otherwise, apart from the __raw_writeq() typo that Will mentioned, the
patch looks fine to me.
Jason Gunthorpe Feb. 28, 2024, 11:06 p.m. UTC | #18
On Tue, Feb 27, 2024 at 10:37:18AM +0000, Catalin Marinas wrote:
> On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> > +/*
> > + * This generates a memcpy that works on a from/to address which is aligned to
> > + * bits. Count is in terms of the number of bits sized quantities to copy. It
> > + * optimizes to use the STR groupings when possible so that it is WC friendly.
> > + */
> > +#define memcpy_toio_aligned(to, from, count, bits)                        \
> > +	({                                                                \
> > +		volatile u##bits __iomem *_to = to;                       \
> > +		const u##bits *_from = from;                              \
> > +		size_t _count = count;                                    \
> > +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > +                                                                          \
> > +		for (; _from < _end_from; _from += 8, _to += 8)           \
> > +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > +		if ((_count % 8) >= 4) {                                  \
> > +			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
> > +			_from += 4;                                       \
> > +			_to += 4;                                         \
> > +		}                                                         \
> > +		if ((_count % 4) >= 2) {                                  \
> > +			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
> > +			_from += 2;                                       \
> > +			_to += 2;                                         \
> > +		}                                                         \
> > +		if (_count % 2)                                           \
> > +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> > +	})
> 
> Do we actually need all this if count is not constant? If it's not
> performance critical anywhere, I'd rather copy the generic
> implementation, it's easier to read.

Which generic version?

The point is to maximize WC effects with non-constant values, so I
think we do need something like this. ie we can't just fall back to
looping over 64 bit stores one at a time.

If we don't use the large block stores we know we get very poor WC
behavior. So at least the 8 and 4 constant value sections are
needed. At that point you may as well just do 4 and 2 instead of
another loop.

Most places I know about using this are performance paths, the entire
iocopy infrastructure was introduced as an x86 performance
optimization.

Jason
Catalin Marinas Feb. 29, 2024, 10:24 a.m. UTC | #19
On Wed, Feb 28, 2024 at 07:06:16PM -0400, Jason Gunthorpe wrote:
> On Tue, Feb 27, 2024 at 10:37:18AM +0000, Catalin Marinas wrote:
> > On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> > > +/*
> > > + * This generates a memcpy that works on a from/to address which is aligned to
> > > + * bits. Count is in terms of the number of bits sized quantities to copy. It
> > > + * optimizes to use the STR groupings when possible so that it is WC friendly.
> > > + */
> > > +#define memcpy_toio_aligned(to, from, count, bits)                        \
> > > +	({                                                                \
> > > +		volatile u##bits __iomem *_to = to;                       \
> > > +		const u##bits *_from = from;                              \
> > > +		size_t _count = count;                                    \
> > > +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > > +                                                                          \
> > > +		for (; _from < _end_from; _from += 8, _to += 8)           \
> > > +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > > +		if ((_count % 8) >= 4) {                                  \
> > > +			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
> > > +			_from += 4;                                       \
> > > +			_to += 4;                                         \
> > > +		}                                                         \
> > > +		if ((_count % 4) >= 2) {                                  \
> > > +			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
> > > +			_from += 2;                                       \
> > > +			_to += 2;                                         \
> > > +		}                                                         \
> > > +		if (_count % 2)                                           \
> > > +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> > > +	})
> > 
> > Do we actually need all this if count is not constant? If it's not
> > performance critical anywhere, I'd rather copy the generic
> > implementation, it's easier to read.
> 
> Which generic version?

The current __iowriteXX_copy() in lib/iomap_copy.c (copy them over or
add some preprocessor to reuse the generic functions).

> The point is to maximize WC effects with non-constant values, so I
> think we do need something like this. ie we can't just fall back to
> looping over 64 bit stores one at a time.

If that's a case you are also targeting and have seen it in practice,
that's fine. But I had the impression that you are mostly after the
constant count case which is already addressed by the other part of this
patch. For the non-constant case, we have a DGH only at the end of
whatever buffer was copied rather than after every 64-byte increment
you'd get for a count of 8.

> Most places I know about using this are performance paths, the entire
> iocopy infrastructure was introduced as an x86 performance
> optimization.

At least the x86 case makes sense even from a maintenance perspective,
it's just a much simpler "rep movsl". I just want to make sure we don't
over-complicate this code on arm64 unnecessarily.
Catalin Marinas Feb. 29, 2024, 10:33 a.m. UTC | #20
On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> +						 const u32 *from, size_t count)
> +{
> +	switch (count) {
> +	case 8:
> +		asm volatile("str %w0, [%8, #4 * 0]\n"
> +			     "str %w1, [%8, #4 * 1]\n"
> +			     "str %w2, [%8, #4 * 2]\n"
> +			     "str %w3, [%8, #4 * 3]\n"
> +			     "str %w4, [%8, #4 * 4]\n"
> +			     "str %w5, [%8, #4 * 5]\n"
> +			     "str %w6, [%8, #4 * 6]\n"
> +			     "str %w7, [%8, #4 * 7]\n"
> +			     :
> +			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> +			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
> +			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
> +		break;

BTW, talking of maintenance, would a series of __raw_writel() with
Mark's recent patch for offset addressing generate similar code? I.e.:

		__raw_writel(from[0], to);
		__raw_writel(from[1], to + 1);
		...
		__raw_writel(from[7], to + 7);

(you may have mentioned it in previous threads, I did not check)
Jason Gunthorpe Feb. 29, 2024, 1:28 p.m. UTC | #21
On Thu, Feb 29, 2024 at 10:24:42AM +0000, Catalin Marinas wrote:
> On Wed, Feb 28, 2024 at 07:06:16PM -0400, Jason Gunthorpe wrote:
> > On Tue, Feb 27, 2024 at 10:37:18AM +0000, Catalin Marinas wrote:
> > > On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> > > > +/*
> > > > + * This generates a memcpy that works on a from/to address which is aligned to
> > > > + * bits. Count is in terms of the number of bits sized quantities to copy. It
> > > > + * optimizes to use the STR groupings when possible so that it is WC friendly.
> > > > + */
> > > > +#define memcpy_toio_aligned(to, from, count, bits)                        \
> > > > +	({                                                                \
> > > > +		volatile u##bits __iomem *_to = to;                       \
> > > > +		const u##bits *_from = from;                              \
> > > > +		size_t _count = count;                                    \
> > > > +		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
> > > > +                                                                          \
> > > > +		for (; _from < _end_from; _from += 8, _to += 8)           \
> > > > +			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
> > > > +		if ((_count % 8) >= 4) {                                  \
> > > > +			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
> > > > +			_from += 4;                                       \
> > > > +			_to += 4;                                         \
> > > > +		}                                                         \
> > > > +		if ((_count % 4) >= 2) {                                  \
> > > > +			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
> > > > +			_from += 2;                                       \
> > > > +			_to += 2;                                         \
> > > > +		}                                                         \
> > > > +		if (_count % 2)                                           \
> > > > +			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
> > > > +	})
> > > 
> > > Do we actually need all this if count is not constant? If it's not
> > > performance critical anywhere, I'd rather copy the generic
> > > implementation, it's easier to read.
> > 
> > Which generic version?
> 
> The current __iowriteXX_copy() in lib/iomap_copy.c (copy them over or
> add some preprocessor to reuse the generic functions).

That just loops over 64 bit quantities - we know that doesn't work?

> > The point is to maximize WC effects with non-constant values, so I
> > think we do need something like this. ie we can't just fall back to
> > looping over 64 bit stores one at a time.
> 
> If that's a case you are also targeting and have seen it in practice,
> that's fine. But I had the impression that you are mostly after the
> constant count case which is already addressed by the other part of this
> patch. For the non-constant case, we have a DGH only at the end of
> whatever buffer was copied rather than after every 64-byte increment
> you'd get for a count of 8.

mlx5 uses only the constant case. From my looking most places were
using the constant path.

However, from an API perspective, we know we need these runs of stores
for the CPU to work properly so it doesn't make any sense that the
same function called with a constant length would have good WC and the
very same function called with a variable length would have bad WC. I
would expect them to behave the same.

This is what the above does: if you pass in non-constant 64 or 32 you
get the same instruction sequence out of line as constant 64 or 32
length generates in-line. I think it is important to work like this
for basic sanity.
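
Concretely (db, wqe and len are placeholders), both forms resolve to the
same grouped-store helpers:

	static void push_wqes(void __iomem *db, const u64 *wqe, size_t len)
	{
		__iowrite64_copy(db, wqe, 8);	/* constant: inlined STR block + dgh */
		__iowrite64_copy(db, wqe, len);	/* variable: out-of-line __iowrite64_copy_full() */
	}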

Jason
Jason Gunthorpe Feb. 29, 2024, 1:29 p.m. UTC | #22
On Thu, Feb 29, 2024 at 10:33:04AM +0000, Catalin Marinas wrote:
> On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> > +						 const u32 *from, size_t count)
> > +{
> > +	switch (count) {
> > +	case 8:
> > +		asm volatile("str %w0, [%8, #4 * 0]\n"
> > +			     "str %w1, [%8, #4 * 1]\n"
> > +			     "str %w2, [%8, #4 * 2]\n"
> > +			     "str %w3, [%8, #4 * 3]\n"
> > +			     "str %w4, [%8, #4 * 4]\n"
> > +			     "str %w5, [%8, #4 * 5]\n"
> > +			     "str %w6, [%8, #4 * 6]\n"
> > +			     "str %w7, [%8, #4 * 7]\n"
> > +			     :
> > +			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
> > +			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
> > +			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
> > +		break;
> 
> BTW, talking of maintenance, would a series of __raw_writel() with
> Mark's recent patch for offset addressing generate similar code? I.e.:

No

gcc intersperses reads/writes (which we were advised not to do) and
clang doesn't support the "o" directive so it produces poor
codegen.

Jason
Catalin Marinas March 1, 2024, 6:52 p.m. UTC | #23
On Tue, Feb 20, 2024 at 09:17:08PM -0400, Jason Gunthorpe wrote:
> The kernel provides driver support for using write combining IO memory
> through the __iowriteXX_copy() API which is commonly used as an optional
> optimization to generate 16/32/64 byte MemWr TLPs in a PCIe environment.
> 
> iomap_copy.c provides a generic implementation as a simple 4/8 byte at a
> time copy loop that has worked well with past ARM64 CPUs, giving a high
> frequency of large TLPs being successfully formed.
> 
> However modern ARM64 CPUs are quite sensitive to how the write combining
> CPU HW is operated and a compiler generated loop with intermixed
> load/store is not sufficient to frequently generate a large TLP. The CPUs
> would like to see the entire TLP generated by consecutive store
> instructions from registers. Compilers like gcc tend to intermix loads and
> stores and have poor code generation, in part, due to the ARM64 situation
> that writeq() does not codegen anything other than "[xN]". However even
> with that resolved compilers like clang still do not have good code
> generation.
> 
> This means on modern ARM64 CPUs the rate at which __iowriteXX_copy()
> successfully generates large TLPs is very small (less than 1 in 10,000
> tries), to the point that the use of WC is pointless.
> 
> Implement __iowrite32/64_copy() specifically for ARM64 and use inline
> assembly to build consecutive blocks of STR instructions. Provide direct
> support for 64/32/16 large TLP generation in this manner. Optimize for
> common constant lengths so that the compiler can directly inline the store
> blocks.
> 
> This brings the frequency of large TLP generation up to a high level that
> is comparable with older CPU generations.
> 
> As the __iowriteXX_copy() family of APIs is intended for use with WC
> incorporate the DGH hint directly into the function.
> 
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: linux-arch@vger.kernel.org
> Cc: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Apart from the slightly more complicated code, I don't expect it to make
things worse on any of the existing hardware.

So, with the typo fix that Will mentioned:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Patch

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 3b694511b98f83..471ab46621e7d6 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -135,6 +135,138 @@  extern void __memset_io(volatile void __iomem *, int, size_t);
 #define memcpy_fromio(a,c,l)	__memcpy_fromio((a),(c),(l))
 #define memcpy_toio(c,a,l)	__memcpy_toio((c),(a),(l))
 
+/*
+ * The ARM64 iowrite implementation is intended to support drivers that want to
+ * use write combining. For instance PCI drivers using write combining with a 64
+ * byte __iowrite64_copy() expect to get a 64 byte MemWr TLP on the PCIe bus.
+ *
+ * Newer ARM cores have sensitive write combining buffers; it is important that
+ * the stores be contiguous blocks of store instructions. Normal memcpy
+ * approaches have a very low chance to generate write combining.
+ *
+ * Since this is the only API on ARM64 that should be used with write combining
+ * it also integrates the DGH hint which is supposed to lower the latency to
+ * emit the large TLP from the CPU.
+ */
+
+static inline void __const_memcpy_toio_aligned32(volatile u32 __iomem *to,
+						 const u32 *from, size_t count)
+{
+	switch (count) {
+	case 8:
+		asm volatile("str %w0, [%8, #4 * 0]\n"
+			     "str %w1, [%8, #4 * 1]\n"
+			     "str %w2, [%8, #4 * 2]\n"
+			     "str %w3, [%8, #4 * 3]\n"
+			     "str %w4, [%8, #4 * 4]\n"
+			     "str %w5, [%8, #4 * 5]\n"
+			     "str %w6, [%8, #4 * 6]\n"
+			     "str %w7, [%8, #4 * 7]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
+			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
+		break;
+	case 4:
+		asm volatile("str %w0, [%4, #4 * 0]\n"
+			     "str %w1, [%4, #4 * 1]\n"
+			     "str %w2, [%4, #4 * 2]\n"
+			     "str %w3, [%4, #4 * 3]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "r"(to));
+		break;
+	case 2:
+		asm volatile("str %w0, [%2, #4 * 0]\n"
+			     "str %w1, [%2, #4 * 1]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "r"(to));
+		break;
+	case 1:
+		__raw_writel(*from, to);
+		break;
+	default:
+		BUILD_BUG();
+	}
+}
+
+void __iowrite32_copy_full(void __iomem *to, const void *from, size_t count);
+
+static inline void __const_iowrite32_copy(void __iomem *to, const void *from,
+					  size_t count)
+{
+	if (count == 8 || count == 4 || count == 2 || count == 1) {
+		__const_memcpy_toio_aligned32(to, from, count);
+		dgh();
+	} else {
+		__iowrite32_copy_full(to, from, count);
+	}
+}
+
+#define __iowrite32_copy(to, from, count)                  \
+	(__builtin_constant_p(count) ?                     \
+		 __const_iowrite32_copy(to, from, count) : \
+		 __iowrite32_copy_full(to, from, count))
+
+static inline void __const_memcpy_toio_aligned64(volatile u64 __iomem *to,
+						 const u64 *from, size_t count)
+{
+	switch (count) {
+	case 8:
+		asm volatile("str %x0, [%8, #8 * 0]\n"
+			     "str %x1, [%8, #8 * 1]\n"
+			     "str %x2, [%8, #8 * 2]\n"
+			     "str %x3, [%8, #8 * 3]\n"
+			     "str %x4, [%8, #8 * 4]\n"
+			     "str %x5, [%8, #8 * 5]\n"
+			     "str %x6, [%8, #8 * 6]\n"
+			     "str %x7, [%8, #8 * 7]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "rZ"(from[4]), "rZ"(from[5]),
+			       "rZ"(from[6]), "rZ"(from[7]), "r"(to));
+		break;
+	case 4:
+		asm volatile("str %x0, [%4, #8 * 0]\n"
+			     "str %x1, [%4, #8 * 1]\n"
+			     "str %x2, [%4, #8 * 2]\n"
+			     "str %x3, [%4, #8 * 3]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "rZ"(from[2]),
+			       "rZ"(from[3]), "r"(to));
+		break;
+	case 2:
+		asm volatile("str %x0, [%2, #8 * 0]\n"
+			     "str %x1, [%2, #8 * 1]\n"
+			     :
+			     : "rZ"(from[0]), "rZ"(from[1]), "r"(to));
+		break;
+	case 1:
+		__raw_writel(*from, to);
+		break;
+	default:
+		BUILD_BUG();
+	}
+}
+
+void __iowrite64_copy_full(void __iomem *to, const void *from, size_t count);
+
+static inline void __const_iowrite64_copy(void __iomem *to, const void *from,
+					  size_t count)
+{
+	if (count == 8 || count == 4 || count == 2 || count == 1) {
+		__const_memcpy_toio_aligned64(to, from, count);
+		dgh();
+	} else {
+		__iowrite64_copy_full(to, from, count);
+	}
+}
+
+#define __iowrite64_copy(to, from, count)                  \
+	(__builtin_constant_p(count) ?                     \
+		 __const_iowrite64_copy(to, from, count) : \
+		 __iowrite64_copy_full(to, from, count))
+
 /*
  * I/O memory mapping functions.
  */
diff --git a/arch/arm64/kernel/io.c b/arch/arm64/kernel/io.c
index aa7a4ec6a3ae6f..ef48089fbfe1a4 100644
--- a/arch/arm64/kernel/io.c
+++ b/arch/arm64/kernel/io.c
@@ -37,6 +37,48 @@  void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
 }
 EXPORT_SYMBOL(__memcpy_fromio);
 
+/*
+ * This generates a memcpy that works on a from/to address which is aligned to
+ * bits. Count is in terms of the number of bits sized quantities to copy. It
+ * optimizes to use the STR groupings when possible so that it is WC friendly.
+ */
+#define memcpy_toio_aligned(to, from, count, bits)                        \
+	({                                                                \
+		volatile u##bits __iomem *_to = to;                       \
+		const u##bits *_from = from;                              \
+		size_t _count = count;                                    \
+		const u##bits *_end_from = _from + ALIGN_DOWN(_count, 8); \
+                                                                          \
+		for (; _from < _end_from; _from += 8, _to += 8)           \
+			__const_memcpy_toio_aligned##bits(_to, _from, 8); \
+		if ((_count % 8) >= 4) {                                  \
+			__const_memcpy_toio_aligned##bits(_to, _from, 4); \
+			_from += 4;                                       \
+			_to += 4;                                         \
+		}                                                         \
+		if ((_count % 4) >= 2) {                                  \
+			__const_memcpy_toio_aligned##bits(_to, _from, 2); \
+			_from += 2;                                       \
+			_to += 2;                                         \
+		}                                                         \
+		if (_count % 2)                                           \
+			__const_memcpy_toio_aligned##bits(_to, _from, 1); \
+	})
+
+void __iowrite64_copy_full(void __iomem *to, const void *from, size_t count)
+{
+	memcpy_toio_aligned(to, from, count, 64);
+	dgh();
+}
+EXPORT_SYMBOL(__iowrite64_copy_full);
+
+void __iowrite32_copy_full(void __iomem *to, const void *from, size_t count)
+{
+	memcpy_toio_aligned(to, from, count, 32);
+	dgh();
+}
+EXPORT_SYMBOL(__iowrite32_copy_full);
+
 /*
  * Copy data from "real" memory space to IO memory space.
  */