
[2/2,v2] crypto: hisilicon - allow compile-testing on x86

Message ID 20190919140917.1290556-1-arnd@arndb.de (mailing list archive)
State Accepted
Delegated to: Herbert Xu

Commit Message

Arnd Bergmann Sept. 19, 2019, 2:09 p.m. UTC
To avoid missing arm64 specific warnings that get introduced
in this driver, allow compile-testing on all 64-bit architectures.

The only actual arm64 specific code in this driver is an open-
coded 128 bit MMIO write. On non-arm64 the same can be done
using memcpy_toio. What I also noticed is that the MMIO store
(either one) is not endian-safe; this will only work on little-
endian configurations, so I also add a Kconfig dependency on
that, regardless of the architecture.
Finally, a depenndecy on CONFIG_64BIT is needed because of the
writeq().

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
v2: actually add !CPU_BIG_ENDIAN dependency as described in the
changelog
---
 drivers/crypto/hisilicon/Kconfig | 9 ++++++---
 drivers/crypto/hisilicon/qm.c    | 6 ++++++
 2 files changed, 12 insertions(+), 3 deletions(-)
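
A minimal sketch of the pattern the patch relies on (the function name is illustrative, not the driver's; kernel headers assumed): IS_ENABLED(CONFIG_ARM64) folds to a compile-time constant, so on other architectures the compiler drops the arm64-only path as dead code before it ever reaches the assembler, while both branches are still parsed and type-checked as C.

    #include <linux/io.h>
    #include <linux/kconfig.h>

    /* mmio_write_128() is an illustrative name, not the driver's. */
    static void mmio_write_128(void __iomem *dst, const void *src)
    {
            if (!IS_ENABLED(CONFIG_ARM64)) {
                    memcpy_toio(dst, src, 16);  /* portable 16-byte copy */
                    wmb();                      /* keep the store ordered */
                    return;
            }
            /* the arm64-only ldp/stp sequence from qm.c goes here */
    }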

Comments

John Garry Sept. 20, 2019, 8:33 a.m. UTC | #1
On 19/09/2019 15:09, Arnd Bergmann wrote:
> To avoid missing arm64 specific warnings that get introduced
> in this driver, allow compile-testing on all 64-bit architectures.
>
> The only actual arm64 specific code in this driver is an open-
> coded 128 bit MMIO write. On non-arm64 the same can be done
> using memcpy_toio. What I also noticed is that the MMIO store
> (either one) is not endian-safe; this will only work on little-
> endian configurations, so I also add a Kconfig dependency on
> that, regardless of the architecture.
> Finally, a depenndecy on CONFIG_64BIT is needed because of the

nit: spelling mistake

> writeq().
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
> v2: actually add !CPU_BIG_ENDIAN dependency as described in the
> changelog
> ---
>  drivers/crypto/hisilicon/Kconfig | 9 ++++++---
>  drivers/crypto/hisilicon/qm.c    | 6 ++++++
>  2 files changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/crypto/hisilicon/Kconfig b/drivers/crypto/hisilicon/Kconfig
> index ebaf91e0146d..7bfcaa7674fd 100644
> --- a/drivers/crypto/hisilicon/Kconfig
> +++ b/drivers/crypto/hisilicon/Kconfig
> @@ -16,14 +16,15 @@ config CRYPTO_DEV_HISI_SEC
>
>  config CRYPTO_DEV_HISI_QM
>  	tristate
> -	depends on ARM64 && PCI && PCI_MSI
> +	depends on ARM64 || COMPILE_TEST
> +	depends on PCI && PCI_MSI
>  	help
>  	  HiSilicon accelerator engines use a common queue management
>  	  interface. Specific engine driver may use this module.
>
>  config CRYPTO_HISI_SGL
>  	tristate
> -	depends on ARM64
> +	depends on ARM64 || COMPILE_TEST
>  	help
>  	  HiSilicon accelerator engines use a common hardware scatterlist
>  	  interface for data format. Specific engine driver may use this
> @@ -31,7 +32,9 @@ config CRYPTO_HISI_SGL
>
>  config CRYPTO_DEV_HISI_ZIP
>  	tristate "Support for HiSilicon ZIP accelerator"
> -	depends on ARM64 && PCI && PCI_MSI
> +	depends on PCI && PCI_MSI
> +	depends on ARM64 || (COMPILE_TEST && 64BIT)
> +	depends on !CPU_BIG_ENDIAN || COMPILE_TEST
>  	select CRYPTO_DEV_HISI_QM
>  	select CRYPTO_HISI_SGL
>  	select SG_SPLIT
> diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
> index f975c393a603..a8ed699081b7 100644
> --- a/drivers/crypto/hisilicon/qm.c
> +++ b/drivers/crypto/hisilicon/qm.c
> @@ -331,6 +331,12 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src)
>  	void __iomem *fun_base = qm->io_base + QM_MB_CMD_SEND_BASE;
>  	unsigned long tmp0 = 0, tmp1 = 0;
>

Hi Arnd,

> +	if (!IS_ENABLED(CONFIG_ARM64)) {
> +		memcpy_toio(fun_base, src, 16);
> +		wmb();
> +		return;
> +	}
> +
>  	asm volatile("ldp %0, %1, %3\n"
>  		     "stp %0, %1, %2\n"
>  		     "dsb sy\n"
>

As I understand, this operation needs to be done atomically. So - even
though your change is just for compile testing - the memcpy_toio() may
not do the same thing on other archs, right?

I just wonder if it's right to make that change, or at least warn the 
imaginary user of possible malfunction for !arm64.

Thanks,
John
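
John's concern is well-founded: a generic memcpy_toio() is free to copy in small, independent chunks. A simplified sketch of what such a fallback can look like (not the kernel's actual implementation, which varies by architecture):

    /* Byte-at-a-time __iomem copy: nothing here promises a single
     * 16-byte bus transaction, so the 128-bit store is not atomic. */
    static void memcpy_toio_bytewise(volatile void __iomem *dst,
                                     const void *src, size_t count)
    {
            const u8 *s = src;

            while (count--)
                    writeb(*s++, dst++);
    }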
Arnd Bergmann Sept. 20, 2019, 1:26 p.m. UTC | #2
On Fri, Sep 20, 2019 at 10:34 AM John Garry <john.garry@huawei.com> wrote:

> > +     if (!IS_ENABLED(CONFIG_ARM64)) {
> > +             memcpy_toio(fun_base, src, 16);
> > +             wmb();
> > +             return;
> > +     }
> > +
> >       asm volatile("ldp %0, %1, %3\n"
> >                    "stp %0, %1, %2\n"
> >                    "dsb sy\n"
> >
>
> As I understand, this operation needs to be done atomically. So - even
> though your change is just for compile testing - the memcpy_toio() may
> not do the same thing on other archs, right?
>
> I just wonder if it's right to make that change, or at least warn the
> imaginary user of possible malfunction for !arm64.

It's probably not necessary here. From what I can tell from the documentation,
this is only safe on ARMv8.4 or higher anyway, earlier ARMv8.x implementations
don't guarantee that an stp arrives on the bus in one piece either.

Usually, hardware like this has no hard requirement on an atomic store,
it just needs the individual bits to arrive in a particular order, and then
triggers the update on the last bit that gets stored. If that is the case here
as well, it might actually be better to use two writeq_relaxed() and
a barrier. This would also solve the endianness issue.

     Arnd
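
A sketch of the two-writeq_relaxed() variant Arnd suggests, assuming (the thread does not confirm this) that the mailbox command is two little-endian 64-bit words and that the device acts only once the second word lands:

    static void qm_mb_write_split(struct hisi_qm *qm, const void *src)
    {
            void __iomem *fun_base = qm->io_base + QM_MB_CMD_SEND_BASE;
            const __le64 *cmd = src;

            /* writeq_relaxed() always stores little-endian on the bus,
             * which is what makes this variant endian-safe */
            writeq_relaxed(le64_to_cpu(cmd[0]), fun_base);
            writeq_relaxed(le64_to_cpu(cmd[1]), fun_base + 8);
            wmb();  /* publish both words, like the dsb sy in the asm */
    }

As the rest of the thread establishes, this would not be safe on this particular hardware: the two stores could interleave with another function's mailbox write.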
Arnd Bergmann Sept. 20, 2019, 1:36 p.m. UTC | #3
On Fri, Sep 20, 2019 at 3:26 PM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Fri, Sep 20, 2019 at 10:34 AM John Garry <john.garry@huawei.com> wrote:
>
> > > +     if (!IS_ENABLED(CONFIG_ARM64)) {
> > > +             memcpy_toio(fun_base, src, 16);
> > > +             wmb();
> > > +             return;
> > > +     }
> > > +
> > >       asm volatile("ldp %0, %1, %3\n"
> > >                    "stp %0, %1, %2\n"
> > >                    "dsb sy\n"
> > >
> >
> > As I understand, this operation needs to be done atomically. So - even
> > though your change is just for compile testing - the memcpy_toio() may
> > not do the same thing on other archs, right?
> >
> > I just wonder if it's right to make that change, or at least warn the
> > imaginary user of possible malfunction for !arm64.
>
> It's probably not necessary here. From what I can tell from the documentation,
> this is only safe on ARMv8.4 or higher anyway, earlier ARMv8.x implementations
> don't guarantee that an stp arrives on the bus in one piece either.
>
> Usually, hardware like this has no hard requirement on an atomic store,
> it just needs the individual bits to arrive in a particular order, and then
> triggers the update on the last bit that gets stored. If that is the case here
> as well, it might actually be better to use two writeq_relaxed() and
> > a barrier. This would also solve the endianness issue.

See also https://lkml.org/lkml/2018/1/26/554 for a previous attempt
to introduce 128-bit MMIO accessors, this got rejected since they
are not atomic even on ARMv8.4.

    Arnd
John Garry Sept. 20, 2019, 2:16 p.m. UTC | #4
On 20/09/2019 14:36, Arnd Bergmann wrote:
> On Fri, Sep 20, 2019 at 3:26 PM Arnd Bergmann <arnd@arndb.de> wrote:
>>
>> On Fri, Sep 20, 2019 at 10:34 AM John Garry <john.garry@huawei.com> wrote:
>>
>>>> +     if (!IS_ENABLED(CONFIG_ARM64)) {
>>>> +             memcpy_toio(fun_base, src, 16);
>>>> +             wmb();
>>>> +             return;
>>>> +     }
>>>> +
>>>>       asm volatile("ldp %0, %1, %3\n"
>>>>                    "stp %0, %1, %2\n"
>>>>                    "dsb sy\n"
>>>>
>>>
>>> As I understand, this operation needs to be done atomically. So - even
>>> though your change is just for compile testing - the memcpy_toio() may
>>> not do the same thing on other archs, right?
>>>
>>> I just wonder if it's right to make that change, or at least warn the
>>> imaginary user of possible malfunction for !arm64.
>>

Hi Arnd,

>> It's probably not necessary here. From what I can tell from the documentation,
>> this is only safe on ARMv8.4 or higher anyway, earlier ARMv8.x implementations
>> don't guarantee that an stp arrives on the bus in one piece either.
>>
>> Usually, hardware like this has no hard requirement on an atomic store,
>> it just needs the individual bits to arrive in a particular order, and then
>> triggers the update on the last bit that gets stored. If that is the case here
>> as well, it might actually be better to use two writeq_relaxed() and
>> a barrier. This would also solve the endianness issue.
>
> See also https://lkml.org/lkml/2018/1/26/554 for a previous attempt
> to introduce 128-bit MMIO accessors, this got rejected since they
> are not atomic even on ARMv8.4.

So this is proprietary IP integrated with a proprietary ARMv8 
implementation, so there could be a tight coupling, the like of which 
Will mentioned in that thread, but I'm doubtful.

I'm looking at the electronically translated documentation on this HW, 
and it reads "The Mailbox operation performed by the CPU cannot be 
interleaved", and then tells that software should lock against 
concurrent accesses or alternatively use a 128-bit access. So it seems 
that the 128b op used is only to guarantee software is atomic.

Wang Zhou can confirm my understanding.

If true, I see that we seem to be already guaranteeing mutual exclusion 
in qm_mb(), in taking a mutex.

Thanks,
John


>
>     Arnd
>
> .
>
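
For reference, the mutual exclusion John mentions in qm_mb() would look roughly like this (a hypothetical outline; qm->mailbox_lock, struct qm_mailbox and qm_mb_wait() are illustrative names, not the driver's actual identifiers):

    static int qm_mb_locked(struct hisi_qm *qm, const struct qm_mailbox *mb)
    {
            int ret;

            mutex_lock(&qm->mailbox_lock);  /* serialize callers in this kernel */
            qm_mb_write(qm, mb);            /* the 128-bit MMIO store */
            ret = qm_mb_wait(qm);           /* poll for hardware completion */
            mutex_unlock(&qm->mailbox_lock);

            return ret;
    }

As the next message explains, a mutex like this only serializes one kernel's callers; it cannot cover other PCI functions or userspace drivers sharing the same mailbox hardware.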
Zhou Wang Sept. 21, 2019, 10:26 a.m. UTC | #5
On 2019/9/20 22:16, John Garry wrote:
> On 20/09/2019 14:36, Arnd Bergmann wrote:
>> On Fri, Sep 20, 2019 at 3:26 PM Arnd Bergmann <arnd@arndb.de> wrote:
>>>
>>> On Fri, Sep 20, 2019 at 10:34 AM John Garry <john.garry@huawei.com> wrote:
>>>
>>>>> +     if (!IS_ENABLED(CONFIG_ARM64)) {
>>>>> +             memcpy_toio(fun_base, src, 16);
>>>>> +             wmb();
>>>>> +             return;
>>>>> +     }
>>>>> +
>>>>>       asm volatile("ldp %0, %1, %3\n"
>>>>>                    "stp %0, %1, %2\n"
>>>>>                    "dsb sy\n"
>>>>>
>>>>
>>>> As I understand, this operation needs to be done atomically. So - even
>>>> though your change is just for compile testing - the memcpy_toio() may
>>>> not do the same thing on other archs, right?
>>>>
>>>> I just wonder if it's right to make that change, or at least warn the
>>>> imaginary user of possible malfunction for !arm64.
>>>
> 
> Hi Arnd,
> 
>>> It's probably not necessary here. From what I can tell from the documentation,
>>> this is only safe on ARMv8.4 or higher anyway, earlier ARMv8.x implementations
>>> don't guarantee that an stp arrives on the bus in one piece either.
>>>
>>> Usually, hardware like this has no hard requirement on an atomic store,
>>> it just needs the individual bits to arrive in a particular order, and then
>>> triggers the update on the last bit that gets stored. If that is the case here
>>> as well, it might actually be better to use two writeq_relaxed() and
>>> a barrier. This would also solve the endianness issue.
>>
>> See also https://lkml.org/lkml/2018/1/26/554 for a previous attempt
>> to introduce 128-bit MMIO accessors, this got rejected since they
>> are not atomic even on ARMv8.4.
> 
> So this is proprietary IP integrated with a proprietary ARMv8 implementation,
> so there could be a tight coupling, the like of which Will mentioned in that thread,
> but I'm doubtful.
> 
> I'm looking at the electronically translated documentation on this HW, and it reads
> "The Mailbox operation performed by the CPU cannot be interleaved", and then tells
> that software should lock against concurrent accesses or alternatively use a 128-bit
> access. So it seems that the 128b op used is only to guarantee software is atomic.
> 
> Wang Zhou can confirm my understanding

We have to do a 128-bit atomic write here to trigger a mailbox. The reason is
that one QM hardware entity in one accelerator serves the QM mailbox MMIO interfaces in
the related PF and VFs.

A mutex cannot lock different processing flows in different functions.

As Arnd mentioned, ARMv8.4 extends support for 16-byte atomic stp to some kinds of
normal memory, but for device memory it is still implementation defined. For this
SoC (Kunpeng920), which has QM/ZIP, if the address is 128-bit aligned, stp will be atomic.
The offset of the QM mailbox is 128-bit aligned, so it is safe here.

Best,
Zhou

> 
> If true, I see that we seem to be already guaranteeing mutual exclusion in qm_mb(),
> in taking a mutex.
> 
> Thanks,
> John
> 
> 
>>
>>     Arnd
>>
>> .
>>
> 
> 
> 
> .
>
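
If the 16-byte alignment Zhou describes is the property the driver depends on, a defensive check could document it (a sketch only; the driver does not carry this assertion):

    /* Kunpeng920 contract per Zhou: the 128-bit stp is only atomic
     * when the target address is 128-bit (16-byte) aligned. */
    static void qm_mb_check_alignment(struct hisi_qm *qm)
    {
            WARN_ON(!IS_ALIGNED((unsigned long)qm->io_base +
                                QM_MB_CMD_SEND_BASE, 16));
    }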
John Garry Oct. 2, 2019, 4:47 p.m. UTC | #6
On 21/09/2019 11:26, Zhou Wang wrote:
> On 2019/9/20 22:16, John Garry wrote:
>> On 20/09/2019 14:36, Arnd Bergmann wrote:
>>> On Fri, Sep 20, 2019 at 3:26 PM Arnd Bergmann <arnd@arndb.de> wrote:
>>>>
>>>> On Fri, Sep 20, 2019 at 10:34 AM John Garry <john.garry@huawei.com> wrote:
>>>>
>>>>>> +     if (!IS_ENABLED(CONFIG_ARM64)) {
>>>>>> +             memcpy_toio(fun_base, src, 16);
>>>>>> +             wmb();
>>>>>> +             return;
>>>>>> +     }
>>>>>> +
>>>>>>       asm volatile("ldp %0, %1, %3\n"
>>>>>>                    "stp %0, %1, %2\n"
>>>>>>                    "dsb sy\n"
>>>>>>
>>>>>

***

>>>>> As I understand, this operation needs to be done atomically. So - even
>>>>> though your change is just for compile testing - the memcpy_toio() may
>>>>> not do the same thing on other archs, right?
>>>>>
>>>>> I just wonder if it's right to make that change, or at least warn the
>>>>> imaginary user of possible malfunction for !arm64.
>>>>
>>
>> Hi Arnd,
>>
>>>> It's probably not necessary here. From what I can tell from the documentation,
>>>> this is only safe on ARMv8.4 or higher anyway, earlier ARMv8.x implementations
>>>> don't guarantee that an stp arrives on the bus in one piece either.
>>>>
>>>> Usually, hardware like this has no hard requirement on an atomic store,
>>>> it just needs the individual bits to arrive in a particular order, and then
>>>> triggers the update on the last bit that gets stored. If that is the case here
>>>> as well, it might actually be better to use two writeq_relaxed() and
>>>> a barrier. This would also solve the endianness issue.
>>>
>>> See also https://lkml.org/lkml/2018/1/26/554 for a previous attempt
>>> to introduce 128-bit MMIO accessors, this got rejected since they
>>> are not atomic even on ARMv8.4.
>>
>> So this is proprietary IP integrated with a proprietary ARMv8 implementation,
>> so there could be a tight coupling, the like of which Will mentioned in that thread,
>> but I'm doubtful.
>>
>> I'm looking at the electronically translated documentation on this HW, and it reads
>> "The Mailbox operation performed by the CPU cannot be interleaved", and then tells
>> that software should lock against concurrent accesses or alternatively use a 128-bit
>> access. So it seems that the 128b op used is only to guarantee software is atomic.
>>
>> Wang Zhou can confirm my understanding
>

Just to add a few more details here:

> We have to do a 128-bit atomic write here to trigger a mailbox. The reason is
> that one QM hardware entity in one accelerator serves the QM mailbox MMIO interfaces in
> the related PF and VFs.
>
> A mutex cannot lock different processing flows in different functions.

This means that we can have userspace drivers and the kernel driver 
simultaneously accessing these mailboxes.

>
> As Arnd mentioned, ARMv8.4 extends support for 16-byte atomic stp to some kinds of
> normal memory, but for device memory it is still implementation defined. For this
> SoC (Kunpeng920), which has QM/ZIP, if the address is 128-bit aligned, stp will be atomic.
> The offset of the QM mailbox is 128-bit aligned, so it is safe here.

The strange thing (to me) about this hw is that we cannot interleave 
mailbox accesses on different functions - with 2x 64b accesses, for 
example. This is the reason that we require 128b accesses.

The upshot is that we can't switch to generic code unfortunately.

But I am not sure if the change is right, as I mentioned originally at
*** above.

I'll leave it to Arnd's better judgment :)

Thanks,
John

>
> Best,
> Zhou
>
>>
>> If true, I see that we seem to be already guaranteeing mutual exclusion in qm_mb(),
>> in taking a mutex.
>>
>> Thanks,
>> John
>>
>>
>>>
>>>     Arnd
>>>
>>> .
>>>
>>
>>
>>
>> .
>>
>
>
> .
>
Herbert Xu Oct. 4, 2019, 3:44 p.m. UTC | #7
On Thu, Sep 19, 2019 at 04:09:06PM +0200, Arnd Bergmann wrote:
> To avoid missing arm64 specific warnings that get introduced
> in this driver, allow compile-testing on all 64-bit architectures.
> 
> The only actual arm64 specific code in this driver is an open-
> coded 128 bit MMIO write. On non-arm64 the same can be done
> using memcpy_toio. What I also noticed is that the MMIO store
> (either one) is not endian-safe; this will only work on little-
> endian configurations, so I also add a Kconfig dependency on
> that, regardless of the architecture.
> Finally, a depenndecy on CONFIG_64BIT is needed because of the
> writeq().
> 
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
> v2: actually add !CPU_BIG_ENDIAN dependency as described in the
> changelog
> ---
>  drivers/crypto/hisilicon/Kconfig | 9 ++++++---
>  drivers/crypto/hisilicon/qm.c    | 6 ++++++
>  2 files changed, 12 insertions(+), 3 deletions(-)

Patch applied.  Thanks.

Patch

diff --git a/drivers/crypto/hisilicon/Kconfig b/drivers/crypto/hisilicon/Kconfig
index ebaf91e0146d..7bfcaa7674fd 100644
--- a/drivers/crypto/hisilicon/Kconfig
+++ b/drivers/crypto/hisilicon/Kconfig
@@ -16,14 +16,15 @@  config CRYPTO_DEV_HISI_SEC
 
 config CRYPTO_DEV_HISI_QM
 	tristate
-	depends on ARM64 && PCI && PCI_MSI
+	depends on ARM64 || COMPILE_TEST
+	depends on PCI && PCI_MSI
 	help
 	  HiSilicon accelerator engines use a common queue management
 	  interface. Specific engine driver may use this module.
 
 config CRYPTO_HISI_SGL
 	tristate
-	depends on ARM64
+	depends on ARM64 || COMPILE_TEST
 	help
 	  HiSilicon accelerator engines use a common hardware scatterlist
 	  interface for data format. Specific engine driver may use this
@@ -31,7 +32,9 @@  config CRYPTO_HISI_SGL
 
 config CRYPTO_DEV_HISI_ZIP
 	tristate "Support for HiSilicon ZIP accelerator"
-	depends on ARM64 && PCI && PCI_MSI
+	depends on PCI && PCI_MSI
+	depends on ARM64 || (COMPILE_TEST && 64BIT)
+	depends on !CPU_BIG_ENDIAN || COMPILE_TEST
 	select CRYPTO_DEV_HISI_QM
 	select CRYPTO_HISI_SGL
 	select SG_SPLIT
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index f975c393a603..a8ed699081b7 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -331,6 +331,12 @@  static void qm_mb_write(struct hisi_qm *qm, const void *src)
 	void __iomem *fun_base = qm->io_base + QM_MB_CMD_SEND_BASE;
 	unsigned long tmp0 = 0, tmp1 = 0;
 
+	if (!IS_ENABLED(CONFIG_ARM64)) {
+		memcpy_toio(fun_base, src, 16);
+		wmb();
+		return;
+	}
+
 	asm volatile("ldp %0, %1, %3\n"
 		     "stp %0, %1, %2\n"
 		     "dsb sy\n"