
[v2,3/3] block: sed-opal: Cache-line-align the cmd/resp buffers

Message ID 20220929224648.8997-4-Sergey.Semin@baikalelectronics.ru (mailing list archive)
State New, archived
Series block/nvme: Fix DMA-noncoherent platforms support

Commit Message

Serge Semin Sept. 29, 2022, 10:46 p.m. UTC
In accordance with [1], DMA-able memory buffers must be
cacheline-aligned, otherwise the cache write-back and invalidation
performed during the mapping may cause the adjacent data to be lost.
This is specifically required on DMA-noncoherent platforms. Since the
opal_dev.{cmd,resp} buffers are used for DMA by the NVMe and SCSI/SD
drivers, via the nvme_sec_submit() and sd_sec_submit() methods
respectively, we must make sure the passed buffers are
cacheline-aligned to prevent the problem described above.

[1] Documentation/core-api/dma-api.rst

Fixes: 455a7b238cd6 ("block: Add Sed-opal library")
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 block/sed-opal.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
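
For context, here is a minimal, hypothetical sketch (not part of this
patch; the struct, field and function names are made up) of how a DMA
transfer into a non-cacheline-aligned struct member can clobber a
neighbouring field on a DMA-noncoherent platform:

#include <linux/dma-mapping.h>

/* Hypothetical example (not sed-opal code): members sharing a cache line */
struct example_dev {
	u8 resp[16];		/* DMA target, not cacheline-aligned */
	bool busy;		/* may share a cache line with resp[] */
};

static void example_receive(struct device *dev, struct example_dev *ed)
{
	dma_addr_t addr;

	ed->busy = true;	/* dirties the cache line shared with resp[] */

	/*
	 * On a DMA-noncoherent platform, dma_map_single(..., DMA_FROM_DEVICE)
	 * writes back and/or invalidates the cache lines covering resp[].
	 * Because busy shares one of those lines, the CPU's pending update to
	 * it can be thrown away, or a later eviction can overwrite data the
	 * device has already DMA'ed into resp[].
	 */
	addr = dma_map_single(dev, ed->resp, sizeof(ed->resp), DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, addr))
		return;

	/* ... device DMAs into resp[] ... */

	dma_unmap_single(dev, addr, sizeof(ed->resp), DMA_FROM_DEVICE);
}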

Comments

Jonathan Derrick Oct. 3, 2022, 6:24 p.m. UTC | #1
Hi

On 9/29/2022 4:46 PM, Serge Semin wrote:
> In accordance with [1], DMA-able memory buffers must be
> cacheline-aligned, otherwise the cache write-back and invalidation
> performed during the mapping may cause the adjacent data to be lost.
> This is specifically required on DMA-noncoherent platforms. Since the
> opal_dev.{cmd,resp} buffers are used for DMA by the NVMe and SCSI/SD
> drivers, via the nvme_sec_submit() and sd_sec_submit() methods
> respectively, we must make sure the passed buffers are
> cacheline-aligned to prevent the problem described above.
> 
> [1] Documentation/core-api/dma-api.rst
> 
> Fixes: 455a7b238cd6 ("block: Add Sed-opal library")
> Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
> ---
>   block/sed-opal.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/block/sed-opal.c b/block/sed-opal.c
> index 9700197000f2..222acbd1f03a 100644
> --- a/block/sed-opal.c
> +++ b/block/sed-opal.c
> @@ -73,6 +73,7 @@ struct parsed_resp {
>   	struct opal_resp_tok toks[MAX_TOKS];
>   };
>   
> +/* Presumably DMA-able buffers must be cache-aligned */
>   struct opal_dev {
>   	bool supported;
>   	bool mbr_enabled;
> @@ -88,8 +89,8 @@ struct opal_dev {
>   	u64 lowest_lba;
>   
>   	size_t pos;
> -	u8 cmd[IO_BUFFER_LENGTH];
> -	u8 resp[IO_BUFFER_LENGTH];
> +	u8 cmd[IO_BUFFER_LENGTH] ____cacheline_aligned;
> +	u8 resp[IO_BUFFER_LENGTH] ____cacheline_aligned;
I'm with Christoph on this one.
When I see ____cacheline_aligned, I assume it's there for performance
reasons, not to work around a DMA limitation. Can we instead kmalloc()
these buffers (which provides the alignment) to make the intent clearer?
We may also want to add that same comment pointing out that some
architectures require these DMA targets to be cache-aligned.


>   
>   	struct parsed_resp parsed;
>   	size_t prev_d_len;
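
To make the kmalloc() suggestion concrete, one possible shape of the
struct change is sketched below (illustrative only, not the actual
follow-up patch): the buffers become separately allocated pointers,
relying on kmalloc()'s minimum alignment (ARCH_KMALLOC_MINALIGN, which
covers the DMA alignment requirement on noncoherent architectures)
instead of ____cacheline_aligned:

/* Sketch only: cmd/resp become kmalloc()'ed, DMA-safe buffers */
struct opal_dev {
	bool supported;
	bool mbr_enabled;
	/* ... other members unchanged ... */
	size_t pos;
	u8 *cmd;	/* IO_BUFFER_LENGTH bytes, kmalloc()'ed */
	u8 *resp;	/* IO_BUFFER_LENGTH bytes, kmalloc()'ed */

	struct parsed_resp parsed;
	size_t prev_d_len;
	/* ... */
};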
Serge Semin Oct. 4, 2022, 3:32 p.m. UTC | #2
On Mon, Oct 03, 2022 at 12:24:08PM -0600, Jonathan Derrick wrote:
> Hi
> 
> On 9/29/2022 4:46 PM, Serge Semin wrote:
> > In accordance with [1], DMA-able memory buffers must be
> > cacheline-aligned, otherwise the cache write-back and invalidation
> > performed during the mapping may cause the adjacent data to be lost.
> > This is specifically required on DMA-noncoherent platforms. Since the
> > opal_dev.{cmd,resp} buffers are used for DMA by the NVMe and SCSI/SD
> > drivers, via the nvme_sec_submit() and sd_sec_submit() methods
> > respectively, we must make sure the passed buffers are
> > cacheline-aligned to prevent the problem described above.
> > 
> > [1] Documentation/core-api/dma-api.rst
> > 
> > Fixes: 455a7b238cd6 ("block: Add Sed-opal library")
> > Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
> > ---
> >   block/sed-opal.c | 5 +++--
> >   1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/block/sed-opal.c b/block/sed-opal.c
> > index 9700197000f2..222acbd1f03a 100644
> > --- a/block/sed-opal.c
> > +++ b/block/sed-opal.c
> > @@ -73,6 +73,7 @@ struct parsed_resp {
> >   	struct opal_resp_tok toks[MAX_TOKS];
> >   };
> > +/* Presumably DMA-able buffers must be cache-aligned */
> >   struct opal_dev {
> >   	bool supported;
> >   	bool mbr_enabled;
> > @@ -88,8 +89,8 @@ struct opal_dev {
> >   	u64 lowest_lba;
> >   	size_t pos;
> > -	u8 cmd[IO_BUFFER_LENGTH];
> > -	u8 resp[IO_BUFFER_LENGTH];
> > +	u8 cmd[IO_BUFFER_LENGTH] ____cacheline_aligned;
> > +	u8 resp[IO_BUFFER_LENGTH] ____cacheline_aligned;

> I'm with Christoph on this one.
> When I see ____cacheline_aligned, I assume it's there for performance
> reasons, not to work around a DMA limitation. Can we instead kmalloc()
> these buffers (which provides the alignment) to make the intent clearer?
> We may also want to add that same comment pointing out that some
> architectures require these DMA targets to be cache-aligned.

Ok. I'll resend v3 with these buffers kmalloc'ed.

Please note that the SED OPAL entry in the MAINTAINERS list contains your
Intel email address, which bounces messages back (as does Revanth's).
I'll add your new address to my patchset's "To" list, but if you want new
OPAL-related patches sent directly to your linux.dev address, the entry
should be updated.

-Sergey

> 
> 
> >   	struct parsed_resp parsed;
> >   	size_t prev_d_len;
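
Following up on the agreement above, the allocation and teardown could
land in init_opal_dev()/free_opal_dev() roughly along these lines (again
just a sketch of the approach; the actual v3 may place and handle things
differently):

#include <linux/slab.h>

/* Sketch: allocate the DMA-safe command/response buffers at init time */
struct opal_dev *init_opal_dev(void *data, sec_send_recv *send_recv)
{
	struct opal_dev *dev;

	dev = kmalloc(sizeof(*dev), GFP_KERNEL);
	if (!dev)
		return NULL;

	/* kmalloc() guarantees ARCH_KMALLOC_MINALIGN alignment, enough for DMA */
	dev->cmd = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
	dev->resp = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
	if (!dev->cmd || !dev->resp)
		goto err_free;

	/* ... rest of the existing init_opal_dev() body ... */

	return dev;

err_free:
	kfree(dev->resp);
	kfree(dev->cmd);
	kfree(dev);
	return NULL;
}

void free_opal_dev(struct opal_dev *dev)
{
	if (!dev)
		return;

	/* ... existing teardown ... */
	kfree(dev->resp);
	kfree(dev->cmd);
	kfree(dev);
}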

Patch

diff --git a/block/sed-opal.c b/block/sed-opal.c
index 9700197000f2..222acbd1f03a 100644
--- a/block/sed-opal.c
+++ b/block/sed-opal.c
@@ -73,6 +73,7 @@  struct parsed_resp {
 	struct opal_resp_tok toks[MAX_TOKS];
 };
 
+/* Presumably DMA-able buffers must be cache-aligned */
 struct opal_dev {
 	bool supported;
 	bool mbr_enabled;
@@ -88,8 +89,8 @@  struct opal_dev {
 	u64 lowest_lba;
 
 	size_t pos;
-	u8 cmd[IO_BUFFER_LENGTH];
-	u8 resp[IO_BUFFER_LENGTH];
+	u8 cmd[IO_BUFFER_LENGTH] ____cacheline_aligned;
+	u8 resp[IO_BUFFER_LENGTH] ____cacheline_aligned;
 
 	struct parsed_resp parsed;
 	size_t prev_d_len;