
[net,v2] s390/ism: fix receive message buffer allocation

Message ID 20240405111606.1785928-1-gbayer@linux.ibm.com (mailing list archive)
State Accepted
Commit 58effa3476536215530c9ec4910ffc981613b413
Delegated to: Netdev Maintainers

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 8 this patch: 8
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 10 of 10 maintainers
netdev/build_clang success Errors and warnings before: 8 this patch: 8
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 8 this patch: 8
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 62 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-04-07--00-00 (tests: 956)

Commit Message

Gerd Bayer April 5, 2024, 11:16 a.m. UTC
Since [1], dma_alloc_coherent() no longer accepts requests for GFP_COMP,
even on archs that may be able to fulfill them. Functionality that
relied on the receive buffer being a compound page broke at that point:
the SMC-D protocol, which utilizes the ism device driver, passes receive
buffers to the splice processor in a struct splice_pipe_desc with a
single-entry list of struct pages. As the buffer is no longer a compound
page, the splice processor now rejects requests to handle more than a
page's worth of data.
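
For illustration, the splice hand-off described above boils down to
roughly the following (a minimal sketch, not the actual net/smc code;
rmb_splice_sketch is an invented name, and the smc_pipe_ops usage is
assumed based on net/smc/smc_rx.c):

	/*
	 * Sketch only: a receive buffer handed to splice as a single
	 * pipe entry. Needs <linux/splice.h> and <linux/pipe_fs_i.h>;
	 * smc_pipe_ops stands in for the pipe_buf_operations that the
	 * real SMC-D receive path supplies.
	 */
	static ssize_t rmb_splice_sketch(struct pipe_inode_info *pipe,
					 void *rmb_addr, size_t len)
	{
		struct page *page = virt_to_page(rmb_addr);
		struct partial_page partial = {
			.offset = 0,
			.len = len,	/* > PAGE_SIZE needs a compound page */
		};
		struct splice_pipe_desc spd = {
			.pages		= &page,
			.partial	= &partial,
			.nr_pages	= 1,
			.nr_pages_max	= 1,
			.ops		= &smc_pipe_ops,
		};

		return splice_to_pipe(pipe, &spd);
	}

Without a compound page backing the buffer, len here is effectively
limited to PAGE_SIZE, which is the breakage described above.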

Replace dma_alloc_coherent(): allocate a buffer with folio_alloc() and
create a DMA mapping for it with dma_map_page(). Since only receive
buffers on ISM devices use DMA, qualify the mapping as FROM_DEVICE.
Since ISM devices are available on s390 only, and on that arch all
DMA is coherent, there is no need to introduce and export some kind of
dma_sync_to_cpu() method to be called by the SMC-D protocol layer.

Analogously, replace dma_free_coherent() with a two-step
dma_unmap_page(), then folio_put(), to free the receive buffer.

[1] https://lore.kernel.org/all/20221113163535.884299-1-hch@lst.de/
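
In code, the replacement pairs folio_alloc() + dma_map_page() on the
allocation side with dma_unmap_page() + folio_put() on the free side,
roughly as follows (a simplified sketch of the pattern with invented
helper names; the actual hunks are in the patch below):

	static void *dmb_buf_alloc_sketch(struct device *dev, size_t len,
					  dma_addr_t *dma_addr)
	{
		struct folio *folio;

		folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN |
				    __GFP_NOMEMALLOC | __GFP_NORETRY,
				    get_order(len));
		if (!folio)
			return NULL;

		/* receive buffers only ever receive data: FROM_DEVICE */
		*dma_addr = dma_map_page(dev, folio_page(folio, 0), 0, len,
					 DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, *dma_addr)) {
			folio_put(folio);
			return NULL;
		}
		return folio_address(folio);
	}

	static void dmb_buf_free_sketch(struct device *dev, void *cpu_addr,
					dma_addr_t dma_addr, size_t len)
	{
		dma_unmap_page(dev, dma_addr, len, DMA_FROM_DEVICE);
		folio_put(virt_to_folio(cpu_addr));
	}

No dma_sync_*() call appears on the receive side since, as noted above,
all DMA on s390 is coherent.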

Fixes: c08004eede4b ("s390/ism: don't pass bogus GFP_ flags to dma_alloc_coherent")
Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com>
---
Sorry for the immediate resend - forgot to mark this for "net"

changelog:
v1->v2:
- replaced kmalloc/kfree with folio_alloc/folio_put
  as suggested.

---
 drivers/s390/net/ism_drv.c | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

Comments

patchwork-bot+netdevbpf@kernel.org April 8, 2024, 11 a.m. UTC | #1
Hello:

This patch was applied to netdev/net.git (main)
by David S. Miller <davem@davemloft.net>:

On Fri,  5 Apr 2024 13:16:06 +0200 you wrote:
> Since [1], dma_alloc_coherent() no longer accepts requests for GFP_COMP,
> even on archs that may be able to fulfill them. Functionality that
> relied on the receive buffer being a compound page broke at that point:
> the SMC-D protocol, which utilizes the ism device driver, passes receive
> buffers to the splice processor in a struct splice_pipe_desc with a
> single-entry list of struct pages. As the buffer is no longer a compound
> page, the splice processor now rejects requests to handle more than a
> page's worth of data.
> 
> [...]

Here is the summary with links:
  - [net,v2] s390/ism: fix receive message buffer allocation
    https://git.kernel.org/netdev/net/c/58effa347653

You are awesome, thank you!
Niklas Schnelle April 8, 2024, 12:40 p.m. UTC | #2
On Mon, 2024-04-08 at 11:00 +0000, patchwork-bot+netdevbpf@kernel.org
wrote:
> Hello:
> 
> This patch was applied to netdev/net.git (main)
> by David S. Miller <davem@davemloft.net>:
> 
> On Fri,  5 Apr 2024 13:16:06 +0200 you wrote:
> > Since [1], dma_alloc_coherent() no longer accepts requests for GFP_COMP,
> > even on archs that may be able to fulfill them. Functionality that
> > relied on the receive buffer being a compound page broke at that point:
> > the SMC-D protocol, which utilizes the ism device driver, passes receive
> > buffers to the splice processor in a struct splice_pipe_desc with a
> > single-entry list of struct pages. As the buffer is no longer a compound
> > page, the splice processor now rejects requests to handle more than a
> > page's worth of data.
> > 
> > [...]
> 
> Here is the summary with links:
>   - [net,v2] s390/ism: fix receive message buffer allocation
>     https://git.kernel.org/netdev/net/c/58effa347653
> 
> You are awesome, thank you!

This version of the patch has an outstanding issue around handling
allocation failure; see the comments on v1 here [0]. Please drop. Gerd
will send a v3 with that issue fixed.

Thanks,
Niklas

[0] https://lore.kernel.org/all/20240405143641.GA5865@lst.de/
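
For reference, the concern in [0] maps onto the ism_alloc_dmb() hunk
applied below in two places: dmb->cpu_addr is assigned from
folio_address() of folio_alloc()'s return value without first checking
the folio for NULL (folio_address() of a failed allocation is not
guaranteed to yield NULL, so the !dmb->cpu_addr test cannot be relied
on), and the out_free: unwind calls kfree() on memory that came from
folio_alloc(). A corrected failure path would presumably keep the folio
around, roughly like this (a sketch of the expected v3 direction, not
the applied code):

	struct folio *folio;

	folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN |
			    __GFP_NOMEMALLOC | __GFP_NORETRY,
			    get_order(dmb->dmb_len));
	if (!folio) {
		rc = -ENOMEM;
		goto out_bit;		/* bitmap cleanup, as in v2 */
	}
	dmb->cpu_addr = folio_address(folio);
	...
	out_free:
		folio_put(folio);	/* not kfree() */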
Gerd Bayer April 8, 2024, 7:05 p.m. UTC | #3
On Mon, 2024-04-08 at 14:40 +0200, Niklas Schnelle wrote:
> On Mon, 2024-04-08 at 11:00 +0000, patchwork-bot+netdevbpf@kernel.org
> wrote:
> > Hello:
> > 
> > This patch was applied to netdev/net.git (main)
> > by David S. Miller <davem@davemloft.net>:
> > 
> > On Fri,  5 Apr 2024 13:16:06 +0200 you wrote:
> > > Since [1], dma_alloc_coherent() no longer accepts requests for
> > > GFP_COMP, even on archs that may be able to fulfill them.
> > > Functionality that relied on the receive buffer being a compound
> > > page broke at that point: the SMC-D protocol, which utilizes the
> > > ism device driver, passes receive buffers to the splice processor
> > > in a struct splice_pipe_desc with a single-entry list of struct
> > > pages. As the buffer is no longer a compound page, the splice
> > > processor now rejects requests to handle more than a page's worth
> > > of data.
> > > 
> > > [...]
> > 
> > Here is the summary with links:
> >   - [net,v2] s390/ism: fix receive message buffer allocation
> >     https://git.kernel.org/netdev/net/c/58effa347653
> > 
> > You are awesome, thank you!
> 
> This version of the patch has an outstanding issue around handling
> allocation failure; see the comments on v1 here [0]. Please drop. Gerd
> will send a v3 with that issue fixed.
> 
> Thanks,
> Niklas
> 
> [0] https://lore.kernel.org/all/20240405143641.GA5865@lst.de/

Hi Dave,

so how do we go forward? Would you revert this v2 in the netdev tree to
have my next v3 properly reviewed?

Second-best option: I can send a fixup to address the last issue from
[0], but that would still leave some pieces sent in v1/v2 without
proper Reviewed-by or Acked-by tags.

Thanks,
Gerd
Jakub Kicinski April 8, 2024, 7:46 p.m. UTC | #4
On Mon, 08 Apr 2024 21:05:47 +0200 Gerd Bayer wrote:
> Hi Dave,

Hi, from the Dave replacement service.

> so how do we go forward? Would you revert this v2 in the netdev tree to
> have my next v3 properly reviewed?
> 
> Second-best option: I can send a fixup to address the last issue from
> [0], but that would still leave some pieces sent in v1/v2 without
> proper Reviewed-by or Acked-by tags.

If there's a chance we can get an incremental fix ready to merge by
Wednesday morning, let's try that. If we fail, please post a revert by
Wednesday morning. On Thu morning EU time we'll ship the fixes to Linus,
so we gotta have the tree in good shape at that point.
Gerd Bayer April 9, 2024, 7:58 a.m. UTC | #5
On Mon, 2024-04-08 at 12:46 -0700, Jakub Kicinski wrote:
> On Mon, 08 Apr 2024 21:05:47 +0200 Gerd Bayer wrote:
> > Hi Dave,
> 
> Hi, from the Dave replacement service.

Hi Jakub,

> If there's a chance we can get an incremental fix ready to merge by
> Wednesday morning, let's try that. If we fail, please post a revert by
> Wednesday morning. On Thu morning EU time we'll ship the fixes to
> Linus, so we gotta have the tree in good shape at that point.

I'm preparing a revert right now, as I'm not sure the review has gotten
to the bottom of things yet.

Thanks,
Gerd

Patch

diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
index 2c8e964425dc..affb05521e14 100644
--- a/drivers/s390/net/ism_drv.c
+++ b/drivers/s390/net/ism_drv.c
@@ -14,6 +14,8 @@ 
 #include <linux/err.h>
 #include <linux/ctype.h>
 #include <linux/processor.h>
+#include <linux/dma-mapping.h>
+#include <linux/mm.h>
 
 #include "ism.h"
 
@@ -292,13 +294,15 @@  static int ism_read_local_gid(struct ism_dev *ism)
 static void ism_free_dmb(struct ism_dev *ism, struct ism_dmb *dmb)
 {
 	clear_bit(dmb->sba_idx, ism->sba_bitmap);
-	dma_free_coherent(&ism->pdev->dev, dmb->dmb_len,
-			  dmb->cpu_addr, dmb->dma_addr);
+	dma_unmap_page(&ism->pdev->dev, dmb->dma_addr, dmb->dmb_len,
+		       DMA_FROM_DEVICE);
+	folio_put(virt_to_folio(dmb->cpu_addr));
 }
 
 static int ism_alloc_dmb(struct ism_dev *ism, struct ism_dmb *dmb)
 {
 	unsigned long bit;
+	int rc;
 
 	if (PAGE_ALIGN(dmb->dmb_len) > dma_get_max_seg_size(&ism->pdev->dev))
 		return -EINVAL;
@@ -315,14 +319,30 @@  static int ism_alloc_dmb(struct ism_dev *ism, struct ism_dmb *dmb)
 	    test_and_set_bit(dmb->sba_idx, ism->sba_bitmap))
 		return -EINVAL;
 
-	dmb->cpu_addr = dma_alloc_coherent(&ism->pdev->dev, dmb->dmb_len,
-					   &dmb->dma_addr,
-					   GFP_KERNEL | __GFP_NOWARN |
-					   __GFP_NOMEMALLOC | __GFP_NORETRY);
-	if (!dmb->cpu_addr)
-		clear_bit(dmb->sba_idx, ism->sba_bitmap);
+	dmb->cpu_addr =
+		folio_address(folio_alloc(GFP_KERNEL | __GFP_NOWARN |
+					  __GFP_NOMEMALLOC | __GFP_NORETRY,
+					  get_order(dmb->dmb_len)));
 
-	return dmb->cpu_addr ? 0 : -ENOMEM;
+	if (!dmb->cpu_addr) {
+		rc = -ENOMEM;
+		goto out_bit;
+	}
+	dmb->dma_addr = dma_map_page(&ism->pdev->dev,
+				     virt_to_page(dmb->cpu_addr), 0,
+				     dmb->dmb_len, DMA_FROM_DEVICE);
+	if (dma_mapping_error(&ism->pdev->dev, dmb->dma_addr)) {
+		rc = -ENOMEM;
+		goto out_free;
+	}
+
+	return 0;
+
+out_free:
+	kfree(dmb->cpu_addr);
+out_bit:
+	clear_bit(dmb->sba_idx, ism->sba_bitmap);
+	return rc;
 }
 
 int ism_register_dmb(struct ism_dev *ism, struct ism_dmb *dmb,