
[v9,6/8] block: set FOLL_PCI_P2PDMA in bio_map_user_iov()

Message ID 20220825152425.6296-7-logang@deltatee.com (mailing list archive)
State New, archived
Series Userspace P2PDMA with O_DIRECT NVMe devices

Commit Message

Logan Gunthorpe Aug. 25, 2022, 3:24 p.m. UTC
When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_alloc_flags(). This allows PCI P2PDMA pages to be
passed from userspace and enables NVMe passthru requests to use
P2PDMA pages.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/blk-map.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
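
For illustration, here is a hypothetical userspace sketch of the path this
patch enables: an NVMe passthru read into PCI P2PDMA memory. The p2pmem
sysfs "allocate" file follows the userspace interface proposed elsewhere in
this series; the PCI address, the namespace device name, and the 512-byte
LBA size are assumptions for the example. NVME_IOCTL_SUBMIT_IO and
struct nvme_user_io are the existing passthru UAPI from <linux/nvme_ioctl.h>.

#include <fcntl.h>
#include <linux/nvme_ioctl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN 4096

int main(void)
{
	/* Map P2P memory exposed by a peer PCI device (path assumed). */
	int pfd = open("/sys/bus/pci/devices/0000:03:00.0/p2pmem/allocate",
		       O_RDWR);
	if (pfd < 0) {
		perror("open p2pmem");
		return 1;
	}

	void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED,
			 pfd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	int nvme = open("/dev/nvme0n1", O_RDWR);
	if (nvme < 0) {
		perror("open nvme");
		return 1;
	}

	/*
	 * Passthru read of one 4K buffer (nblocks is zero-based; 512B
	 * LBAs assumed). The kernel maps buf via bio_map_user_iov(),
	 * which with this patch passes FOLL_PCI_P2PDMA when the queue
	 * supports P2PDMA.
	 */
	struct nvme_user_io io = {
		.opcode  = 0x02,		/* nvme_cmd_read */
		.addr    = (unsigned long)buf,
		.slba    = 0,
		.nblocks = LEN / 512 - 1,
	};
	if (ioctl(nvme, NVME_IOCTL_SUBMIT_IO, &io) < 0)
		perror("NVME_IOCTL_SUBMIT_IO");

	close(nvme);
	munmap(buf, LEN);
	close(pfd);
	return 0;
}

The design point is that GUP refuses P2PDMA pages unless the caller passes
FOLL_PCI_P2PDMA, so a mapping like the one above keeps failing for any
queue that has not opted in.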

Comments

Christoph Hellwig Sept. 5, 2022, 2:36 p.m. UTC | #1
On Thu, Aug 25, 2022 at 09:24:23AM -0600, Logan Gunthorpe wrote:
> When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
> iov_iter_get_pages_alloc_flags(). This allows PCI P2PDMA pages to be
> passed from userspace and enables NVMe passthru requests to use
> P2PDMA pages.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
John Hubbard Sept. 6, 2022, 12:54 a.m. UTC | #2
On 8/25/22 08:24, Logan Gunthorpe wrote:
> When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
> iov_iter_get_pages_alloc_flags(). This allows PCI P2PDMA pages to be
> passed from userspace and enables NVMe passthru requests to use
> P2PDMA pages.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> ---
>  block/blk-map.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-map.c b/block/blk-map.c
> index 7196a6b64c80..1378f49ca5ca 100644
> --- a/block/blk-map.c
> +++ b/block/blk-map.c
> @@ -236,6 +236,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
>  {
>  	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
>  	unsigned int nr_vecs = iov_iter_npages(iter, BIO_MAX_VECS);
> +	unsigned int flags = 0;

A small thing, but I'd also like to see that one named gup_flags instead of flags.

>  	struct bio *bio;
>  	int ret;
>  	int j;
> @@ -248,13 +249,17 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
>  		return -ENOMEM;
>  	bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs, req_op(rq));
>  
> +	if (blk_queue_pci_p2pdma(rq->q))
> +		flags |= FOLL_PCI_P2PDMA;
> +
>  	while (iov_iter_count(iter)) {
>  		struct page **pages;
>  		ssize_t bytes;
>  		size_t offs, added = 0;
>  		int npages;
>  
> -		bytes = iov_iter_get_pages_alloc2(iter, &pages, LONG_MAX, &offs);
> +		bytes = iov_iter_get_pages_alloc_flags(iter, &pages, LONG_MAX,
> +						       &offs, flags);
>  		if (unlikely(bytes <= 0)) {
>  			ret = bytes ? bytes : -EFAULT;
>  			goto out_unmap;

Looks good, please feel free to add:

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,

Patch

diff --git a/block/blk-map.c b/block/blk-map.c
index 7196a6b64c80..1378f49ca5ca 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -236,6 +236,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 {
 	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
 	unsigned int nr_vecs = iov_iter_npages(iter, BIO_MAX_VECS);
+	unsigned int flags = 0;
 	struct bio *bio;
 	int ret;
 	int j;
@@ -248,13 +249,17 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		return -ENOMEM;
 	bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs, req_op(rq));
 
+	if (blk_queue_pci_p2pdma(rq->q))
+		flags |= FOLL_PCI_P2PDMA;
+
 	while (iov_iter_count(iter)) {
 		struct page **pages;
 		ssize_t bytes;
 		size_t offs, added = 0;
 		int npages;
 
-		bytes = iov_iter_get_pages_alloc2(iter, &pages, LONG_MAX, &offs);
+		bytes = iov_iter_get_pages_alloc_flags(iter, &pages, LONG_MAX,
+						       &offs, flags);
 		if (unlikely(bytes <= 0)) {
 			ret = bytes ? bytes : -EFAULT;
 			goto out_unmap;
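
For reference, blk_queue_pci_p2pdma() only returns true for queues that have
opted in via the existing QUEUE_FLAG_PCI_P2PDMA bit; in-tree, nvme-pci sets
this flag when the controller supports P2P transfers. A minimal driver-side
sketch (example_enable_p2pdma is a name made up for this illustration):

#include <linux/blkdev.h>

/*
 * Sketch: a block driver opting its request queue in to P2PDMA. This
 * sets the same bit that blk_queue_pci_p2pdma() tests in the hunk above.
 */
static void example_enable_p2pdma(struct request_queue *q)
{
	blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, q);
}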