rbd: use GFP_NOIO consistently for request allocations

Message ID 1459847619-27819-1-git-send-email-ddiss@suse.de (mailing list archive)
State New, archived

Commit Message

David Disseldorp April 5, 2016, 9:13 a.m. UTC
As of commit 5a60e87603c4c533492c515b7f62578189b03c9c, RBD object request
allocations are made via rbd_obj_request_create() with GFP_NOIO.
However, subsequent OSD request allocations in rbd_osd_req_create*()
still use GFP_ATOMIC.

With heavy page cache usage (e.g. OSDs running on the same host as the
krbd client), order-1 GFP_ATOMIC allocations in rbd_osd_req_create() have
been observed to fail, where direct reclaim would have allowed GFP_NOIO
allocations to succeed.

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Suggested-by: Neil Brown <neilb@suse.com>
Signed-off-by: David Disseldorp <ddiss@suse.de>
---
 drivers/block/rbd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
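
For context on the flag semantics (an illustrative sketch, not part of
the patch -- the helper names are hypothetical): GFP_ATOMIC cannot sleep
and therefore cannot enter direct reclaim, whereas GFP_NOIO may sleep and
reclaim pages but will not issue new I/O, which is the appropriate
constraint for allocations made on the block I/O path.

	#include <linux/gfp.h>
	#include <linux/slab.h>

	static void *req_alloc_atomic(size_t size)
	{
		/*
		 * GFP_ATOMIC: usable in interrupt context, but cannot
		 * sleep, so no direct reclaim -- under heavy page cache
		 * pressure an order-1 (8 KiB) allocation can fail
		 * outright.
		 */
		return kmalloc(size, GFP_ATOMIC);
	}

	static void *req_alloc_noio(size_t size)
	{
		/*
		 * GFP_NOIO: may sleep and perform direct reclaim, but
		 * will not initiate new I/O, avoiding deadlock when the
		 * allocation itself services block I/O, as the rbd
		 * request allocations here do.
		 */
		return kmalloc(size, GFP_NOIO);
	}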

Comments

Ilya Dryomov April 5, 2016, 9:41 a.m. UTC | #1
On Tue, Apr 5, 2016 at 11:13 AM, David Disseldorp <ddiss@suse.de> wrote:
> As of commit 5a60e87603c4c533492c515b7f62578189b03c9c, RBD object request
> allocations are made via rbd_obj_request_create() with GFP_NOIO.
> However, subsequent OSD request allocations in rbd_osd_req_create*()
> still use GFP_ATOMIC.
>
> With heavy page cache usage (e.g. OSDs running on the same host as the
> krbd client), order-1 GFP_ATOMIC allocations in rbd_osd_req_create() have
> been observed to fail, where direct reclaim would have allowed GFP_NOIO
> allocations to succeed.
>
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Suggested-by: Neil Brown <neilb@suse.com>
> Signed-off-by: David Disseldorp <ddiss@suse.de>
> ---
>  drivers/block/rbd.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 9c62344..94a1843 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -1953,7 +1953,7 @@ static struct ceph_osd_request *rbd_osd_req_create(
>
>         osdc = &rbd_dev->rbd_client->client->osdc;
>         osd_req = ceph_osdc_alloc_request(osdc, snapc, num_ops, false,
> -                                         GFP_ATOMIC);
> +                                         GFP_NOIO);
>         if (!osd_req)
>                 return NULL;    /* ENOMEM */
>
> @@ -2002,7 +2002,7 @@ rbd_osd_req_create_copyup(struct rbd_obj_request *obj_request)
>         rbd_dev = img_request->rbd_dev;
>         osdc = &rbd_dev->rbd_client->client->osdc;
>         osd_req = ceph_osdc_alloc_request(osdc, snapc, num_osd_ops,
> -                                               false, GFP_ATOMIC);
> +                                               false, GFP_NOIO);
>         if (!osd_req)
>                 return NULL;    /* ENOMEM */
>
> @@ -2504,7 +2504,7 @@ static int rbd_img_request_fill(struct rbd_img_request *img_request,
>                                         bio_chain_clone_range(&bio_list,
>                                                                 &bio_offset,
>                                                                 clone_size,
> -                                                               GFP_ATOMIC);
> +                                                               GFP_NOIO);
>                         if (!obj_request->bio_list)
>                                 goto out_unwind;
>                 } else if (type == OBJ_REQUEST_PAGES) {

I've got the first two fixed in one of my WIP branches, but it's tied to
ceph_osdc_alloc_request() changes.  Given that allocation failures were
observed, I guess it's worth sending this one to stable right away.
I'll queue it up, thanks David.

                Ilya
David Disseldorp April 5, 2016, 9:50 a.m. UTC | #2
On Tue, 5 Apr 2016 11:41:47 +0200, Ilya Dryomov wrote:

> I've got the first two fixed in one of my wip branches but it's tied to
> ceph_osdc_alloc_request() changes.  Given that allocation failures were
> observed, I guess it's worth sending this one to stable right away.
> I'll queue it up, thanks David.

Feel free to add Cc: stable@ tags as you see fit.
Thanks for the feedback, Ilya!
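
(For reference, a stable tag is just a trailer added to the commit
message before it goes into mainline -- a sketch, with the applicable
version range hypothetical:

	Cc: stable@vger.kernel.org # 4.3+

The stable maintainers then pick the commit up for the tagged series
once it lands in Linus' tree.)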

Patch

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 9c62344..94a1843 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -1953,7 +1953,7 @@ static struct ceph_osd_request *rbd_osd_req_create(
 
 	osdc = &rbd_dev->rbd_client->client->osdc;
 	osd_req = ceph_osdc_alloc_request(osdc, snapc, num_ops, false,
-					  GFP_ATOMIC);
+					  GFP_NOIO);
 	if (!osd_req)
 		return NULL;	/* ENOMEM */
 
@@ -2002,7 +2002,7 @@ rbd_osd_req_create_copyup(struct rbd_obj_request *obj_request)
 	rbd_dev = img_request->rbd_dev;
 	osdc = &rbd_dev->rbd_client->client->osdc;
 	osd_req = ceph_osdc_alloc_request(osdc, snapc, num_osd_ops,
-						false, GFP_ATOMIC);
+						false, GFP_NOIO);
 	if (!osd_req)
 		return NULL;	/* ENOMEM */
 
@@ -2504,7 +2504,7 @@ static int rbd_img_request_fill(struct rbd_img_request *img_request,
 					bio_chain_clone_range(&bio_list,
 								&bio_offset,
 								clone_size,
-								GFP_ATOMIC);
+								GFP_NOIO);
 			if (!obj_request->bio_list)
 				goto out_unwind;
 		} else if (type == OBJ_REQUEST_PAGES) {