From patchwork Thu Aug 20 18:03:31 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11726903
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Hannes Reinecke,
    Bart Van Assche, John Garry, Christoph Hellwig
Subject: [PATCH 1/5] blk-mq: define max_order for allocating rqs pages as macro
Date: Fri, 21 Aug 2020 02:03:31 +0800
Message-Id: <20200820180335.3109216-2-ming.lei@redhat.com>
In-Reply-To: <20200820180335.3109216-1-ming.lei@redhat.com>
References: <20200820180335.3109216-1-ming.lei@redhat.com>

Inside blk_mq_alloc_rqs(), 'max_order' is effectively a constant local
variable. Define it as a macro instead; the macro will be reused in a
following patch.
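For a rough sense of what the order-4 cap means in practice, here is a
small standalone sketch; PAGE_SIZE is assumed to be 4096 and rq_size = 448
is purely an illustrative figure (the real per-request footprint depends on
the driver's cmd_size):

	/* Illustration only: how many requests fit into one chunk per order. */
	#include <stdio.h>

	#define MAX_RQS_PAGE_ORDER 4
	#define PAGE_SIZE 4096UL		/* assumed for this example */

	static unsigned long order_to_size(unsigned int order)
	{
		return PAGE_SIZE << order;
	}

	int main(void)
	{
		unsigned long rq_size = 448;	/* hypothetical request size */

		for (int order = MAX_RQS_PAGE_ORDER; order >= 0; order--)
			printf("order %d: %5lu bytes -> %3lu requests per chunk\n",
			       order, order_to_size(order),
			       order_to_size(order) / rq_size);
		return 0;
	}

If a high-order allocation fails under memory pressure, blk_mq_alloc_rqs()
simply retries with the next lower order, so the cap only bounds the best
case.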
Signed-off-by: Ming Lei
Cc: Hannes Reinecke
Cc: Bart Van Assche
Cc: John Garry
Cc: Christoph Hellwig
---
 block/blk-mq.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 77d885805699..f9da2d803c18 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2360,10 +2360,12 @@ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 	return 0;
 }
 
+#define MAX_RQS_PAGE_ORDER 4
+
 int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx, unsigned int depth)
 {
-	unsigned int i, j, entries_per_page, max_order = 4;
+	unsigned int i, j, entries_per_page;
 	size_t rq_size, left;
 	int node;
 
@@ -2382,7 +2384,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	left = rq_size * depth;
 
 	for (i = 0; i < depth; ) {
-		int this_order = max_order;
+		int this_order = MAX_RQS_PAGE_ORDER;
 		struct page *page;
 		int to_do;
 		void *p;

From patchwork Thu Aug 20 18:03:32 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11726905
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Hannes Reinecke,
    Bart Van Assche, John Garry, Christoph Hellwig
Subject: [PATCH 2/5] blk-mq: add helper of blk_mq_get_hw_queue_node
Date: Fri, 21 Aug 2020 02:03:32 +0800
Message-Id: <20200820180335.3109216-3-ming.lei@redhat.com>
In-Reply-To: <20200820180335.3109216-1-ming.lei@redhat.com>
References: <20200820180335.3109216-1-ming.lei@redhat.com>

Add the helper blk_mq_get_hw_queue_node() for retrieving a hw queue's
NUMA node.

Signed-off-by: Ming Lei
Cc: Hannes Reinecke
Cc: Bart Van Assche
Cc: John Garry
Cc: Christoph Hellwig
---
 block/blk-mq.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f9da2d803c18..5019d21e7ff8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2263,6 +2263,18 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 }
 EXPORT_SYMBOL_GPL(blk_mq_submit_bio); /* only for request based dm */
 
+static int blk_mq_get_hw_queue_node(struct blk_mq_tag_set *set,
+		unsigned int hctx_idx)
+{
+	int node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT],
+			hctx_idx);
+
+	if (node == NUMA_NO_NODE)
+		node = set->numa_node;
+
+	return node;
+}
+
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx)
 {
@@ -2309,11 +2321,7 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 					unsigned int reserved_tags)
 {
 	struct blk_mq_tags *tags;
-	int node;
-
-	node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT], hctx_idx);
-	if (node == NUMA_NO_NODE)
-		node = set->numa_node;
+	int node = blk_mq_get_hw_queue_node(set, hctx_idx);
 
 	tags = blk_mq_init_tags(nr_tags, reserved_tags, node,
 				BLK_MQ_FLAG_TO_ALLOC_POLICY(set->flags));
@@ -2367,11 +2375,7 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 {
 	unsigned int i, j, entries_per_page;
 	size_t rq_size, left;
-	int node;
-
-	node = blk_mq_hw_queue_to_node(&set->map[HCTX_TYPE_DEFAULT], hctx_idx);
-	if (node == NUMA_NO_NODE)
-		node = set->numa_node;
+	int node = blk_mq_get_hw_queue_node(set, hctx_idx);
 
 	INIT_LIST_HEAD(&tags->page_list);

From patchwork Thu Aug 20 18:03:33 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11726909
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Hannes Reinecke,
    Bart Van Assche, John Garry, Christoph Hellwig
Subject: [PATCH 3/5] blk-mq: add helpers for allocating/freeing pages of request pool
Date: Fri, 21 Aug 2020 02:03:33 +0800
Message-Id: <20200820180335.3109216-4-ming.lei@redhat.com>
In-Reply-To: <20200820180335.3109216-1-ming.lei@redhat.com>
References: <20200820180335.3109216-1-ming.lei@redhat.com>

Add two helpers for allocating and freeing the pages of the request
pool. No functional change.

Signed-off-by: Ming Lei
Cc: Hannes Reinecke
Cc: Bart Van Assche
Cc: John Garry
Cc: Christoph Hellwig
---
 block/blk-mq.c | 81 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 51 insertions(+), 30 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5019d21e7ff8..65f73b8db477 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2275,6 +2275,53 @@ static int blk_mq_get_hw_queue_node(struct blk_mq_tag_set *set,
 	return node;
 }
 
+static size_t order_to_size(unsigned int order)
+{
+	return (size_t)PAGE_SIZE << order;
+}
+
+static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
+		unsigned min_size)
+{
+	struct page *page;
+	unsigned this_order = order;
+
+	do {
+		page = alloc_pages_node(node,
+			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
+			this_order);
+		if (page)
+			break;
+		if (!this_order--)
+			break;
+		if (order_to_size(this_order) < min_size)
+			break;
+	} while (1);
+
+	if (!page)
+		return NULL;
+
+	page->private = this_order;
+
+	/*
+	 * Allow kmemleak to scan these pages as they contain pointers
+	 * to additional allocations like via ops->init_request().
+	 */
+	kmemleak_alloc(page_address(page), order_to_size(this_order), 1, GFP_NOIO);
+
+	return page;
+}
+
+static void blk_mq_free_rqs_page(struct page *page)
+{
+	/*
+	 * Remove kmemleak object previously allocated in
+	 * blk_mq_alloc_rqs().
+	 */
+	kmemleak_free(page_address(page));
+	__free_pages(page, page->private);
+}
+
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx)
 {
@@ -2296,12 +2343,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	while (!list_empty(&tags->page_list)) {
 		page = list_first_entry(&tags->page_list, struct page, lru);
 		list_del_init(&page->lru);
-		/*
-		 * Remove kmemleak object previously allocated in
-		 * blk_mq_alloc_rqs().
-		 */
-		kmemleak_free(page_address(page));
-		__free_pages(page, page->private);
+		blk_mq_free_rqs_page(page);
 	}
 }
 
@@ -2348,11 +2390,6 @@ struct blk_mq_tags *blk_mq_alloc_rq_map(struct blk_mq_tag_set *set,
 	return tags;
 }
 
-static size_t order_to_size(unsigned int order)
-{
-	return (size_t)PAGE_SIZE << order;
-}
-
 static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 			       unsigned int hctx_idx, int node)
 {
@@ -2396,30 +2433,14 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		while (this_order && left < order_to_size(this_order - 1))
 			this_order--;
 
-		do {
-			page = alloc_pages_node(node,
-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
-				this_order);
-			if (page)
-				break;
-			if (!this_order--)
-				break;
-			if (order_to_size(this_order) < rq_size)
-				break;
-		} while (1);
-
+		page = blk_mq_alloc_rqs_page(node, this_order, rq_size);
 		if (!page)
 			goto fail;
 
-		page->private = this_order;
+		this_order = (int)page->private;
 		list_add_tail(&page->lru, &tags->page_list);
 
 		p = page_address(page);
-		/*
-		 * Allow kmemleak to scan these pages as they contain pointers
-		 * to additional allocations like via ops->init_request().
-		 */
-		kmemleak_alloc(p, order_to_size(this_order), 1, GFP_NOIO);
+
 		entries_per_page = order_to_size(this_order) / rq_size;
 		to_do = min(entries_per_page, depth - i);
 		left -= to_do * rq_size;

From patchwork Thu Aug 20 18:03:34 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11726907
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Hannes Reinecke,
    Bart Van Assche, John Garry, Christoph Hellwig
Subject: [PATCH 4/5] blk-mq: cache freed request pool pages
Date: Fri, 21 Aug 2020 02:03:34 +0800
Message-Id: <20200820180335.3109216-5-ming.lei@redhat.com>
In-Reply-To: <20200820180335.3109216-1-ming.lei@redhat.com>
References: <20200820180335.3109216-1-ming.lei@redhat.com>

blk_mq_queue_tag_busy_iter() and blk_mq_tagset_busy_iter() may iterate
requests via the driver tag. However, driver tag allocation and the
update of tags->rqs[tag] cannot be done atomically, and tags->rqs[tag]
is not cleared before the driver tag is released in the fast path. So
the two iterators may see a stale request via tags->rqs[tag], and that
stale request may already have been freed via blk_mq_update_nr_requests()
or an elevator switch, which triggers a use-after-free warning.

Fix this issue by caching freed request pool pages in a dedicated
per-tagset list, and always try to allocate request pool pages from the
cached pages first. This does waste some memory: at most one request
pool's worth of pages is kept per request queue when the queue's
elevator is switched from a real I/O scheduler to none. The following
patch adds a simple mechanism for reclaiming these unused pages.

Signed-off-by: Ming Lei
Cc: Hannes Reinecke
Cc: Bart Van Assche
Cc: John Garry
Cc: Christoph Hellwig
---
 block/blk-mq.c         | 98 ++++++++++++++++++++++++++++++++++++------
 include/linux/blk-mq.h |  3 ++
 2 files changed, 87 insertions(+), 14 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 65f73b8db477..c644f5cb1549 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2280,12 +2280,58 @@ static size_t order_to_size(unsigned int order)
 	return (size_t)PAGE_SIZE << order;
 }
 
-static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
-		unsigned min_size)
+#define MAX_RQS_PAGE_ORDER 4
+
+static void blk_mq_mark_rqs_page(struct page *page, unsigned order,
+		unsigned hctx_idx)
+{
+	WARN_ON_ONCE(order > MAX_RQS_PAGE_ORDER);
+	WARN_ON_ONCE(hctx_idx > (ULONG_MAX >> fls(MAX_RQS_PAGE_ORDER)));
+
+	page->private = (hctx_idx << fls(MAX_RQS_PAGE_ORDER)) | order;
+}
+
+static unsigned blk_mq_rqs_page_order(struct page *page)
+{
+	return page->private & ((1 << fls(MAX_RQS_PAGE_ORDER)) - 1);
+}
+
+static unsigned blk_mq_rqs_page_hctx_idx(struct page *page)
+{
+	return page->private >> fls(MAX_RQS_PAGE_ORDER);
+}
+
+static struct page *blk_mq_alloc_rqs_page_from_cache(
+		struct blk_mq_tag_set *set, unsigned hctx_idx)
+{
+	struct page *page = NULL, *tmp;
+
+	spin_lock(&set->free_page_list_lock);
+	list_for_each_entry(tmp, &set->free_page_list, lru) {
+		if (blk_mq_rqs_page_hctx_idx(tmp) == hctx_idx) {
+			page = tmp;
+			break;
+		}
+	}
+	if (page)
+		list_del_init(&page->lru);
+	spin_unlock(&set->free_page_list_lock);
+
+	return page;
+}
+
+static struct page *blk_mq_alloc_rqs_page(struct blk_mq_tag_set *set,
+		unsigned hctx_idx, unsigned order, unsigned min_size)
 {
 	struct page *page;
 	unsigned this_order = order;
+	int node;
+
+	page = blk_mq_alloc_rqs_page_from_cache(set, hctx_idx);
+	if (page)
+		return page;
+	node = blk_mq_get_hw_queue_node(set, hctx_idx);
 
 	do {
 		page = alloc_pages_node(node,
 			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY | __GFP_ZERO,
@@ -2301,7 +2347,7 @@ static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
 	if (!page)
 		return NULL;
 
-	page->private = this_order;
+	blk_mq_mark_rqs_page(page, this_order, hctx_idx);
 
 	/*
 	 * Allow kmemleak to scan these pages as they contain pointers
@@ -2312,14 +2358,34 @@ static struct page *blk_mq_alloc_rqs_page(int node, unsigned order,
 	return page;
 }
 
-static void blk_mq_free_rqs_page(struct page *page)
+static void blk_mq_release_rqs_page(struct page *page)
 {
-	/*
-	 * Remove kmemleak object previously allocated in
-	 * blk_mq_alloc_rqs().
-	 */
+	/* Remove kmemleak object previously allocated in blk_mq_alloc_rqs() */
 	kmemleak_free(page_address(page));
-	__free_pages(page, page->private);
+	__free_pages(page, blk_mq_rqs_page_order(page));
+}
+
+static void blk_mq_free_rqs_page(struct blk_mq_tag_set *set, struct page *page)
+{
+	spin_lock(&set->free_page_list_lock);
+	list_add_tail(&page->lru, &set->free_page_list);
+	spin_unlock(&set->free_page_list_lock);
+}
+
+static void blk_mq_release_all_rqs_page(struct blk_mq_tag_set *set)
+{
+	struct page *page;
+	LIST_HEAD(pg_list);
+
+	spin_lock(&set->free_page_list_lock);
+	list_splice_init(&set->free_page_list, &pg_list);
+	spin_unlock(&set->free_page_list_lock);
+
+	while (!list_empty(&pg_list)) {
+		page = list_first_entry(&pg_list, struct page, lru);
+		list_del_init(&page->lru);
+		blk_mq_release_rqs_page(page);
+	}
 }
 
 void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
@@ -2343,7 +2409,7 @@ void blk_mq_free_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 	while (!list_empty(&tags->page_list)) {
 		page = list_first_entry(&tags->page_list, struct page, lru);
 		list_del_init(&page->lru);
-		blk_mq_free_rqs_page(page);
+		blk_mq_free_rqs_page(set, page);
 	}
 }
 
@@ -2405,8 +2471,6 @@ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 	return 0;
 }
 
-#define MAX_RQS_PAGE_ORDER 4
-
 int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		     unsigned int hctx_idx, unsigned int depth)
 {
@@ -2433,11 +2497,12 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
 		while (this_order && left < order_to_size(this_order - 1))
 			this_order--;
 
-		page = blk_mq_alloc_rqs_page(node, this_order, rq_size);
+		page = blk_mq_alloc_rqs_page(set, hctx_idx, this_order,
+				rq_size);
 		if (!page)
 			goto fail;
 
-		this_order = (int)page->private;
+		this_order = blk_mq_rqs_page_order(page);
 		list_add_tail(&page->lru, &tags->page_list);
 
 		p = page_address(page);
@@ -3460,6 +3525,9 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 	if (ret)
 		goto out_free_mq_map;
 
+	spin_lock_init(&set->free_page_list_lock);
+	INIT_LIST_HEAD(&set->free_page_list);
+
 	ret = blk_mq_alloc_map_and_requests(set);
 	if (ret)
 		goto out_free_mq_map;
@@ -3492,6 +3560,8 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
 		set->map[j].mq_map = NULL;
 	}
 
+	blk_mq_release_all_rqs_page(set);
+
 	kfree(set->tags);
 	set->tags = NULL;
 }
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index ea3461298de5..4c2b135dbbe1 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -247,6 +247,9 @@ struct blk_mq_tag_set {
 
 	struct mutex		tag_list_lock;
 	struct list_head	tag_list;
+
+	spinlock_t		free_page_list_lock;
+	struct list_head	free_page_list;
 };
 
 /**
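A side note on the page->private encoding introduced by
blk_mq_mark_rqs_page() above: with MAX_RQS_PAGE_ORDER == 4, fls(4) is 3, so
the low 3 bits carry the allocation order and the remaining bits carry the
hctx index. A small userspace sketch of the same packing, with fls()
modelled via a compiler builtin and the sample values chosen arbitrarily:

	/* Illustration of the (hctx_idx, order) packing; not kernel code. */
	#include <assert.h>

	#define MAX_RQS_PAGE_ORDER 4

	static unsigned int fls_u(unsigned int x)	/* kernel-style fls(): 1-based MSB index */
	{
		return x ? 32 - __builtin_clz(x) : 0;
	}

	int main(void)
	{
		unsigned int shift = fls_u(MAX_RQS_PAGE_ORDER);	/* 3 */
		unsigned int hctx_idx = 5, order = 2;			/* sample values */

		unsigned long priv = ((unsigned long)hctx_idx << shift) | order;

		/* mirrors blk_mq_rqs_page_order() / blk_mq_rqs_page_hctx_idx() */
		assert((priv & ((1UL << shift) - 1)) == order);
		assert((priv >> shift) == hctx_idx);
		return 0;
	}

Because the order is bounded by MAX_RQS_PAGE_ORDER, the two WARN_ON_ONCE()
checks in blk_mq_mark_rqs_page() guard exactly these two bounds.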
From patchwork Thu Aug 20 18:03:35 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11726911
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Ming Lei, Hannes Reinecke,
    Bart Van Assche, John Garry, Christoph Hellwig
Subject: [PATCH 5/5] blk-mq: check and shrink freed request pool page
Date: Fri, 21 Aug 2020 02:03:35 +0800
Message-Id: <20200820180335.3109216-6-ming.lei@redhat.com>
In-Reply-To: <20200820180335.3109216-1-ming.lei@redhat.com>
References: <20200820180335.3109216-1-ming.lei@redhat.com>

Request pool pages can take a fair amount of space, and each request
queue may keep up to one unused request pool, so the memory waste can
become significant when there are many request queues.

Schedule delayed work to check whether any entry in tags->rqs[] still
refers to a freed request pool page. If no request in tags->rqs[]
refers to the freed page, release it now; otherwise re-schedule the
delayed work after 10 seconds to check and release the pages again.
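The safety condition is that a cached page may only go back to the page
allocator once no tags->rqs[] slot can still point into it. A stripped-down
restatement of that predicate (illustrative userspace types only, mirroring
blk_mq_can_shrink_rqs_page() in the diff below):

	/* Illustration only: the "no live pointer into this chunk" predicate. */
	#include <assert.h>
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>

	/*
	 * A cached chunk may be released only if none of the nr_slots pointers
	 * (standing in for tags->rqs[]) falls inside [start, start + size).
	 */
	static bool chunk_is_unreferenced(void *const *slots, size_t nr_slots,
					  uintptr_t start, size_t size)
	{
		for (size_t i = 0; i < nr_slots; i++) {
			uintptr_t p = (uintptr_t)slots[i];

			if (p >= start && p < start + size)
				return false;	/* some rqs[] slot still points here */
		}
		return true;
	}

	int main(void)
	{
		char chunk[64];
		void *slots[2] = { 0, &chunk[16] };	/* second slot points into the chunk */

		assert(!chunk_is_unreferenced(slots, 2, (uintptr_t)chunk, sizeof(chunk)));
		slots[1] = 0;
		assert(chunk_is_unreferenced(slots, 2, (uintptr_t)chunk, sizeof(chunk)));
		return 0;
	}

This is also why the shrink work re-arms itself: the idea being that stale
tags->rqs[] entries get overwritten over time as new requests reuse those
tags, after which the page becomes releasable.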
Signed-off-by: Ming Lei
Cc: Hannes Reinecke
Cc: Bart Van Assche
Cc: John Garry
Cc: Christoph Hellwig
---
 block/blk-mq.c         | 55 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h |  1 +
 2 files changed, 56 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index c644f5cb1549..2865920086ea 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2365,11 +2365,63 @@ static void blk_mq_release_rqs_page(struct page *page)
 	__free_pages(page, blk_mq_rqs_page_order(page));
 }
 
+#define SHRINK_RQS_PAGE_DELAY	(10 * HZ)
+
 static void blk_mq_free_rqs_page(struct blk_mq_tag_set *set, struct page *page)
 {
 	spin_lock(&set->free_page_list_lock);
 	list_add_tail(&page->lru, &set->free_page_list);
 	spin_unlock(&set->free_page_list_lock);
+
+	schedule_delayed_work(&set->rqs_page_shrink, SHRINK_RQS_PAGE_DELAY);
+}
+
+static bool blk_mq_can_shrink_rqs_page(struct blk_mq_tag_set *set,
+		struct page *pg)
+{
+	unsigned hctx_idx = blk_mq_rqs_page_hctx_idx(pg);
+	struct blk_mq_tags *tags = set->tags[hctx_idx];
+	unsigned long start = (unsigned long)page_address(pg);
+	unsigned long end = start + order_to_size(blk_mq_rqs_page_order(pg));
+	int i;
+
+	for (i = 0; i < set->queue_depth; i++) {
+		unsigned long rq_addr = (unsigned long)tags->rqs[i];
+		if (rq_addr >= start && rq_addr < end)
+			return false;
+	}
+	return true;
+}
+
+static void blk_mq_rqs_page_shrink_work(struct work_struct *work)
+{
+	struct blk_mq_tag_set *set =
+		container_of(work, struct blk_mq_tag_set, rqs_page_shrink.work);
+	LIST_HEAD(pg_list);
+	struct page *page, *tmp;
+	bool resched;
+
+	spin_lock(&set->free_page_list_lock);
+	list_splice_init(&set->free_page_list, &pg_list);
+	spin_unlock(&set->free_page_list_lock);
+
+	mutex_lock(&set->tag_list_lock);
+	list_for_each_entry_safe(page, tmp, &pg_list, lru) {
+		if (blk_mq_can_shrink_rqs_page(set, page)) {
+			list_del_init(&page->lru);
+			blk_mq_release_rqs_page(page);
+		}
+	}
+	mutex_unlock(&set->tag_list_lock);
+
+	spin_lock(&set->free_page_list_lock);
+	list_splice_init(&pg_list, &set->free_page_list);
+	resched = !list_empty(&set->free_page_list);
+	spin_unlock(&set->free_page_list_lock);
+
+	if (resched)
+		schedule_delayed_work(&set->rqs_page_shrink,
+				SHRINK_RQS_PAGE_DELAY);
 }
 
 static void blk_mq_release_all_rqs_page(struct blk_mq_tag_set *set)
@@ -2377,6 +2429,8 @@ static void blk_mq_release_all_rqs_page(struct blk_mq_tag_set *set)
 	struct page *page;
 	LIST_HEAD(pg_list);
 
+	cancel_delayed_work_sync(&set->rqs_page_shrink);
+
 	spin_lock(&set->free_page_list_lock);
 	list_splice_init(&set->free_page_list, &pg_list);
 	spin_unlock(&set->free_page_list_lock);
@@ -3527,6 +3581,7 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 
 	spin_lock_init(&set->free_page_list_lock);
 	INIT_LIST_HEAD(&set->free_page_list);
+	INIT_DELAYED_WORK(&set->rqs_page_shrink, blk_mq_rqs_page_shrink_work);
 
 	ret = blk_mq_alloc_map_and_requests(set);
 	if (ret)
 		goto out_free_mq_map;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 4c2b135dbbe1..b2adf99dbbef 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -250,6 +250,7 @@ struct blk_mq_tag_set {
 
 	spinlock_t		free_page_list_lock;
 	struct list_head	free_page_list;
+	struct delayed_work	rqs_page_shrink;
 };
 
 /**