From patchwork Fri Feb 22 10:46:14 2013
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 2175611
Message-ID: <51274C76.8080007@acm.org>
Date: Fri, 22 Feb 2013 11:46:14 +0100
From: Bart Van Assche
To: device-mapper development
Cc: Jens Axboe, linux-scsi, Mike Snitzer, James Bottomley, Tejun Heo,
 Alasdair G Kergon
References: <51274C2F.6070500@acm.org>
In-Reply-To: <51274C2F.6070500@acm.org>
Subject: [dm-devel] [PATCH 1/2] block: Avoid invoking blk_run_queue()
 recursively

Some block drivers, e.g. dm and SCSI, need to trigger a queue run from
inside functions that may be invoked by their request_fn()
implementation. Make sure that invoking blk_run_queue() instead of
blk_run_queue_async() from such functions does not trigger recursion.
Making blk_run_queue() skip queue processing when invoked recursively
is safe because the only two affected request_fn() implementations (dm
and SCSI) guarantee that the request queue will be reexamined sooner or
later before their request_fn() implementation returns.

Signed-off-by: Bart Van Assche
Cc: Jens Axboe
Cc: Tejun Heo
Cc: James Bottomley
Cc: Alasdair G Kergon
Cc: Mike Snitzer
Cc:
---
 block/blk-core.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c973249..cf26e3a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -304,19 +304,24 @@ EXPORT_SYMBOL(blk_sync_queue);
  * This variant runs the queue whether or not the queue has been
  * stopped. Must be called with the queue lock held and interrupts
  * disabled. See also @blk_run_queue.
+ *
+ * Note:
+ * Request handling functions are allowed to invoke __blk_run_queue() or
+ * blk_run_queue() directly or indirectly. This will not result in a
+ * recursive call of the request handler. However, such request handling
+ * functions must, before they return, either reexamine the request queue
+ * or invoke blk_delay_queue() to avoid that queue processing stops.
+ *
+ * Some request handler implementations, e.g. scsi_request_fn() and
+ * dm_request_fn(), unlock the queue lock internally. Returning immediately
+ * if q->request_fn_active > 0 avoids that multiple threads execute the
+ * request handling function for the same queue concurrently.
  */
 inline void __blk_run_queue_uncond(struct request_queue *q)
 {
-	if (unlikely(blk_queue_dead(q)))
+	if (unlikely(blk_queue_dead(q) || q->request_fn_active))
 		return;
 
-	/*
-	 * Some request_fn implementations, e.g. scsi_request_fn(), unlock
-	 * the queue lock internally. As a result multiple threads may be
-	 * running such a request function concurrently. Keep track of the
-	 * number of active request_fn invocations such that blk_drain_queue()
-	 * can wait until all these request_fn calls have finished.
-	 */
 	q->request_fn_active++;
 	q->request_fn(q);
 	q->request_fn_active--;
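
For readers outside the block layer, the following stand-alone sketch
illustrates the guard this patch adds to __blk_run_queue_uncond(). It is
not kernel code: struct queue, run_queue(), and handle_requests() are
hypothetical stand-ins for struct request_queue, __blk_run_queue_uncond(),
and a request_fn() implementation, and locking is omitted for brevity.

#include <stdio.h>

/* Hypothetical stand-in for struct request_queue. */
struct queue {
	int pending;		/* number of queued requests        */
	int handler_active;	/* mirrors q->request_fn_active     */
};

static void handle_requests(struct queue *q);

/*
 * Mirrors the patched __blk_run_queue_uncond(): becomes a no-op when
 * invoked, directly or indirectly, from inside the handler itself.
 */
static void run_queue(struct queue *q)
{
	if (q->handler_active)
		return;		/* the recursion guard this patch adds */

	q->handler_active++;
	handle_requests(q);
	q->handler_active--;
}

/*
 * Stand-in for a request_fn() such as scsi_request_fn(): completing a
 * request may trigger another run_queue() call. Swallowing that
 * recursive call is safe only because the handler re-examines the
 * queue in its loop condition before returning, which is exactly the
 * guarantee the commit message relies on for dm and SCSI.
 */
static void handle_requests(struct queue *q)
{
	while (q->pending > 0) {
		q->pending--;
		printf("handled request, %d left\n", q->pending);
		run_queue(q);	/* recursive invocation: returns at once */
	}
}

int main(void)
{
	struct queue q = { .pending = 3, .handler_active = 0 };

	run_queue(&q);		/* each request is handled exactly once */
	return 0;
}

Without the handler_active check, the run_queue() call inside the loop
would re-enter handle_requests() and recurse once per pending request;
with it, the nested call returns immediately and the outer loop drains
the queue iteratively.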