From patchwork Sat Sep 17 08:28:26 2016
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 9337033
From: Omar Sandoval
To: Jens Axboe, linux-block@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, Alexei Starovoitov
Subject: [PATCH v4 6/6] sbitmap: re-initialize allocation hints after resize
Date: Sat, 17 Sep 2016 01:28:26 -0700
X-Mailer: git-send-email 2.9.3
From: Omar Sandoval

After a struct sbitmap_queue is resized smaller, the allocation hints may
still be set to bits beyond the new depth of the bitmap. This means that,
for example, if the number of blk-mq tags is reduced through sysfs, more
requests than the nominal queue depth may be in flight.

It's tempting to fix this at resize time by doing a one-time
reinitialization of the hints, but this can race with __sbitmap_queue_get()
updating the hint. Instead, check the hint before we use it. This caused no
measurable performance difference in my synthetic benchmarks.

Signed-off-by: Omar Sandoval
---
 lib/sbitmap.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 928b82a..f736c52 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -246,10 +246,15 @@ EXPORT_SYMBOL_GPL(sbitmap_queue_resize);
 
 int __sbitmap_queue_get(struct sbitmap_queue *sbq)
 {
-	unsigned int hint;
+	unsigned int hint, depth;
 	int nr;
 
 	hint = this_cpu_read(*sbq->alloc_hint);
+	depth = READ_ONCE(sbq->sb.depth);
+	if (unlikely(hint >= depth)) {
+		hint = depth ? prandom_u32() % depth : 0;
+		this_cpu_write(*sbq->alloc_hint, hint);
+	}
 	nr = sbitmap_get(&sbq->sb, hint, sbq->round_robin);
 
 	if (nr == -1) {
@@ -258,7 +263,7 @@ int __sbitmap_queue_get(struct sbitmap_queue *sbq)
 	} else if (nr == hint || unlikely(sbq->round_robin)) {
 		/* Only update the hint if we used it. */
 		hint = nr + 1;
-		if (hint >= sbq->sb.depth - 1)
+		if (hint >= depth - 1)
 			hint = 0;
 		this_cpu_write(*sbq->alloc_hint, hint);
 	}
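
As a side note for readers not familiar with the hint mechanism, below is a
minimal userspace sketch of the clamping logic the hunk above adds. It is an
illustration only, not the kernel implementation: the names model_hint,
model_depth, and pick_hint are hypothetical stand-ins, and plain rand() is
used in place of the per-cpu accessors (this_cpu_read()/this_cpu_write()),
READ_ONCE(), and prandom_u32().

/*
 * Userspace model of the hint check in __sbitmap_queue_get().
 * Hypothetical names; see the patch above for the real code.
 */
#include <stdio.h>
#include <stdlib.h>

static unsigned int model_hint;   /* stands in for the per-cpu alloc_hint */
static unsigned int model_depth;  /* stands in for sbq->sb.depth */

/*
 * If a resize left the cached hint pointing past the new depth, re-seed it
 * to a random value inside [0, depth), mirroring what the patch does.
 */
static unsigned int pick_hint(void)
{
	unsigned int hint = model_hint;
	unsigned int depth = model_depth;

	if (hint >= depth) {
		hint = depth ? (unsigned int)rand() % depth : 0;
		model_hint = hint;
	}
	return hint;
}

int main(void)
{
	model_depth = 128;
	model_hint = 100;	/* hint valid for the old, larger depth */

	model_depth = 64;	/* queue resized smaller */
	printf("hint after resize: %u (depth %u)\n", pick_hint(), model_depth);
	return 0;
}

Note also that the diff reads sbq->sb.depth once with READ_ONCE() and reuses
the same depth value for the later wraparound check, so both checks in one
call see a consistent snapshot even if a concurrent resize changes the depth.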