From patchwork Wed Mar 23 19:45:21 2022
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 12790074
From: Mike Snitzer
To: axboe@kernel.dk
Cc: ming.lei@redhat.com, hch@lst.de, dm-devel@redhat.com,
 linux-block@vger.kernel.org
Subject: [PATCH v2 1/4] block: allow BIOSET_PERCPU_CACHE use from
 bio_alloc_clone
Date: Wed, 23 Mar 2022 15:45:21 -0400
Message-Id: <20220323194524.5900-2-snitzer@kernel.org>
In-Reply-To: <20220323194524.5900-1-snitzer@kernel.org>
References: <20220323194524.5900-1-snitzer@kernel.org>
X-Mailing-List: linux-block@vger.kernel.org

These changes allow DM core to make full use of BIOSET_PERCPU_CACHE for
REQ_POLLED bios:

- Factor out bio_alloc_percpu_cache() from bio_alloc_kiocb() to allow
  use by bio_alloc_clone() too.

- Update bioset_init_from_src() to set BIOSET_PERCPU_CACHE if
  bio_src->cache is not NULL.

- Move bio_clear_polled() to include/linux/bio.h to allow use by
  callers outside of block core.
Signed-off-by: Mike Snitzer
---
 block/bio.c         | 56 +++++++++++++++++++++++++++++++++--------------------
 block/blk.h         |  7 -------
 include/linux/bio.h |  7 +++++++
 3 files changed, 42 insertions(+), 28 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index b15f5466ce08..a7633aa82d7d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -420,6 +420,33 @@ static void punt_bios_to_rescuer(struct bio_set *bs)
 	queue_work(bs->rescue_workqueue, &bs->rescue_work);
 }
 
+static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
+		unsigned short nr_vecs, unsigned int opf, gfp_t gfp,
+		struct bio_set *bs)
+{
+	struct bio_alloc_cache *cache;
+	struct bio *bio;
+
+	cache = per_cpu_ptr(bs->cache, get_cpu());
+	if (cache->free_list) {
+		bio = cache->free_list;
+		cache->free_list = bio->bi_next;
+		cache->nr--;
+		put_cpu();
+		bio_init(bio, bdev, nr_vecs ? bio->bi_inline_vecs : NULL,
+			 nr_vecs, opf);
+		bio->bi_pool = bs;
+		bio_set_flag(bio, BIO_PERCPU_CACHE);
+		return bio;
+	}
+	put_cpu();
+	bio = bio_alloc_bioset(bdev, nr_vecs, opf, gfp, bs);
+	if (!bio)
+		return NULL;
+	bio_set_flag(bio, BIO_PERCPU_CACHE);
+	return bio;
+}
+
 /**
  * bio_alloc_bioset - allocate a bio for I/O
  * @bdev: block device to allocate the bio for (can be %NULL)
@@ -768,7 +795,10 @@ struct bio *bio_alloc_clone(struct block_device *bdev, struct bio *bio_src,
 {
 	struct bio *bio;
 
-	bio = bio_alloc_bioset(bdev, 0, bio_src->bi_opf, gfp, bs);
+	if (bs->cache && bio_src->bi_opf & REQ_POLLED)
+		bio = bio_alloc_percpu_cache(bdev, 0, bio_src->bi_opf, gfp, bs);
+	else
+		bio = bio_alloc_bioset(bdev, 0, bio_src->bi_opf, gfp, bs);
 	if (!bio)
 		return NULL;
 
@@ -1736,6 +1766,8 @@ int bioset_init_from_src(struct bio_set *bs, struct bio_set *src)
 		flags |= BIOSET_NEED_BVECS;
 	if (src->rescue_workqueue)
 		flags |= BIOSET_NEED_RESCUER;
+	if (src->cache)
+		flags |= BIOSET_PERCPU_CACHE;
 	return bioset_init(bs, src->bio_pool.min_nr, src->front_pad, flags);
 }
 
@@ -1753,35 +1785,17 @@ EXPORT_SYMBOL(bioset_init_from_src);
  * Like @bio_alloc_bioset, but pass in the kiocb. The kiocb is only
  * used to check if we should dip into the per-cpu bio_set allocation
  * cache. The allocation uses GFP_KERNEL internally. On return, the
- * bio is marked BIO_PERCPU_CACHEABLE, and the final put of the bio
+ * bio is marked BIO_PERCPU_CACHE, and the final put of the bio
  * MUST be done from process context, not hard/soft IRQ.
  *
  */
 struct bio *bio_alloc_kiocb(struct kiocb *kiocb, struct block_device *bdev,
 		unsigned short nr_vecs, unsigned int opf, struct bio_set *bs)
 {
-	struct bio_alloc_cache *cache;
-	struct bio *bio;
-
 	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
 		return bio_alloc_bioset(bdev, nr_vecs, opf, GFP_KERNEL, bs);
 
-	cache = per_cpu_ptr(bs->cache, get_cpu());
-	if (cache->free_list) {
-		bio = cache->free_list;
-		cache->free_list = bio->bi_next;
-		cache->nr--;
-		put_cpu();
-		bio_init(bio, bdev, nr_vecs ? bio->bi_inline_vecs : NULL,
-			 nr_vecs, opf);
-		bio->bi_pool = bs;
-		bio_set_flag(bio, BIO_PERCPU_CACHE);
-		return bio;
-	}
-	put_cpu();
-	bio = bio_alloc_bioset(bdev, nr_vecs, opf, GFP_KERNEL, bs);
-	bio_set_flag(bio, BIO_PERCPU_CACHE);
-	return bio;
+	return bio_alloc_percpu_cache(bdev, nr_vecs, opf, GFP_KERNEL, bs);
 }
 EXPORT_SYMBOL_GPL(bio_alloc_kiocb);

diff --git a/block/blk.h b/block/blk.h
index ebaa59ca46ca..8e338e76d303 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -451,13 +451,6 @@ extern struct device_attribute dev_attr_events;
 extern struct device_attribute dev_attr_events_async;
 extern struct device_attribute dev_attr_events_poll_msecs;
 
-static inline void bio_clear_polled(struct bio *bio)
-{
-	/* can't support alloc cache if we turn off polling */
-	bio_clear_flag(bio, BIO_PERCPU_CACHE);
-	bio->bi_opf &= ~REQ_POLLED;
-}
-
 long blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);
 long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg);

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 7523aba4ddf7..709663ae757a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -787,6 +787,13 @@ static inline void bio_set_polled(struct bio *bio, struct kiocb *kiocb)
 	bio->bi_opf |= REQ_NOWAIT;
 }
 
+static inline void bio_clear_polled(struct bio *bio)
+{
+	/* can't support alloc cache if we turn off polling */
+	bio_clear_flag(bio, BIO_PERCPU_CACHE);
+	bio->bi_opf &= ~REQ_POLLED;
+}
+
 struct bio *blk_next_bio(struct bio *bio, struct block_device *bdev,
 		unsigned int nr_pages, unsigned int opf, gfp_t gfp);

From patchwork Wed Mar 23 19:45:22 2022
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 12790075
From: Mike Snitzer
To: axboe@kernel.dk
Cc: ming.lei@redhat.com, hch@lst.de, dm-devel@redhat.com,
 linux-block@vger.kernel.org
Subject: [PATCH v2 2/4] block: allow BIOSET_PERCPU_CACHE use from
 bio_alloc_bioset
Date: Wed, 23 Mar 2022 15:45:22 -0400
Message-Id: <20220323194524.5900-3-snitzer@kernel.org>
In-Reply-To: <20220323194524.5900-1-snitzer@kernel.org>
References: <20220323194524.5900-1-snitzer@kernel.org>

Add REQ_ALLOC_CACHE and set it in the %opf passed to bio_alloc_bioset
to inform bio_alloc_bioset (and any stacked block drivers) that the bio
should, if possible, be allocated from the respective bioset's per-cpu
alloc cache.

This decouples access control to the alloc cache (via REQ_ALLOC_CACHE)
from actual participation in a specific alloc cache (BIO_PERCPU_CACHE).
Without this split, an upper layer's bioset may not have an alloc
cache, in which case the bio issued to the underlying device(s)
wouldn't reflect that allocation from an alloc cache is warranted (if
possible).
Signed-off-by: Mike Snitzer
---
 block/bio.c               | 33 ++++++++++++++++++++-------------
 include/linux/bio.h       |  4 +++-
 include/linux/blk_types.h |  4 +++-
 3 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index a7633aa82d7d..0b65ea241f54 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -440,11 +440,7 @@ static struct bio *bio_alloc_percpu_cache(struct block_device *bdev,
 		return bio;
 	}
 	put_cpu();
-	bio = bio_alloc_bioset(bdev, nr_vecs, opf, gfp, bs);
-	if (!bio)
-		return NULL;
-	bio_set_flag(bio, BIO_PERCPU_CACHE);
-	return bio;
+	return NULL;
 }
 
 /**
@@ -488,11 +484,24 @@ struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
 	gfp_t saved_gfp = gfp_mask;
 	struct bio *bio;
 	void *p;
+	bool use_alloc_cache;
 
 	/* should not use nobvec bioset for nr_vecs > 0 */
 	if (WARN_ON_ONCE(!mempool_initialized(&bs->bvec_pool) && nr_vecs > 0))
 		return NULL;
 
+	use_alloc_cache = (bs->cache && (opf & REQ_ALLOC_CACHE) &&
+			   nr_vecs <= BIO_INLINE_VECS);
+	if (use_alloc_cache) {
+		bio = bio_alloc_percpu_cache(bdev, nr_vecs, opf, gfp_mask, bs);
+		if (bio)
+			return bio;
+		/*
+		 * No cached bio available, mark bio returned below to
+		 * participate in per-cpu alloc cache.
+		 */
+	}
+
 	/*
 	 * submit_bio_noacct() converts recursion to iteration; this means if
 	 * we're running beneath it, any bios we allocate and submit will not be
@@ -546,6 +555,8 @@ struct bio *bio_alloc_bioset(struct block_device *bdev, unsigned short nr_vecs,
 		bio_init(bio, bdev, NULL, 0, opf);
 	}
 
+	if (use_alloc_cache)
+		bio_set_flag(bio, BIO_PERCPU_CACHE);
 	bio->bi_pool = bs;
 	return bio;
 
@@ -795,10 +806,7 @@ struct bio *bio_alloc_clone(struct block_device *bdev, struct bio *bio_src,
 {
 	struct bio *bio;
 
-	if (bs->cache && bio_src->bi_opf & REQ_POLLED)
-		bio = bio_alloc_percpu_cache(bdev, 0, bio_src->bi_opf, gfp, bs);
-	else
-		bio = bio_alloc_bioset(bdev, 0, bio_src->bi_opf, gfp, bs);
+	bio = bio_alloc_bioset(bdev, 0, bio_src->bi_opf, gfp, bs);
 	if (!bio)
 		return NULL;
 
@@ -1792,10 +1800,9 @@ EXPORT_SYMBOL(bioset_init_from_src);
 struct bio *bio_alloc_kiocb(struct kiocb *kiocb, struct block_device *bdev,
 		unsigned short nr_vecs, unsigned int opf, struct bio_set *bs)
 {
-	if (!(kiocb->ki_flags & IOCB_ALLOC_CACHE) || nr_vecs > BIO_INLINE_VECS)
-		return bio_alloc_bioset(bdev, nr_vecs, opf, GFP_KERNEL, bs);
-
-	return bio_alloc_percpu_cache(bdev, nr_vecs, opf, GFP_KERNEL, bs);
+	if (kiocb->ki_flags & IOCB_ALLOC_CACHE)
+		opf |= REQ_ALLOC_CACHE;
+	return bio_alloc_bioset(bdev, nr_vecs, opf, GFP_KERNEL, bs);
 }
 EXPORT_SYMBOL_GPL(bio_alloc_kiocb);

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 709663ae757a..1be27e87a1f4 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -783,6 +783,8 @@ static inline int bio_integrity_add_page(struct bio *bio, struct page *page,
 static inline void bio_set_polled(struct bio *bio, struct kiocb *kiocb)
 {
 	bio->bi_opf |= REQ_POLLED;
+	if (kiocb->ki_flags & IOCB_ALLOC_CACHE)
+		bio->bi_opf |= REQ_ALLOC_CACHE;
 	if (!is_sync_kiocb(kiocb))
 		bio->bi_opf |= REQ_NOWAIT;
 }
@@ -791,7 +793,7 @@ static inline void bio_clear_polled(struct bio *bio)
 {
 	/* can't support alloc cache if we turn off polling */
 	bio_clear_flag(bio, BIO_PERCPU_CACHE);
-	bio->bi_opf &= ~REQ_POLLED;
+	bio->bi_opf &= ~(REQ_POLLED | REQ_ALLOC_CACHE);
 }
 
 struct bio *blk_next_bio(struct bio *bio, struct block_device *bdev,

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 5561e58d158a..5f9a0c39d4c5 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -327,7 +327,7 @@ enum {
 	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
 	BIO_REMAPPED,
 	BIO_ZONE_WRITE_LOCKED,	/* Owns a zoned device zone write lock */
-	BIO_PERCPU_CACHE,	/* can participate in per-cpu alloc cache */
+	BIO_PERCPU_CACHE,	/* participates in per-cpu alloc cache */
 	BIO_FLAG_LAST
 };
 
@@ -414,6 +414,7 @@ enum req_flag_bits {
 	__REQ_NOUNMAP,		/* do not free blocks when zeroing */
 	__REQ_POLLED,		/* caller polls for completion using bio_poll */
+	__REQ_ALLOC_CACHE,	/* allocate IO from cache if available */
 
 	/* for driver use */
 	__REQ_DRV,
@@ -439,6 +440,7 @@
 #define REQ_NOUNMAP	(1ULL << __REQ_NOUNMAP)
 #define REQ_POLLED	(1ULL << __REQ_POLLED)
+#define REQ_ALLOC_CACHE	(1ULL << __REQ_ALLOC_CACHE)
 
 #define REQ_DRV		(1ULL << __REQ_DRV)
 #define REQ_SWAP	(1ULL << __REQ_SWAP)

From patchwork Wed Mar 23 19:45:23 2022
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 12790076
From: Mike Snitzer
To: axboe@kernel.dk
Cc: ming.lei@redhat.com, hch@lst.de, dm-devel@redhat.com,
 linux-block@vger.kernel.org
Subject: [PATCH v2 3/4] dm: enable BIOSET_PERCPU_CACHE for dm_io bioset
Date: Wed, 23 Mar 2022 15:45:23 -0400
Message-Id: <20220323194524.5900-4-snitzer@kernel.org>
In-Reply-To: <20220323194524.5900-1-snitzer@kernel.org>
References: <20220323194524.5900-1-snitzer@kernel.org>

Also change dm_io_complete() to use bio_clear_polled() so that it
properly clears all associated bio state (REQ_POLLED, BIO_PERCPU_CACHE,
etc.).

This commit improves DM's hipri bio polling (REQ_POLLED) performance
by ~7%.

Signed-off-by: Mike Snitzer
---
 drivers/md/dm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 1c4d1e12d74b..b3cb2c1aea2a 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -899,9 +899,9 @@ static void dm_io_complete(struct dm_io *io)
 		/*
 		 * Upper layer won't help us poll split bio, io->orig_bio
 		 * may only reflect a subset of the pre-split original,
-		 * so clear REQ_POLLED in case of requeue
+		 * so clear REQ_POLLED and BIO_PERCPU_CACHE on requeue.
 		 */
-		bio->bi_opf &= ~REQ_POLLED;
+		bio_clear_polled(bio);
 		return;
 	}
 
@@ -3016,7 +3016,7 @@ struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, enum dm_qu
 	pool_size = max(dm_get_reserved_bio_based_ios(), min_pool_size);
 	front_pad = roundup(per_io_data_size, __alignof__(struct dm_target_io)) + DM_TARGET_IO_BIO_OFFSET;
 	io_front_pad = roundup(per_io_data_size, __alignof__(struct dm_io)) + DM_IO_BIO_OFFSET;
-	ret = bioset_init(&pools->io_bs, pool_size, io_front_pad, 0);
+	ret = bioset_init(&pools->io_bs, pool_size, io_front_pad, BIOSET_PERCPU_CACHE);
 	if (ret)
 		goto out;
 	if (integrity && bioset_integrity_create(&pools->io_bs, pool_size))

From patchwork Wed Mar 23 19:45:24 2022
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 12790077
From: Mike Snitzer
To: axboe@kernel.dk
Cc: ming.lei@redhat.com, hch@lst.de, dm-devel@redhat.com,
 linux-block@vger.kernel.org
Subject: [PATCH v2 4/4] dm: conditionally enable BIOSET_PERCPU_CACHE for
 bio-based dm_io bioset
Date: Wed, 23 Mar 2022 15:45:24 -0400
Message-Id: <20220323194524.5900-5-snitzer@kernel.org>
In-Reply-To: <20220323194524.5900-1-snitzer@kernel.org>
References: <20220323194524.5900-1-snitzer@kernel.org>

A bioset's percpu cache may have broader utility in the future, but for
now constrain it to being tightly coupled to QUEUE_FLAG_POLL.

Signed-off-by: Mike Snitzer
---
 drivers/md/dm-table.c | 11 ++++++++---
 drivers/md/dm.c       |  6 +++---
 drivers/md/dm.h       |  4 ++--
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index c0be4f60b427..7ebc70e3eb2f 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1002,6 +1002,8 @@ bool dm_table_request_based(struct dm_table *t)
 	return __table_type_request_based(dm_table_get_type(t));
 }
 
+static int dm_table_supports_poll(struct dm_table *t);
+
 static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
 {
 	enum dm_queue_mode type = dm_table_get_type(t);
@@ -1009,21 +1011,24 @@ static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *
 	unsigned min_pool_size = 0;
 	struct dm_target *ti;
 	unsigned i;
+	bool poll_supported = false;
 
 	if (unlikely(type == DM_TYPE_NONE)) {
 		DMWARN("no table type is set, can't allocate mempools");
 		return -EINVAL;
 	}
 
-	if (__table_type_bio_based(type))
+	if (__table_type_bio_based(type)) {
 		for (i = 0; i < t->num_targets; i++) {
 			ti = t->targets + i;
 			per_io_data_size = max(per_io_data_size, ti->per_io_data_size);
 			min_pool_size = max(min_pool_size, ti->num_flush_bios);
 		}
+		poll_supported = !!dm_table_supports_poll(t);
+	}
 
-	t->mempools = dm_alloc_md_mempools(md, type, t->integrity_supported,
-					   per_io_data_size, min_pool_size);
+	t->mempools = dm_alloc_md_mempools(md, type, per_io_data_size, min_pool_size,
+					   t->integrity_supported, poll_supported);
 	if (!t->mempools)
 		return -ENOMEM;

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index b3cb2c1aea2a..ebd7919e555f 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2999,8 +2999,8 @@ int dm_noflush_suspending(struct dm_target *ti)
 EXPORT_SYMBOL_GPL(dm_noflush_suspending);
 
 struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, enum dm_queue_mode type,
-					    unsigned integrity, unsigned per_io_data_size,
-					    unsigned min_pool_size)
+					    unsigned per_io_data_size, unsigned min_pool_size,
+					    bool integrity, bool poll)
 {
 	struct dm_md_mempools *pools = kzalloc_node(sizeof(*pools), GFP_KERNEL, md->numa_node_id);
 	unsigned int pool_size = 0;
@@ -3016,7 +3016,7 @@ struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, enum dm_qu
 	pool_size = max(dm_get_reserved_bio_based_ios(), min_pool_size);
 	front_pad = roundup(per_io_data_size, __alignof__(struct dm_target_io)) + DM_TARGET_IO_BIO_OFFSET;
 	io_front_pad = roundup(per_io_data_size, __alignof__(struct dm_io)) + DM_IO_BIO_OFFSET;
-	ret = bioset_init(&pools->io_bs, pool_size, io_front_pad, BIOSET_PERCPU_CACHE);
+	ret = bioset_init(&pools->io_bs, pool_size, io_front_pad, poll ? BIOSET_PERCPU_CACHE : 0);
 	if (ret)
 		goto out;
 	if (integrity && bioset_integrity_create(&pools->io_bs, pool_size))

diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 9013dc1a7b00..3f89664fea01 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -221,8 +221,8 @@ void dm_kcopyd_exit(void);
  * Mempool operations
  */
 struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, enum dm_queue_mode type,
-					    unsigned integrity, unsigned per_bio_data_size,
-					    unsigned min_pool_size);
+					    unsigned per_io_data_size, unsigned min_pool_size,
+					    bool integrity, bool poll);
 void dm_free_md_mempools(struct dm_md_mempools *pools);
 
 /*