From patchwork Fri Oct 13 12:46:07 2017
X-Patchwork-Submitter: Matias Bjørling
X-Patchwork-Id: 10004629
From: Matias Bjørling
To: axboe@fb.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	Javier González, Matias Bjørling
Subject: [GIT PULL 18/58] lightnvm: pblk: simplify work_queue mempool
Date: Fri, 13 Oct 2017 14:46:07 +0200
Message-Id: <20171013124647.32668-19-m@bjorling.me>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20171013124647.32668-1-m@bjorling.me>
References: <20171013124647.32668-1-m@bjorling.me>
X-Mailing-List: linux-block@vger.kernel.org

From: Javier González

In pblk, we have a mempool to allocate a generic structure that we pass
along workqueues. This is heavily used in the GC path in order to have
enough in-flight reads and fully utilize the GC bandwidth. However, the
current GC path copies data to the host memory and puts it back into the
write buffer. This requires a vmalloc allocation for the data and a
memory copy. Thus, guaranteeing the allocation by using a mempool for
the structure itself does not give us much. Until we implement support
for vector copy to avoid moving data through the host, just allocate
the workqueue structure using kmalloc. This allows us to have a much
smaller mempool.

Reported-by: Jens Axboe
Signed-off-by: Javier González
Signed-off-by: Matias Bjørling
---
 drivers/lightnvm/pblk-core.c  | 13 +++++++------
 drivers/lightnvm/pblk-gc.c    | 32 ++++++++++++++++----------------
 drivers/lightnvm/pblk-init.c  | 32 ++++++++++++++++----------------
 drivers/lightnvm/pblk-write.c |  4 ++--
 drivers/lightnvm/pblk.h       | 11 ++++++-----
 5 files changed, 47 insertions(+), 45 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index f5fbb9a..b925322 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -33,7 +33,8 @@ static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
 		pr_err("pblk: attempted to erase bb: line:%d, pos:%d\n",
 					line->id, pos);
 
-	pblk_line_run_ws(pblk, NULL, ppa, pblk_line_mark_bb, pblk->bb_wq);
+	pblk_gen_run_ws(pblk, NULL, ppa, pblk_line_mark_bb,
+						GFP_ATOMIC, pblk->bb_wq);
 }
 
 static void __pblk_end_io_erase(struct pblk *pblk, struct nvm_rq *rqd)
@@ -1623,7 +1624,7 @@ void pblk_line_close_ws(struct work_struct *work)
 	struct pblk_line *line = line_ws->line;
 
 	pblk_line_close(pblk, line);
-	mempool_free(line_ws, pblk->line_ws_pool);
+	mempool_free(line_ws, pblk->gen_ws_pool);
 }
 
 void pblk_line_mark_bb(struct work_struct *work)
@@ -1648,16 +1649,16 @@ void pblk_line_mark_bb(struct work_struct *work)
 	}
 
 	kfree(ppa);
-	mempool_free(line_ws, pblk->line_ws_pool);
+	mempool_free(line_ws, pblk->gen_ws_pool);
 }
 
-void pblk_line_run_ws(struct pblk *pblk, struct pblk_line *line, void *priv,
-		      void (*work)(struct work_struct *),
+void pblk_gen_run_ws(struct pblk *pblk, struct pblk_line *line, void *priv,
+		     void (*work)(struct work_struct *), gfp_t gfp_mask,
 		      struct workqueue_struct *wq)
 {
 	struct pblk_line_ws *line_ws;
 
-	line_ws = mempool_alloc(pblk->line_ws_pool, GFP_ATOMIC);
+	line_ws = mempool_alloc(pblk->gen_ws_pool, gfp_mask);
 	if (!line_ws)
 		return;
 
diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
index 6090d28..f163829 100644
--- a/drivers/lightnvm/pblk-gc.c
+++ b/drivers/lightnvm/pblk-gc.c
@@ -136,12 +136,12 @@ static void pblk_put_line_back(struct pblk *pblk, struct pblk_line *line)
 
 static void pblk_gc_line_ws(struct work_struct *work)
 {
-	struct pblk_line_ws *line_rq_ws = container_of(work,
+	struct pblk_line_ws *gc_rq_ws = container_of(work,
 					struct pblk_line_ws, ws);
-	struct pblk *pblk = line_rq_ws->pblk;
+	struct pblk *pblk = gc_rq_ws->pblk;
 	struct pblk_gc *gc = &pblk->gc;
-	struct pblk_line *line = line_rq_ws->line;
-	struct pblk_gc_rq *gc_rq = line_rq_ws->priv;
+	struct pblk_line *line = gc_rq_ws->line;
+	struct pblk_gc_rq *gc_rq = gc_rq_ws->priv;
 
 	up(&gc->gc_sem);
 
@@ -151,7 +151,7 @@ static void pblk_gc_line_ws(struct work_struct *work)
 							gc_rq->nr_secs);
 	}
 
-	mempool_free(line_rq_ws, pblk->line_ws_pool);
+	kfree(gc_rq_ws);
 }
 
 static void pblk_gc_line_prepare_ws(struct work_struct *work)
@@ -164,7 +164,7 @@ static void pblk_gc_line_prepare_ws(struct work_struct *work)
 	struct pblk_line_meta *lm = &pblk->lm;
 	struct pblk_gc *gc = &pblk->gc;
 	struct line_emeta *emeta_buf;
-	struct pblk_line_ws *line_rq_ws;
+	struct pblk_line_ws *gc_rq_ws;
 	struct pblk_gc_rq *gc_rq;
 	__le64 *lba_list;
 	int sec_left, nr_secs, bit;
@@ -223,19 +223,19 @@ static void pblk_gc_line_prepare_ws(struct work_struct *work)
 		gc_rq->nr_secs = nr_secs;
 		gc_rq->line = line;
 
-		line_rq_ws = mempool_alloc(pblk->line_ws_pool, GFP_KERNEL);
-		if (!line_rq_ws)
+		gc_rq_ws = kmalloc(sizeof(struct pblk_line_ws), GFP_KERNEL);
+		if (!gc_rq_ws)
 			goto fail_free_gc_rq;
 
-		line_rq_ws->pblk = pblk;
-		line_rq_ws->line = line;
-		line_rq_ws->priv = gc_rq;
+		gc_rq_ws->pblk = pblk;
+		gc_rq_ws->line = line;
+		gc_rq_ws->priv = gc_rq;
 
 		down(&gc->gc_sem);
 		kref_get(&line->ref);
 
-		INIT_WORK(&line_rq_ws->ws, pblk_gc_line_ws);
-		queue_work(gc->gc_line_reader_wq, &line_rq_ws->ws);
+		INIT_WORK(&gc_rq_ws->ws, pblk_gc_line_ws);
+		queue_work(gc->gc_line_reader_wq, &gc_rq_ws->ws);
 
 		sec_left -= nr_secs;
 		if (sec_left > 0)
@@ -243,7 +243,7 @@ static void pblk_gc_line_prepare_ws(struct work_struct *work)
 
 out:
 	pblk_mfree(emeta_buf, l_mg->emeta_alloc_type);
-	mempool_free(line_ws, pblk->line_ws_pool);
+	kfree(line_ws);
 	kref_put(&line->ref, pblk_line_put);
 	atomic_dec(&gc->inflight_gc);
 
@@ -256,7 +256,7 @@ static void pblk_gc_line_prepare_ws(struct work_struct *work)
 	pblk_mfree(emeta_buf, l_mg->emeta_alloc_type);
 	pblk_put_line_back(pblk, line);
 	kref_put(&line->ref, pblk_line_put);
-	mempool_free(line_ws, pblk->line_ws_pool);
+	kfree(line_ws);
 	atomic_dec(&gc->inflight_gc);
 
 	pr_err("pblk: Failed to GC line %d\n", line->id);
@@ -269,7 +269,7 @@ static int pblk_gc_line(struct pblk *pblk, struct pblk_line *line)
 
 	pr_debug("pblk: line '%d' being reclaimed for GC\n", line->id);
 
-	line_ws = mempool_alloc(pblk->line_ws_pool, GFP_KERNEL);
+	line_ws = kmalloc(sizeof(struct pblk_line_ws), GFP_KERNEL);
 	if (!line_ws)
 		return -ENOMEM;
 
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 7b1f29c..3405522 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -20,7 +20,7 @@
 
 #include "pblk.h"
 
-static struct kmem_cache *pblk_blk_ws_cache, *pblk_rec_cache, *pblk_g_rq_cache,
+static struct kmem_cache *pblk_ws_cache, *pblk_rec_cache, *pblk_g_rq_cache,
 				*pblk_w_rq_cache, *pblk_line_meta_cache;
 static DECLARE_RWSEM(pblk_lock);
 struct bio_set *pblk_bio_set;
@@ -184,9 +184,9 @@ static int pblk_init_global_caches(struct pblk *pblk)
 	char cache_name[PBLK_CACHE_NAME_LEN];
 
 	down_write(&pblk_lock);
-	pblk_blk_ws_cache = kmem_cache_create("pblk_blk_ws",
+	pblk_ws_cache = kmem_cache_create("pblk_blk_ws",
 				sizeof(struct pblk_line_ws), 0, 0, NULL);
-	if (!pblk_blk_ws_cache) {
+	if (!pblk_ws_cache) {
 		up_write(&pblk_lock);
 		return -ENOMEM;
 	}
@@ -194,7 +194,7 @@ static int pblk_init_global_caches(struct pblk *pblk)
 	pblk_rec_cache = kmem_cache_create("pblk_rec", sizeof(struct pblk_rec_ctx),
 				0, 0, NULL);
 	if (!pblk_rec_cache) {
-		kmem_cache_destroy(pblk_blk_ws_cache);
+		kmem_cache_destroy(pblk_ws_cache);
 		up_write(&pblk_lock);
 		return -ENOMEM;
 	}
@@ -202,7 +202,7 @@ static int pblk_init_global_caches(struct pblk *pblk)
 	pblk_g_rq_cache = kmem_cache_create("pblk_g_rq", pblk_g_rq_size,
 				0, 0, NULL);
 	if (!pblk_g_rq_cache) {
-		kmem_cache_destroy(pblk_blk_ws_cache);
+		kmem_cache_destroy(pblk_ws_cache);
 		kmem_cache_destroy(pblk_rec_cache);
 		up_write(&pblk_lock);
 		return -ENOMEM;
@@ -211,7 +211,7 @@ static int pblk_init_global_caches(struct pblk *pblk)
 	pblk_w_rq_cache = kmem_cache_create("pblk_w_rq", pblk_w_rq_size,
 				0, 0, NULL);
 	if (!pblk_w_rq_cache) {
-		kmem_cache_destroy(pblk_blk_ws_cache);
+		kmem_cache_destroy(pblk_ws_cache);
 		kmem_cache_destroy(pblk_rec_cache);
 		kmem_cache_destroy(pblk_g_rq_cache);
 		up_write(&pblk_lock);
@@ -223,7 +223,7 @@ static int pblk_init_global_caches(struct pblk *pblk)
 	pblk_line_meta_cache = kmem_cache_create(cache_name,
 				pblk->lm.sec_bitmap_len, 0, 0, NULL);
 	if (!pblk_line_meta_cache) {
-		kmem_cache_destroy(pblk_blk_ws_cache);
+		kmem_cache_destroy(pblk_ws_cache);
 		kmem_cache_destroy(pblk_rec_cache);
 		kmem_cache_destroy(pblk_g_rq_cache);
 		kmem_cache_destroy(pblk_w_rq_cache);
@@ -246,20 +246,20 @@ static int pblk_core_init(struct pblk *pblk)
 	if (pblk_init_global_caches(pblk))
 		return -ENOMEM;
 
-	/* internal bios can be at most the sectors signaled by the device. */
+	/* Internal bios can be at most the sectors signaled by the device. */
 	pblk->page_bio_pool = mempool_create_page_pool(nvm_max_phys_sects(dev), 0);
 	if (!pblk->page_bio_pool)
 		return -ENOMEM;
 
-	pblk->line_ws_pool = mempool_create_slab_pool(PBLK_WS_POOL_SIZE,
-						pblk_blk_ws_cache);
-	if (!pblk->line_ws_pool)
+	pblk->gen_ws_pool = mempool_create_slab_pool(PBLK_GEN_WS_POOL_SIZE,
+						pblk_ws_cache);
+	if (!pblk->gen_ws_pool)
 		goto free_page_bio_pool;
 
 	pblk->rec_pool = mempool_create_slab_pool(geo->nr_luns, pblk_rec_cache);
 	if (!pblk->rec_pool)
-		goto free_blk_ws_pool;
+		goto free_gen_ws_pool;
 
 	pblk->g_rq_pool = mempool_create_slab_pool(PBLK_READ_REQ_POOL_SIZE,
 						pblk_g_rq_cache);
@@ -308,8 +308,8 @@ static int pblk_core_init(struct pblk *pblk)
 	mempool_destroy(pblk->g_rq_pool);
 free_rec_pool:
 	mempool_destroy(pblk->rec_pool);
-free_blk_ws_pool:
-	mempool_destroy(pblk->line_ws_pool);
+free_gen_ws_pool:
+	mempool_destroy(pblk->gen_ws_pool);
 free_page_bio_pool:
 	mempool_destroy(pblk->page_bio_pool);
 	return -ENOMEM;
@@ -324,13 +324,13 @@ static void pblk_core_free(struct pblk *pblk)
 		destroy_workqueue(pblk->bb_wq);
 
 	mempool_destroy(pblk->page_bio_pool);
-	mempool_destroy(pblk->line_ws_pool);
+	mempool_destroy(pblk->gen_ws_pool);
 	mempool_destroy(pblk->rec_pool);
 	mempool_destroy(pblk->g_rq_pool);
 	mempool_destroy(pblk->w_rq_pool);
 	mempool_destroy(pblk->line_meta_pool);
 
-	kmem_cache_destroy(pblk_blk_ws_cache);
+	kmem_cache_destroy(pblk_ws_cache);
 	kmem_cache_destroy(pblk_rec_cache);
 	kmem_cache_destroy(pblk_g_rq_cache);
 	kmem_cache_destroy(pblk_w_rq_cache);
diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
index d82ca8b..c73b17b 100644
--- a/drivers/lightnvm/pblk-write.c
+++ b/drivers/lightnvm/pblk-write.c
@@ -198,8 +198,8 @@ static void pblk_end_io_write_meta(struct nvm_rq *rqd)
 
 	sync = atomic_add_return(rqd->nr_ppas, &emeta->sync);
 	if (sync == emeta->nr_entries)
-		pblk_line_run_ws(pblk, line, NULL, pblk_line_close_ws,
-						pblk->close_wq);
+		pblk_gen_run_ws(pblk, line, NULL, pblk_line_close_ws,
+						GFP_ATOMIC, pblk->close_wq);
 
 	bio_put(rqd->bio);
 	nvm_dev_dma_free(dev->parent, rqd->meta_list, rqd->dma_meta_list);
 
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index 229f602..efaa781 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -40,7 +40,6 @@
 #define PBLK_MAX_REQ_ADDRS (64)
 #define PBLK_MAX_REQ_ADDRS_PW (6)
 
-#define PBLK_WS_POOL_SIZE (128)
 #define PBLK_META_POOL_SIZE (128)
 #define PBLK_READ_REQ_POOL_SIZE (1024)
 
@@ -61,6 +60,8 @@
 #define ERASE 2 /* READ = 0, WRITE = 1 */
 
+#define PBLK_GEN_WS_POOL_SIZE (2)
+
 enum {
 	/* IO Types */
 	PBLK_IOTYPE_USER = 1 << 0,
@@ -621,7 +622,7 @@ struct pblk {
 	struct list_head compl_list;
 
 	mempool_t *page_bio_pool;
-	mempool_t *line_ws_pool;
+	mempool_t *gen_ws_pool;
 	mempool_t *rec_pool;
 	mempool_t *g_rq_pool;
 	mempool_t *w_rq_pool;
@@ -725,9 +726,9 @@ void pblk_line_close_meta_sync(struct pblk *pblk);
 void pblk_line_close_ws(struct work_struct *work);
 void pblk_pipeline_stop(struct pblk *pblk);
 void pblk_line_mark_bb(struct work_struct *work);
-void pblk_line_run_ws(struct pblk *pblk, struct pblk_line *line, void *priv,
-		      void (*work)(struct work_struct *),
-		      struct workqueue_struct *wq);
+void pblk_gen_run_ws(struct pblk *pblk, struct pblk_line *line, void *priv,
+		     void (*work)(struct work_struct *), gfp_t gfp_mask,
+		     struct workqueue_struct *wq);
 u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line);
 int pblk_line_read_smeta(struct pblk *pblk, struct pblk_line *line);
 int pblk_line_read_emeta(struct pblk *pblk, struct pblk_line *line,
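
For reference, below is a minimal sketch (not part of the patch) of the reworked
GC allocation path described in the commit message: the per-request workqueue
context is now obtained with kmalloc() rather than from the shared mempool. The
identifiers (struct pblk_line_ws, pblk_gc_line_ws, gc->gc_line_reader_wq) are
the ones used in the diff above; error handling and surrounding function
context are simplified.

	/*
	 * Sketch only: allocate the GC work context with kmalloc() instead
	 * of the (now much smaller) gen_ws_pool mempool, then queue it on
	 * the GC reader workqueue, mirroring pblk_gc_line_prepare_ws().
	 */
	struct pblk_line_ws *gc_rq_ws;

	gc_rq_ws = kmalloc(sizeof(struct pblk_line_ws), GFP_KERNEL);
	if (!gc_rq_ws)
		return -ENOMEM;		/* the patch uses goto fail_free_gc_rq */

	gc_rq_ws->pblk = pblk;
	gc_rq_ws->line = line;
	gc_rq_ws->priv = gc_rq;

	INIT_WORK(&gc_rq_ws->ws, pblk_gc_line_ws);
	queue_work(gc->gc_line_reader_wq, &gc_rq_ws->ws);

Callers of the remaining mempool user, pblk_gen_run_ws(), now pass an explicit
gfp mask (GFP_ATOMIC from completion context, GFP_KERNEL otherwise), which is
forwarded to mempool_alloc() on gen_ws_pool.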