From patchwork Wed Jul 31 09:41:33 2019
X-Patchwork-Submitter: Hans Holmberg
X-Patchwork-Id: 11067427
From: Hans Holmberg
To: Matias Bjorling
Cc: Christoph Hellwig, Javier González, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, Hans Holmberg
Subject: [PATCH 1/4] lightnvm: remove nvm_submit_io_sync_fn
Date: Wed, 31 Jul 2019 11:41:33 +0200
Message-Id: <1564566096-28756-2-git-send-email-hans@owltronix.com>
In-Reply-To: <1564566096-28756-1-git-send-email-hans@owltronix.com>
References: <1564566096-28756-1-git-send-email-hans@owltronix.com>

Move the redundant sync handling interface and wait for a completion
in the lightnvm core instead.

Signed-off-by: Hans Holmberg
Reviewed-by: Javier González
Reviewed-by: Christoph Hellwig
---
 drivers/lightnvm/core.c      | 35 +++++++++++++++++++++++++++++------
 drivers/nvme/host/lightnvm.c | 29 -----------------------------
 include/linux/lightnvm.h     |  2 --
 3 files changed, 29 insertions(+), 37 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index a600934fdd9c..01d098fb96ac 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -752,12 +752,36 @@ int nvm_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
 }
 EXPORT_SYMBOL(nvm_submit_io);
 
+static void nvm_sync_end_io(struct nvm_rq *rqd)
+{
+	struct completion *waiting = rqd->private;
+
+	complete(waiting);
+}
+
+static int nvm_submit_io_wait(struct nvm_dev *dev, struct nvm_rq *rqd)
+{
+	DECLARE_COMPLETION_ONSTACK(wait);
+	int ret = 0;
+
+	rqd->end_io = nvm_sync_end_io;
+	rqd->private = &wait;
+
+	ret = dev->ops->submit_io(dev, rqd);
+	if (ret)
+		return ret;
+
+	wait_for_completion_io(&wait);
+
+	return 0;
+}
+
 int nvm_submit_io_sync(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
 {
 	struct nvm_dev *dev = tgt_dev->parent;
 	int ret;
 
-	if (!dev->ops->submit_io_sync)
+	if (!dev->ops->submit_io)
 		return -ENODEV;
 
 	nvm_rq_tgt_to_dev(tgt_dev, rqd);
@@ -765,9 +789,7 @@ int nvm_submit_io_sync(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
 	rqd->dev = tgt_dev;
 	rqd->flags = nvm_set_flags(&tgt_dev->geo, rqd);
 
-	/* In case of error, fail with right address format */
-	ret = dev->ops->submit_io_sync(dev, rqd);
-	nvm_rq_dev_to_tgt(tgt_dev, rqd);
+	ret = nvm_submit_io_wait(dev, rqd);
 
 	return ret;
 }
@@ -788,12 +810,13 @@ EXPORT_SYMBOL(nvm_end_io);
 
 static int nvm_submit_io_sync_raw(struct nvm_dev *dev, struct nvm_rq *rqd)
 {
-	if (!dev->ops->submit_io_sync)
+	if (!dev->ops->submit_io)
 		return -ENODEV;
 
+	rqd->dev = NULL;
 	rqd->flags = nvm_set_flags(&dev->geo, rqd);
 
-	return dev->ops->submit_io_sync(dev, rqd);
+	return nvm_submit_io_wait(dev, rqd);
 }
 
 static int nvm_bb_chunk_sense(struct nvm_dev *dev, struct ppa_addr ppa)
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index ba009d4c9dfa..d6f121452d5d 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -690,34 +690,6 @@ static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
 	return 0;
 }
 
-static int nvme_nvm_submit_io_sync(struct nvm_dev *dev, struct nvm_rq *rqd)
-{
-	struct request_queue *q = dev->q;
-	struct request *rq;
-	struct nvme_nvm_command cmd;
-	int ret = 0;
-
-	memset(&cmd, 0, sizeof(struct nvme_nvm_command));
-
-	rq = nvme_nvm_alloc_request(q, rqd, &cmd);
-	if (IS_ERR(rq))
-		return PTR_ERR(rq);
-
-	/* I/Os can fail and the error is signaled through rqd. Callers must
-	 * handle the error accordingly.
-	 */
-	blk_execute_rq(q, NULL, rq, 0);
-	if (nvme_req(rq)->flags & NVME_REQ_CANCELLED)
-		ret = -EINTR;
-
-	rqd->ppa_status = le64_to_cpu(nvme_req(rq)->result.u64);
-	rqd->error = nvme_req(rq)->status;
-
-	blk_mq_free_request(rq);
-
-	return ret;
-}
-
 static void *nvme_nvm_create_dma_pool(struct nvm_dev *nvmdev, char *name,
 				      int size)
 {
@@ -754,7 +726,6 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
 	.get_chk_meta		= nvme_nvm_get_chk_meta,
 
 	.submit_io		= nvme_nvm_submit_io,
-	.submit_io_sync		= nvme_nvm_submit_io_sync,
 
 	.create_dma_pool	= nvme_nvm_create_dma_pool,
 	.destroy_dma_pool	= nvme_nvm_destroy_dma_pool,
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 4d0d5655c7b2..8891647b24b1 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -89,7 +89,6 @@ typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
 typedef int (nvm_get_chk_meta_fn)(struct nvm_dev *, sector_t, int,
 							struct nvm_chk_meta *);
 typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
-typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *);
 typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *, int);
 typedef void (nvm_destroy_dma_pool_fn)(void *);
 typedef void *(nvm_dev_dma_alloc_fn)(struct nvm_dev *, void *, gfp_t,
@@ -104,7 +103,6 @@ struct nvm_dev_ops {
 	nvm_get_chk_meta_fn	*get_chk_meta;
 
 	nvm_submit_io_fn	*submit_io;
-	nvm_submit_io_sync_fn	*submit_io_sync;
 
 	nvm_create_dma_pool_fn	*create_dma_pool;
 	nvm_destroy_dma_pool_fn	*destroy_dma_pool;

From patchwork Wed Jul 31 09:41:34 2019
X-Patchwork-Submitter: Hans Holmberg
X-Patchwork-Id: 11067421
From: Hans Holmberg
To: Matias Bjorling
Cc: Christoph Hellwig, Javier González, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, Hans Holmberg
Subject: [PATCH 2/4] lightnvm: move metadata mapping to lower level driver
Date: Wed, 31 Jul 2019 11:41:34 +0200
Message-Id: <1564566096-28756-3-git-send-email-hans@owltronix.com>
In-Reply-To: <1564566096-28756-1-git-send-email-hans@owltronix.com>
References: <1564566096-28756-1-git-send-email-hans@owltronix.com>

Now that blk_rq_map_kern can map both kmem and vmem, move internal
metadata mapping down to the lower level driver.

Signed-off-by: Hans Holmberg
Reviewed-by: Javier González
Reviewed-by: Christoph Hellwig
---
 drivers/lightnvm/core.c          |  16 +++---
 drivers/lightnvm/pblk-core.c     | 113 +++++----------------------------
 drivers/lightnvm/pblk-read.c     |  22 ++------
 drivers/lightnvm/pblk-recovery.c |  39 ++------------
 drivers/lightnvm/pblk-write.c    |  20 ++-----
 drivers/lightnvm/pblk.h          |   8 +--
 drivers/nvme/host/lightnvm.c     |  20 +++++--
 include/linux/lightnvm.h         |   6 +--
 8 files changed, 54 insertions(+), 190 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 01d098fb96ac..3cd03582a2ed 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -731,7 +731,7 @@ static int nvm_set_flags(struct nvm_geo *geo, struct nvm_rq *rqd)
 	return flags;
 }
 
-int nvm_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
+int nvm_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd, void *buf)
 {
 	struct nvm_dev *dev = tgt_dev->parent;
 	int ret;
@@ -745,7 +745,7 @@ int nvm_submit_io(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
 	rqd->flags = nvm_set_flags(&tgt_dev->geo, rqd);
 
 	/* In case of error, fail with right address format */
-	ret = dev->ops->submit_io(dev, rqd);
+	ret = dev->ops->submit_io(dev, rqd, buf);
 	if (ret)
 		nvm_rq_dev_to_tgt(tgt_dev, rqd);
 	return ret;
@@ -759,7 +759,8 @@ static void nvm_sync_end_io(struct nvm_rq *rqd)
 	complete(waiting);
 }
 
-static int nvm_submit_io_wait(struct nvm_dev *dev, struct nvm_rq *rqd)
+static int nvm_submit_io_wait(struct nvm_dev *dev, struct nvm_rq *rqd,
+			      void *buf)
 {
 	DECLARE_COMPLETION_ONSTACK(wait);
 	int ret = 0;
@@ -767,7 +768,7 @@ static int nvm_submit_io_wait(struct nvm_dev *dev, struct nvm_rq *rqd)
 	rqd->end_io = nvm_sync_end_io;
 	rqd->private = &wait;
 
-	ret = dev->ops->submit_io(dev, rqd);
+	ret = dev->ops->submit_io(dev, rqd, buf);
 	if (ret)
 		return ret;
 
@@ -776,7 +777,8 @@ static int nvm_submit_io_wait(struct nvm_dev *dev, struct nvm_rq *rqd)
 	return 0;
 }
 
-int nvm_submit_io_sync(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
+int nvm_submit_io_sync(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
+		       void *buf)
 {
 	struct nvm_dev *dev = tgt_dev->parent;
 	int ret;
@@ -789,7 +791,7 @@ int nvm_submit_io_sync(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd)
 	rqd->dev = tgt_dev;
 	rqd->flags = nvm_set_flags(&tgt_dev->geo, rqd);
 
-	ret = nvm_submit_io_wait(dev, rqd);
+	ret = nvm_submit_io_wait(dev, rqd, buf);
 
 	return ret;
 }
@@ -816,7 +818,7 @@ static int nvm_submit_io_sync_raw(struct nvm_dev *dev, struct nvm_rq *rqd)
 	rqd->dev = NULL;
 	rqd->flags = nvm_set_flags(&dev->geo, rqd);
 
-	return nvm_submit_io_wait(dev, rqd);
+	return nvm_submit_io_wait(dev, rqd, NULL);
 }
 
 static int nvm_bb_chunk_sense(struct nvm_dev *dev, struct ppa_addr ppa)
diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index f546e6f28b8a..a58d3c84a3f2 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -507,7 +507,7 @@ void pblk_set_sec_per_write(struct pblk *pblk, int sec_per_write)
 	pblk->sec_per_write = sec_per_write;
 }
 
-int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd)
+int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd, void *buf)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 
@@ -518,7 +518,7 @@ int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd)
 		return NVM_IO_ERR;
 #endif
 
-	return nvm_submit_io(dev, rqd);
+	return nvm_submit_io(dev, rqd, buf);
 }
 
 void pblk_check_chunk_state_update(struct pblk *pblk, struct nvm_rq *rqd)
@@ -541,7 +541,7 @@ void pblk_check_chunk_state_update(struct pblk *pblk, struct nvm_rq *rqd)
 	}
 }
 
-int pblk_submit_io_sync(struct pblk *pblk, struct nvm_rq *rqd)
+int pblk_submit_io_sync(struct pblk *pblk, struct nvm_rq *rqd, void *buf)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	int ret;
@@ -553,7 +553,7 @@ int pblk_submit_io_sync(struct pblk *pblk, struct nvm_rq *rqd)
 		return NVM_IO_ERR;
 #endif
 
-	ret = nvm_submit_io_sync(dev, rqd);
+	ret = nvm_submit_io_sync(dev, rqd, buf);
 
 	if (trace_pblk_chunk_state_enabled() && !ret &&
 	    rqd->opcode == NVM_OP_PWRITE)
@@ -562,65 +562,19 @@ int pblk_submit_io_sync(struct pblk *pblk, struct nvm_rq *rqd)
 	return ret;
 }
 
-int pblk_submit_io_sync_sem(struct pblk *pblk, struct nvm_rq *rqd)
+static int pblk_submit_io_sync_sem(struct pblk *pblk, struct nvm_rq *rqd,
+				   void *buf)
 {
 	struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd);
 	int ret;
 
 	pblk_down_chunk(pblk, ppa_list[0]);
-	ret = pblk_submit_io_sync(pblk, rqd);
+	ret = pblk_submit_io_sync(pblk, rqd, buf);
 	pblk_up_chunk(pblk, ppa_list[0]);
 
 	return ret;
 }
 
-static void pblk_bio_map_addr_endio(struct bio *bio)
-{
-	bio_put(bio);
-}
-
-struct bio *pblk_bio_map_addr(struct pblk *pblk, void *data,
-			      unsigned int nr_secs, unsigned int len,
-			      int alloc_type, gfp_t gfp_mask)
-{
-	struct nvm_tgt_dev *dev = pblk->dev;
-	void *kaddr = data;
-	struct page *page;
-	struct bio *bio;
-	int i, ret;
-
-	if (alloc_type == PBLK_KMALLOC_META)
-		return bio_map_kern(dev->q, kaddr, len, gfp_mask);
-
-	bio = bio_kmalloc(gfp_mask, nr_secs);
-	if (!bio)
-		return ERR_PTR(-ENOMEM);
-
-	for (i = 0; i < nr_secs; i++) {
-		page = vmalloc_to_page(kaddr);
-		if (!page) {
-			pblk_err(pblk, "could not map vmalloc bio\n");
-			bio_put(bio);
-			bio = ERR_PTR(-ENOMEM);
-			goto out;
-		}
-
-		ret = bio_add_pc_page(dev->q, bio, page, PAGE_SIZE, 0);
-		if (ret != PAGE_SIZE) {
-			pblk_err(pblk, "could not add page to bio\n");
-			bio_put(bio);
-			bio = ERR_PTR(-ENOMEM);
-			goto out;
-		}
-
-		kaddr += PAGE_SIZE;
-	}
-
-	bio->bi_end_io = pblk_bio_map_addr_endio;
-out:
-	return bio;
-}
-
 int pblk_calc_secs(struct pblk *pblk, unsigned long secs_avail,
 		   unsigned long secs_to_flush, bool skip_meta)
 {
@@ -722,9 +676,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line)
 
 int pblk_line_smeta_read(struct pblk *pblk, struct pblk_line *line)
 {
-	struct nvm_tgt_dev *dev = pblk->dev;
 	struct pblk_line_meta *lm = &pblk->lm;
-	struct bio *bio;
 	struct ppa_addr *ppa_list;
 	struct nvm_rq rqd;
 	u64 paddr = pblk_line_smeta_start(pblk, line);
@@ -736,16 +688,6 @@ int pblk_line_smeta_read(struct pblk *pblk, struct pblk_line *line)
 	if (ret)
 		return ret;
 
-	bio = bio_map_kern(dev->q, line->smeta, lm->smeta_len, GFP_KERNEL);
-	if (IS_ERR(bio)) {
-		ret = PTR_ERR(bio);
-		goto clear_rqd;
-	}
-
-	bio->bi_iter.bi_sector = 0; /* internal bio */
-	bio_set_op_attrs(bio, REQ_OP_READ, 0);
-
-	rqd.bio = bio;
 	rqd.opcode = NVM_OP_PREAD;
 	rqd.nr_ppas = lm->smeta_sec;
 	rqd.is_seq = 1;
@@ -754,10 +696,9 @@ int pblk_line_smeta_read(struct pblk *pblk, struct pblk_line *line)
 	for (i = 0; i < lm->smeta_sec; i++, paddr++)
 		ppa_list[i] = addr_to_gen_ppa(pblk, paddr, line->id);
 
-	ret = pblk_submit_io_sync(pblk, &rqd);
+	ret = pblk_submit_io_sync(pblk, &rqd, line->smeta);
 	if (ret) {
 		pblk_err(pblk, "smeta I/O submission failed: %d\n", ret);
-		bio_put(bio);
 		goto clear_rqd;
 	}
 
@@ -776,9 +717,7 @@ int pblk_line_smeta_read(struct pblk *pblk, struct pblk_line *line)
 static int pblk_line_smeta_write(struct pblk *pblk, struct pblk_line *line,
 				 u64 paddr)
 {
-	struct nvm_tgt_dev *dev = pblk->dev;
 	struct pblk_line_meta *lm = &pblk->lm;
-	struct bio *bio;
 	struct ppa_addr *ppa_list;
 	struct nvm_rq rqd;
 	__le64 *lba_list = emeta_to_lbas(pblk, line->emeta->buf);
@@ -791,16 +730,6 @@ static int pblk_line_smeta_write(struct pblk *pblk, struct pblk_line *line,
 	if (ret)
 		return ret;
 
-	bio = bio_map_kern(dev->q, line->smeta, lm->smeta_len, GFP_KERNEL);
-	if (IS_ERR(bio)) {
-		ret = PTR_ERR(bio);
-		goto clear_rqd;
-	}
-
-	bio->bi_iter.bi_sector = 0; /* internal bio */
-	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
-
-	rqd.bio = bio;
 	rqd.opcode = NVM_OP_PWRITE;
 	rqd.nr_ppas = lm->smeta_sec;
 	rqd.is_seq = 1;
@@ -814,10 +743,9 @@ static int pblk_line_smeta_write(struct pblk *pblk, struct pblk_line *line,
 		meta->lba = lba_list[paddr] = addr_empty;
 	}
 
-	ret = pblk_submit_io_sync_sem(pblk, &rqd);
+	ret = pblk_submit_io_sync_sem(pblk, &rqd, line->smeta);
 	if (ret) {
 		pblk_err(pblk, "smeta I/O submission failed: %d\n", ret);
-		bio_put(bio);
 		goto clear_rqd;
 	}
 
@@ -838,10 +766,8 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
-	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	struct pblk_line_meta *lm = &pblk->lm;
 	void *ppa_list_buf, *meta_list;
-	struct bio *bio;
 	struct ppa_addr *ppa_list;
 	struct nvm_rq rqd;
 	u64 paddr = line->emeta_ssec;
@@ -867,17 +793,6 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
 	rq_ppas = pblk_calc_secs(pblk, left_ppas, 0, false);
 	rq_len = rq_ppas * geo->csecs;
 
-	bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len,
-					l_mg->emeta_alloc_type, GFP_KERNEL);
-	if (IS_ERR(bio)) {
-		ret = PTR_ERR(bio);
-		goto free_rqd_dma;
-	}
-
-	bio->bi_iter.bi_sector = 0; /* internal bio */
-	bio_set_op_attrs(bio, REQ_OP_READ, 0);
-
-	rqd.bio = bio;
 	rqd.meta_list = meta_list;
 	rqd.ppa_list = ppa_list_buf;
 	rqd.dma_meta_list = dma_meta_list;
@@ -896,7 +811,6 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
 		while (test_bit(pos, line->blk_bitmap)) {
 			paddr += min;
 			if (pblk_boundary_paddr_checks(pblk, paddr)) {
-				bio_put(bio);
 				ret = -EINTR;
 				goto free_rqd_dma;
 			}
@@ -906,7 +820,6 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
 		}
 
 		if (pblk_boundary_paddr_checks(pblk, paddr + min)) {
-			bio_put(bio);
 			ret = -EINTR;
 			goto free_rqd_dma;
 		}
@@ -915,10 +828,9 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
 			ppa_list[i] = addr_to_gen_ppa(pblk, paddr, line_id);
 	}
 
-	ret = pblk_submit_io_sync(pblk, &rqd);
+	ret = pblk_submit_io_sync(pblk, &rqd, emeta_buf);
 	if (ret) {
 		pblk_err(pblk, "emeta I/O submission failed: %d\n", ret);
-		bio_put(bio);
 		goto free_rqd_dma;
 	}
 
@@ -963,7 +875,7 @@ static int pblk_blk_erase_sync(struct pblk *pblk, struct ppa_addr ppa)
 	/* The write thread schedules erases so that it minimizes disturbances
 	 * with writes. Thus, there is no need to take the LUN semaphore.
 	 */
-	ret = pblk_submit_io_sync(pblk, &rqd);
+	ret = pblk_submit_io_sync(pblk, &rqd, NULL);
 	rqd.private = pblk;
 	__pblk_end_io_erase(pblk, &rqd);
 
@@ -1792,7 +1704,7 @@ int pblk_blk_erase_async(struct pblk *pblk, struct ppa_addr ppa)
 	/* The write thread schedules erases so that it minimizes disturbances
 	 * with writes. Thus, there is no need to take the LUN semaphore.
 	 */
-	err = pblk_submit_io(pblk, rqd);
+	err = pblk_submit_io(pblk, rqd, NULL);
 	if (err) {
 		struct nvm_tgt_dev *dev = pblk->dev;
 		struct nvm_geo *geo = &dev->geo;
@@ -1923,7 +1835,6 @@ void pblk_line_close_meta(struct pblk *pblk, struct pblk_line *line)
 static void pblk_save_lba_list(struct pblk *pblk, struct pblk_line *line)
 {
 	struct pblk_line_meta *lm = &pblk->lm;
-	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	unsigned int lba_list_size = lm->emeta_len[2];
 	struct pblk_w_err_gc *w_err_gc = line->w_err_gc;
 	struct pblk_emeta *emeta = line->emeta;
diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
index d98ea392fe33..d572d4559e4e 100644
--- a/drivers/lightnvm/pblk-read.c
+++ b/drivers/lightnvm/pblk-read.c
@@ -342,7 +342,7 @@ void pblk_submit_read(struct pblk *pblk, struct bio *bio)
 		bio_put(int_bio);
 		int_bio = bio_clone_fast(bio, GFP_KERNEL, &pblk_bio_set);
 		goto split_retry;
-	} else if (pblk_submit_io(pblk, rqd)) {
+	} else if (pblk_submit_io(pblk, rqd, NULL)) {
 		/* Submitting IO to drive failed, let's report an error */
 		rqd->error = -ENODEV;
 		pblk_end_io_read(rqd);
@@ -419,7 +419,6 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq)
 {
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
-	struct bio *bio;
 	struct nvm_rq rqd;
 	int data_len;
 	int ret = NVM_IO_OK;
@@ -447,25 +446,12 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq)
 		goto out;
 
 	data_len = (gc_rq->secs_to_gc) * geo->csecs;
-	bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len,
-						PBLK_VMALLOC_META, GFP_KERNEL);
-	if (IS_ERR(bio)) {
-		pblk_err(pblk, "could not allocate GC bio (%lu)\n",
-								PTR_ERR(bio));
-		ret = PTR_ERR(bio);
-		goto err_free_dma;
-	}
-
-	bio->bi_iter.bi_sector = 0; /* internal bio */
-	bio_set_op_attrs(bio, REQ_OP_READ, 0);
-
 	rqd.opcode = NVM_OP_PREAD;
 	rqd.nr_ppas = gc_rq->secs_to_gc;
-	rqd.bio = bio;
 
-	if (pblk_submit_io_sync(pblk, &rqd)) {
+	if (pblk_submit_io_sync(pblk, &rqd, gc_rq->data)) {
 		ret = -EIO;
-		goto err_free_bio;
+		goto err_free_dma;
 	}
 
 	pblk_read_check_rand(pblk, &rqd, gc_rq->lba_list, gc_rq->nr_secs);
@@ -489,8 +475,6 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq)
 	pblk_free_rqd_meta(pblk, &rqd);
 	return ret;
 
-err_free_bio:
-	bio_put(bio);
 err_free_dma:
 	pblk_free_rqd_meta(pblk, &rqd);
 	return ret;
diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
index e6dda04de144..d5e210c3c5b7 100644
--- a/drivers/lightnvm/pblk-recovery.c
+++ b/drivers/lightnvm/pblk-recovery.c
@@ -178,12 +178,11 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line,
 	void *meta_list;
 	struct pblk_pad_rq *pad_rq;
 	struct nvm_rq *rqd;
-	struct bio *bio;
 	struct ppa_addr *ppa_list;
 	void *data;
 	__le64 *lba_list = emeta_to_lbas(pblk, line->emeta->buf);
 	u64 w_ptr = line->cur_sec;
-	int left_line_ppas, rq_ppas, rq_len;
+	int left_line_ppas, rq_ppas;
 	int i, j;
 	int ret = 0;
 
@@ -212,28 +211,15 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line,
 		goto fail_complete;
 	}
 
-	rq_len = rq_ppas * geo->csecs;
-
-	bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len,
-						PBLK_VMALLOC_META, GFP_KERNEL);
-	if (IS_ERR(bio)) {
-		ret = PTR_ERR(bio);
-		goto fail_complete;
-	}
-
-	bio->bi_iter.bi_sector = 0; /* internal bio */
-	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
-
 	rqd = pblk_alloc_rqd(pblk, PBLK_WRITE_INT);
 
 	ret = pblk_alloc_rqd_meta(pblk, rqd);
 	if (ret) {
 		pblk_free_rqd(pblk, rqd, PBLK_WRITE_INT);
-		bio_put(bio);
 		goto fail_complete;
 	}
 
-	rqd->bio = bio;
+	rqd->bio = NULL;
 	rqd->opcode = NVM_OP_PWRITE;
 	rqd->is_seq = 1;
 	rqd->nr_ppas = rq_ppas;
@@ -275,13 +261,12 @@ static int pblk_recov_pad_line(struct pblk *pblk, struct pblk_line *line,
 	kref_get(&pad_rq->ref);
 	pblk_down_chunk(pblk, ppa_list[0]);
 
-	ret = pblk_submit_io(pblk, rqd);
+	ret = pblk_submit_io(pblk, rqd, data);
 	if (ret) {
 		pblk_err(pblk, "I/O submission failed: %d\n", ret);
 		pblk_up_chunk(pblk, ppa_list[0]);
 		kref_put(&pad_rq->ref, pblk_recov_complete);
 		pblk_free_rqd(pblk, rqd, PBLK_WRITE_INT);
-		bio_put(bio);
 		goto fail_complete;
 	}
 
@@ -375,7 +360,6 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 	struct ppa_addr *ppa_list;
 	void *meta_list;
 	struct nvm_rq *rqd;
-	struct bio *bio;
 	void *data;
 	dma_addr_t dma_ppa_list, dma_meta_list;
 	__le64 *lba_list;
@@ -407,15 +391,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 	rq_len = rq_ppas * geo->csecs;
 
 retry_rq:
-	bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
-	if (IS_ERR(bio))
-		return PTR_ERR(bio);
-
-	bio->bi_iter.bi_sector = 0; /* internal bio */
-	bio_set_op_attrs(bio, REQ_OP_READ, 0);
-	bio_get(bio);
-
-	rqd->bio = bio;
+	rqd->bio = NULL;
 	rqd->opcode = NVM_OP_PREAD;
 	rqd->meta_list = meta_list;
 	rqd->nr_ppas = rq_ppas;
@@ -445,10 +421,9 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 				addr_to_gen_ppa(pblk, paddr + j, line->id);
 	}
 
-	ret = pblk_submit_io_sync(pblk, rqd);
+	ret = pblk_submit_io_sync(pblk, rqd, data);
 	if (ret) {
 		pblk_err(pblk, "I/O submission failed: %d\n", ret);
-		bio_put(bio);
 		return ret;
 	}
 
@@ -460,24 +435,20 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
 
 		if (padded) {
 			pblk_log_read_err(pblk, rqd);
-			bio_put(bio);
 			return -EINTR;
 		}
 
 		pad_distance = pblk_pad_distance(pblk, line);
 		ret = pblk_recov_pad_line(pblk, line, pad_distance);
 		if (ret) {
-			bio_put(bio);
 			return ret;
 		}
 
 		padded = true;
-		bio_put(bio);
 		goto retry_rq;
 	}
 
 	pblk_get_packed_meta(pblk, rqd);
-	bio_put(bio);
 
 	for (i = 0; i < rqd->nr_ppas; i++) {
 		struct pblk_sec_meta *meta = pblk_get_meta(pblk, meta_list, i);
diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
index 4e63f9b5954c..b9a2aeba95ab 100644
--- a/drivers/lightnvm/pblk-write.c
+++ b/drivers/lightnvm/pblk-write.c
@@ -373,7 +373,6 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
 	struct pblk_emeta *emeta = meta_line->emeta;
 	struct ppa_addr *ppa_list;
 	struct pblk_g_ctx *m_ctx;
-	struct bio *bio;
 	struct nvm_rq *rqd;
 	void *data;
 	u64 paddr;
@@ -391,20 +390,9 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
 	rq_len = rq_ppas * geo->csecs;
 	data = ((void *)emeta->buf) + emeta->mem;
 
-	bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len,
-					l_mg->emeta_alloc_type, GFP_KERNEL);
-	if (IS_ERR(bio)) {
-		pblk_err(pblk, "failed to map emeta io");
-		ret = PTR_ERR(bio);
-		goto fail_free_rqd;
-	}
-	bio->bi_iter.bi_sector = 0; /* internal bio */
-	bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
-	rqd->bio = bio;
-
 	ret = pblk_alloc_w_rq(pblk, rqd, rq_ppas, pblk_end_io_write_meta);
 	if (ret)
-		goto fail_free_bio;
+		goto fail_free_rqd;
 
 	ppa_list = nvm_rq_to_ppa_list(rqd);
 	for (i = 0; i < rqd->nr_ppas; ) {
@@ -423,7 +411,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
 
 	pblk_down_chunk(pblk, ppa_list[0]);
 
-	ret = pblk_submit_io(pblk, rqd);
+	ret = pblk_submit_io(pblk, rqd, data);
 	if (ret) {
 		pblk_err(pblk, "emeta I/O submission failed: %d\n", ret);
 		goto fail_rollback;
@@ -437,8 +425,6 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
 	pblk_dealloc_page(pblk, meta_line, rq_ppas);
 	list_add(&meta_line->list, &meta_line->list);
 	spin_unlock(&l_mg->close_lock);
-fail_free_bio:
-	bio_put(bio);
 fail_free_rqd:
 	pblk_free_rqd(pblk, rqd, PBLK_WRITE_INT);
 	return ret;
@@ -523,7 +509,7 @@ static int pblk_submit_io_set(struct pblk *pblk, struct nvm_rq *rqd)
 	meta_line = pblk_should_submit_meta_io(pblk, rqd);
 
 	/* Submit data write for current data line */
-	err = pblk_submit_io(pblk, rqd);
+	err = pblk_submit_io(pblk, rqd, NULL);
 	if (err) {
 		pblk_err(pblk, "data I/O submission failed: %d\n", err);
 		return NVM_IO_ERR;
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index a67855387f53..d515d3409a74 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -783,14 +783,10 @@ struct nvm_chk_meta *pblk_chunk_get_off(struct pblk *pblk,
 					struct ppa_addr ppa);
 void pblk_log_write_err(struct pblk *pblk, struct nvm_rq *rqd);
 void pblk_log_read_err(struct pblk *pblk, struct nvm_rq *rqd);
-int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd);
-int pblk_submit_io_sync(struct pblk *pblk, struct nvm_rq *rqd);
-int pblk_submit_io_sync_sem(struct pblk *pblk, struct nvm_rq *rqd);
+int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd, void *buf);
+int pblk_submit_io_sync(struct pblk *pblk, struct nvm_rq *rqd, void *buf);
 int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line);
 void pblk_check_chunk_state_update(struct pblk *pblk, struct nvm_rq *rqd);
-struct bio *pblk_bio_map_addr(struct pblk *pblk, void *data,
-			      unsigned int nr_secs, unsigned int len,
-			      int alloc_type, gfp_t gfp_mask);
 struct pblk_line *pblk_line_get(struct pblk *pblk);
 struct pblk_line *pblk_line_get_first_data(struct pblk *pblk);
 struct pblk_line *pblk_line_replace_data(struct pblk *pblk);
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index d6f121452d5d..ec46693f6b64 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -667,11 +667,14 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
 	return rq;
 }
 
-static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
+static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd,
+			      void *buf)
 {
+	struct nvm_geo *geo = &dev->geo;
 	struct request_queue *q = dev->q;
 	struct nvme_nvm_command *cmd;
 	struct request *rq;
+	int ret;
 
 	cmd = kzalloc(sizeof(struct nvme_nvm_command), GFP_KERNEL);
 	if (!cmd)
@@ -679,8 +682,15 @@ static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
 
 	rq = nvme_nvm_alloc_request(q, rqd, cmd);
 	if (IS_ERR(rq)) {
-		kfree(cmd);
-		return PTR_ERR(rq);
+		ret = PTR_ERR(rq);
+		goto err_free_cmd;
+	}
+
+	if (buf) {
+		ret = blk_rq_map_kern(q, rq, buf, geo->csecs * rqd->nr_ppas,
+				GFP_KERNEL);
+		if (ret)
+			goto err_free_cmd;
 	}
 
 	rq->end_io_data = rqd;
@@ -688,6 +698,10 @@ static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
 	blk_execute_rq_nowait(q, NULL, rq, 0, nvme_nvm_end_io);
 
 	return 0;
+
+err_free_cmd:
+	kfree(cmd);
+	return ret;
 }
 
 static void *nvme_nvm_create_dma_pool(struct nvm_dev *nvmdev, char *name,
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 8891647b24b1..ee8ec2e68055 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -88,7 +88,7 @@ typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
 typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
 typedef int (nvm_get_chk_meta_fn)(struct nvm_dev *, sector_t, int,
 							struct nvm_chk_meta *);
-typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
+typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *, void *);
 typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *, int);
 typedef void (nvm_destroy_dma_pool_fn)(void *);
 typedef void *(nvm_dev_dma_alloc_fn)(struct nvm_dev *, void *, gfp_t,
@@ -680,8 +680,8 @@ extern int nvm_get_chunk_meta(struct nvm_tgt_dev *, struct ppa_addr, int,
 			      struct nvm_chk_meta *);
 extern int nvm_set_chunk_meta(struct nvm_tgt_dev *, struct ppa_addr *,
 			      int, int);
-extern int nvm_submit_io(struct nvm_tgt_dev *, struct nvm_rq *);
-extern int nvm_submit_io_sync(struct nvm_tgt_dev *, struct nvm_rq *);
+extern int nvm_submit_io(struct nvm_tgt_dev *, struct nvm_rq *, void *);
+extern int nvm_submit_io_sync(struct nvm_tgt_dev *, struct nvm_rq *, void *);
 extern void nvm_end_io(struct nvm_rq *);
 
 #else /* CONFIG_NVM */

From patchwork Wed Jul 31 09:41:35 2019
X-Patchwork-Submitter: Hans Holmberg
X-Patchwork-Id: 11067425
From: Hans Holmberg
To: Matias Bjorling
Cc: Christoph Hellwig, Javier González, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, Hans Holmberg
Subject: [PATCH 3/4] lightnvm: pblk: use kvmalloc for metadata
Date: Wed, 31 Jul 2019 11:41:35 +0200
Message-Id: <1564566096-28756-4-git-send-email-hans@owltronix.com>
In-Reply-To: <1564566096-28756-1-git-send-email-hans@owltronix.com>
References: <1564566096-28756-1-git-send-email-hans@owltronix.com>

There is no reason now not to use kvmalloc, so replace the internal
metadata allocation scheme.

Signed-off-by: Hans Holmberg
Reviewed-by: Javier González
Reviewed-by: Christoph Hellwig
---
 drivers/lightnvm/pblk-core.c |  3 +--
 drivers/lightnvm/pblk-gc.c   | 19 ++++++++-----------
 drivers/lightnvm/pblk-init.c | 38 ++++++++++----------------------------
 drivers/lightnvm/pblk.h      | 23 -----------------------
 4 files changed, 19 insertions(+), 64 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index a58d3c84a3f2..b413bafe93fd 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -1839,8 +1839,7 @@ static void pblk_save_lba_list(struct pblk *pblk, struct pblk_line *line)
 	struct pblk_w_err_gc *w_err_gc = line->w_err_gc;
 	struct pblk_emeta *emeta = line->emeta;
 
-	w_err_gc->lba_list = pblk_malloc(lba_list_size,
-					 l_mg->emeta_alloc_type, GFP_KERNEL);
+	w_err_gc->lba_list = kvmalloc(lba_list_size, GFP_KERNEL);
 	memcpy(w_err_gc->lba_list, emeta_to_lbas(pblk, emeta->buf),
 				lba_list_size);
 }
diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
index 63ee205b41c4..2581eebcfc41 100644
--- a/drivers/lightnvm/pblk-gc.c
+++ b/drivers/lightnvm/pblk-gc.c
@@ -132,14 +132,12 @@ static __le64 *get_lba_list_from_emeta(struct pblk *pblk,
 				       struct pblk_line *line)
 {
 	struct line_emeta *emeta_buf;
-	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	struct pblk_line_meta *lm = &pblk->lm;
 	unsigned int lba_list_size = lm->emeta_len[2];
 	__le64 *lba_list;
 	int ret;
 
-	emeta_buf = pblk_malloc(lm->emeta_len[0],
-				l_mg->emeta_alloc_type, GFP_KERNEL);
+	emeta_buf = kvmalloc(lm->emeta_len[0], GFP_KERNEL);
 	if (!emeta_buf)
 		return NULL;
 
@@ -147,7 +145,7 @@ static __le64 *get_lba_list_from_emeta(struct pblk *pblk,
 	if (ret) {
 		pblk_err(pblk, "line %d read emeta failed (%d)\n",
 				line->id, ret);
-		pblk_mfree(emeta_buf, l_mg->emeta_alloc_type);
+		kvfree(emeta_buf);
 		return NULL;
 	}
 
@@ -161,16 +159,16 @@ static __le64 *get_lba_list_from_emeta(struct pblk *pblk,
 	if (ret) {
 		pblk_err(pblk, "inconsistent emeta (line %d)\n",
 				line->id);
-		pblk_mfree(emeta_buf, l_mg->emeta_alloc_type);
+		kvfree(emeta_buf);
 		return NULL;
 	}
 
-	lba_list = pblk_malloc(lba_list_size,
-			       l_mg->emeta_alloc_type, GFP_KERNEL);
+	lba_list = kvmalloc(lba_list_size, GFP_KERNEL);
+
 	if (lba_list)
 		memcpy(lba_list, emeta_to_lbas(pblk, emeta_buf), lba_list_size);
 
-	pblk_mfree(emeta_buf, l_mg->emeta_alloc_type);
+	kvfree(emeta_buf);
 
 	return lba_list;
 }
@@ -181,7 +179,6 @@ static void pblk_gc_line_prepare_ws(struct work_struct *work)
 									ws);
 	struct pblk *pblk = line_ws->pblk;
 	struct pblk_line *line = line_ws->line;
-	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	struct pblk_line_meta *lm = &pblk->lm;
 	struct nvm_tgt_dev *dev = pblk->dev;
 	struct nvm_geo *geo = &dev->geo;
@@ -272,7 +269,7 @@ static void pblk_gc_line_prepare_ws(struct work_struct *work)
 	goto next_rq;
 
 out:
-	pblk_mfree(lba_list, l_mg->emeta_alloc_type);
+	kvfree(lba_list);
 	kfree(line_ws);
 	kfree(invalid_bitmap);
 
@@ -286,7 +283,7 @@ static void pblk_gc_line_prepare_ws(struct work_struct *work)
 fail_free_gc_rq:
 	kfree(gc_rq);
 fail_free_lba_list:
-	pblk_mfree(lba_list, l_mg->emeta_alloc_type);
+	kvfree(lba_list);
 fail_free_invalid_bitmap:
 	kfree(invalid_bitmap);
 fail_free_ws:
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index b351c7f002de..9a967a2e83dd 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -543,7 +543,7 @@ static void pblk_line_mg_free(struct pblk *pblk)
 
 	for (i = 0; i < PBLK_DATA_LINES; i++) {
 		kfree(l_mg->sline_meta[i]);
-		pblk_mfree(l_mg->eline_meta[i]->buf, l_mg->emeta_alloc_type);
+		kvfree(l_mg->eline_meta[i]->buf);
 		kfree(l_mg->eline_meta[i]);
 	}
 
@@ -560,7 +560,7 @@ static void pblk_line_meta_free(struct pblk_line_mgmt *l_mg,
 	kfree(line->erase_bitmap);
 	kfree(line->chks);
 
-	pblk_mfree(w_err_gc->lba_list, l_mg->emeta_alloc_type);
+	kvfree(w_err_gc->lba_list);
 	kfree(w_err_gc);
 }
 
@@ -890,29 +890,14 @@ static int pblk_line_mg_init(struct pblk *pblk)
 		if (!emeta)
 			goto fail_free_emeta;
 
-		if (lm->emeta_len[0] > KMALLOC_MAX_CACHE_SIZE) {
-			l_mg->emeta_alloc_type = PBLK_VMALLOC_META;
-
-			emeta->buf = vmalloc(lm->emeta_len[0]);
-			if (!emeta->buf) {
-				kfree(emeta);
-				goto fail_free_emeta;
-			}
-
-			emeta->nr_entries = lm->emeta_sec[0];
-			l_mg->eline_meta[i] = emeta;
-		} else {
-			l_mg->emeta_alloc_type = PBLK_KMALLOC_META;
-
-			emeta->buf = kmalloc(lm->emeta_len[0], GFP_KERNEL);
-			if (!emeta->buf) {
-				kfree(emeta);
-				goto fail_free_emeta;
-			}
-
-			emeta->nr_entries = lm->emeta_sec[0];
-			l_mg->eline_meta[i] = emeta;
+		emeta->buf = kvmalloc(lm->emeta_len[0], GFP_KERNEL);
+		if (!emeta->buf) {
+			kfree(emeta);
+			goto fail_free_emeta;
 		}
+
+		emeta->nr_entries = lm->emeta_sec[0];
+		l_mg->eline_meta[i] = emeta;
 	}
 
 	for (i = 0; i < l_mg->nr_lines; i++)
@@ -926,10 +911,7 @@ static int pblk_line_mg_init(struct pblk *pblk)
 
 fail_free_emeta:
 	while (--i >= 0) {
-		if (l_mg->emeta_alloc_type == PBLK_VMALLOC_META)
-			vfree(l_mg->eline_meta[i]->buf);
-		else
-			kfree(l_mg->eline_meta[i]->buf);
+		kvfree(l_mg->eline_meta[i]->buf);
 		kfree(l_mg->eline_meta[i]);
 	}
 
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index d515d3409a74..86ffa875bfe1 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -482,11 +482,6 @@ struct pblk_line {
 #define PBLK_DATA_LINES 4
 
 enum {
-	PBLK_KMALLOC_META = 1,
-	PBLK_VMALLOC_META = 2,
-};
-
-enum {
 	PBLK_EMETA_TYPE_HEADER = 1,	/* struct line_emeta first sector */
 	PBLK_EMETA_TYPE_LLBA = 2,	/* lba list - type: __le64 */
 	PBLK_EMETA_TYPE_VSC = 3,	/* vsc list - type: __le32 */
@@ -521,9 +516,6 @@ struct pblk_line_mgmt {
 
 	__le32 *vsc_list;		/* Valid sector counts for all lines */
 
-	/* Metadata allocation type: VMALLOC | KMALLOC */
-	int emeta_alloc_type;
-
 	/* Pre-allocated metadata for data lines */
 	struct pblk_smeta *sline_meta[PBLK_DATA_LINES];
 	struct pblk_emeta *eline_meta[PBLK_DATA_LINES];
 
@@ -934,21 +926,6 @@ void pblk_rl_werr_line_out(struct pblk_rl *rl);
 int pblk_sysfs_init(struct gendisk *tdisk);
 void pblk_sysfs_exit(struct gendisk *tdisk);
 
-static inline void *pblk_malloc(size_t size, int type, gfp_t flags)
-{
-	if (type == PBLK_KMALLOC_META)
-		return kmalloc(size, flags);
-	return vmalloc(size);
-}
-
-static inline void pblk_mfree(void *ptr, int type)
-{
-	if (type == PBLK_KMALLOC_META)
-		kfree(ptr);
-	else
-		vfree(ptr);
-}
-
 static inline struct nvm_rq *nvm_rq_from_c_ctx(void *c_ctx)
 {
 	return c_ctx - sizeof(struct nvm_rq);

From patchwork Wed Jul 31 09:41:36 2019
X-Patchwork-Submitter: Hans Holmberg
X-Patchwork-Id: 11067423
From: Hans Holmberg
To: Matias Bjorling
Cc: Christoph Hellwig, Javier González, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, Hans Holmberg
Subject: [PATCH 4/4] block: stop exporting bio_map_kern
Date: Wed, 31 Jul 2019 11:41:36 +0200
Message-Id: <1564566096-28756-5-git-send-email-hans@owltronix.com>
In-Reply-To: <1564566096-28756-1-git-send-email-hans@owltronix.com>
References: <1564566096-28756-1-git-send-email-hans@owltronix.com>

Now that there are no module users left of bio_map_kern, stop
exporting the symbol.

Signed-off-by: Hans Holmberg
Reviewed-by: Javier González
Reviewed-by: Christoph Hellwig
---
 block/bio.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 299a0e7651ec..96ca0b4e73bb 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1521,7 +1521,6 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
 	bio->bi_end_io = bio_map_kern_endio;
 	return bio;
 }
-EXPORT_SYMBOL(bio_map_kern);
 
 static void bio_copy_kern_endio(struct bio *bio)
 {
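
[Editor's note] For readers skimming the series, the pattern it converges on can be summed up in a short sketch. This is illustrative only and not part of the patches: example_submit_meta(), its queue/request arguments, and the error handling are hypothetical; only kvmalloc()/kvfree() and blk_rq_map_kern() are the real interfaces the series relies on. Metadata is allocated with kvmalloc() (kmalloc or vmalloc, transparently), the raw pointer is passed down, and the lowest-level driver maps it into the request, so no caller has to build bios by hand anymore.

/* Hypothetical sketch of the allocation/mapping pattern used by the series. */
#include <linux/blkdev.h>
#include <linux/mm.h>
#include <linux/slab.h>

static int example_submit_meta(struct request_queue *q, struct request *rq,
			       unsigned int meta_len)
{
	void *buf;
	int ret;

	/* kvmalloc() picks kmalloc or vmalloc depending on the size. */
	buf = kvmalloc(meta_len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* blk_rq_map_kern() maps both kmem and vmem buffers into the request. */
	ret = blk_rq_map_kern(q, rq, buf, meta_len, GFP_KERNEL);
	if (ret) {
		kvfree(buf);
		return ret;
	}

	/* ... submit rq, wait for completion, consume the metadata ... */

	kvfree(buf);
	return 0;
}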