From patchwork Thu Feb 18 12:56:36 2016
X-Patchwork-Submitter: Javier González
X-Patchwork-Id: 8349511
From: Javier González
To: mb@lightnvm.io
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Javier González
Subject: [PATCH V2] lightnvm: generalize rrpc ppa calculations
Date: Thu, 18 Feb 2016 13:56:36 +0100
Message-Id: <1455800196-4449-1-git-send-email-javier@javigon.com>
X-Mailer: git-send-email 2.1.4

From: Javier González

In rrpc, some calculations assume a certain configuration (e.g., 1 LUN,
1 sector per page). The reason is that we have so far used a simple
QEMU configuration to test core features, both in LightNVM generally
and in rrpc specifically. This patch relaxes these assumptions and
generalizes the calculations in order to support real hardware.

Note that more complex QEMU configurations, which allow such hardware
to be simulated, have also been pushed to the qemu-nvme repository
implementing LightNVM support, available under the Open-Channel SSD
project on GitHub [1].

[1] https://github.com/OpenChannelSSD/qemu-nvme

V2: Use the right operations to calculate the modulus of 64-bit
integers (see the stand-alone sketch appended after the diff).

Signed-off-by: Javier González
---
 drivers/lightnvm/rrpc.c | 48 +++++++++++++++++++++++++++++++-----------------
 drivers/lightnvm/rrpc.h |  9 +++++++++
 2 files changed, 40 insertions(+), 17 deletions(-)

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 775bf6c2..8234378 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -38,7 +38,7 @@ static void rrpc_page_invalidate(struct rrpc *rrpc, struct rrpc_addr *a)
 	spin_lock(&rblk->lock);
-	div_u64_rem(a->addr, rrpc->dev->pgs_per_blk, &pg_offset);
+	div_u64_rem(a->addr, rrpc->dev->sec_per_blk, &pg_offset);
 	WARN_ON(test_and_set_bit(pg_offset, rblk->invalid_pages));
 	rblk->nr_invalid_pages++;
@@ -113,14 +113,24 @@ static void rrpc_discard(struct rrpc *rrpc, struct bio *bio)
 static int block_is_full(struct rrpc *rrpc, struct rrpc_block *rblk)
 {
-	return (rblk->next_page == rrpc->dev->pgs_per_blk);
+	return (rblk->next_page == rrpc->dev->sec_per_blk);
 }
+/* Calculate relative addr for the given block, considering instantiated LUNs */
+static u64 block_to_rel_addr(struct rrpc *rrpc, struct rrpc_block *rblk)
+{
+	struct nvm_block *blk = rblk->parent;
+	int lun_blk = blk->id % (rrpc->dev->blks_per_lun * rrpc->nr_luns);
+
+	return lun_blk * rrpc->dev->sec_per_blk;
+}
+
+/* Calculate global addr for the given block */
 static u64 block_to_addr(struct rrpc *rrpc, struct rrpc_block *rblk)
 {
 	struct nvm_block *blk = rblk->parent;
-	return blk->id * rrpc->dev->pgs_per_blk;
+	return blk->id * rrpc->dev->sec_per_blk;
 }
 static struct ppa_addr linear_to_generic_addr(struct nvm_dev *dev,
@@ -136,7 +146,7 @@ static struct ppa_addr linear_to_generic_addr(struct nvm_dev *dev,
 	l.g.sec = secs;
 	sector_div(ppa, dev->sec_per_pg);
-	div_u64_rem(ppa, dev->sec_per_blk, &pgs);
+	div_u64_rem(ppa, dev->pgs_per_blk, &pgs);
 	l.g.pg = pgs;
 	sector_div(ppa, dev->pgs_per_blk);
@@ -191,12 +201,12 @@ static struct rrpc_block *rrpc_get_blk(struct rrpc *rrpc, struct rrpc_lun *rlun,
 		return NULL;
 	}
-	rblk = &rlun->blocks[blk->id];
+	rblk = rrpc_get_rblk(rlun, blk->id);
 	list_add_tail(&rblk->list, &rlun->open_list);
 	spin_unlock(&lun->lock);
 	blk->priv = rblk;
-	bitmap_zero(rblk->invalid_pages, rrpc->dev->pgs_per_blk);
+	bitmap_zero(rblk->invalid_pages, rrpc->dev->sec_per_blk);
 	rblk->next_page = 0;
 	rblk->nr_invalid_pages = 0;
 	atomic_set(&rblk->data_cmnt_size, 0);
@@ -286,11 +296,11 @@ static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
 	struct bio *bio;
 	struct page *page;
 	int slot;
-	int nr_pgs_per_blk = rrpc->dev->pgs_per_blk;
+	int nr_sec_per_blk = rrpc->dev->sec_per_blk;
 	u64 phys_addr;
 	DECLARE_COMPLETION_ONSTACK(wait);
-	if (bitmap_full(rblk->invalid_pages, nr_pgs_per_blk))
+	if (bitmap_full(rblk->invalid_pages, nr_sec_per_blk))
 		return 0;
 	bio = bio_alloc(GFP_NOIO, 1);
@@ -306,10 +316,10 @@ static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
 	}
 	while ((slot = find_first_zero_bit(rblk->invalid_pages,
-					nr_pgs_per_blk)) < nr_pgs_per_blk) {
+					nr_sec_per_blk)) < nr_sec_per_blk) {
 		/* Lock laddr */
-		phys_addr = (rblk->parent->id * nr_pgs_per_blk) + slot;
+		phys_addr = rblk->parent->id * nr_sec_per_blk + slot;
 try:
 		spin_lock(&rrpc->rev_lock);
@@ -381,7 +391,7 @@ finished:
 	mempool_free(page, rrpc->page_pool);
 	bio_put(bio);
-	if (!bitmap_full(rblk->invalid_pages, nr_pgs_per_blk)) {
+	if (!bitmap_full(rblk->invalid_pages, nr_sec_per_blk)) {
 		pr_err("nvm: failed to garbage collect block\n");
 		return -EIO;
 	}
@@ -677,7 +687,7 @@ static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd,
 		lun = rblk->parent->lun;
 		cmnt_size = atomic_inc_return(&rblk->data_cmnt_size);
-		if (unlikely(cmnt_size == rrpc->dev->pgs_per_blk))
+		if (unlikely(cmnt_size == rrpc->dev->sec_per_blk))
 			rrpc_run_gc(rrpc, rblk);
 	}
 }
@@ -1014,6 +1024,7 @@ static int rrpc_l2p_update(u64 slba, u32 nlb, __le64 *entries, void *private)
 	for (i = 0; i < nlb; i++) {
 		u64 pba = le64_to_cpu(entries[i]);
+		unsigned int mod;
 		/* LNVM treats address-spaces as silos, LBA and PBA are
 		 * equally large and zero-indexed.
 		 */
@@ -1029,8 +1040,10 @@ static int rrpc_l2p_update(u64 slba, u32 nlb, __le64 *entries, void *private)
 		if (!pba)
 			continue;
+		div_u64_rem(pba, rrpc->nr_sects, &mod);
+
 		addr[i].addr = pba;
-		raddr[pba].addr = slba + i;
+		raddr[mod].addr = slba + i;
 	}
 	return 0;
@@ -1137,7 +1150,7 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
 	struct rrpc_lun *rlun;
 	int i, j;
-	if (dev->pgs_per_blk > MAX_INVALID_PAGES_STORAGE * BITS_PER_LONG) {
+	if (dev->sec_per_blk > MAX_INVALID_PAGES_STORAGE * BITS_PER_LONG) {
 		pr_err("rrpc: number of pages per block too high.");
 		return -EINVAL;
 	}
@@ -1238,10 +1251,11 @@ static void rrpc_block_map_update(struct rrpc *rrpc, struct rrpc_block *rblk)
 	struct nvm_dev *dev = rrpc->dev;
 	int offset;
 	struct rrpc_addr *laddr;
-	u64 paddr, pladdr;
+	u64 bpaddr, paddr, pladdr;
-	for (offset = 0; offset < dev->pgs_per_blk; offset++) {
-		paddr = block_to_addr(rrpc, rblk) + offset;
+	bpaddr = block_to_rel_addr(rrpc, rblk);
+	for (offset = 0; offset < dev->sec_per_blk; offset++) {
+		paddr = bpaddr + offset;
 		pladdr = rrpc->rev_trans_map[paddr].addr;
 		if (pladdr == ADDR_EMPTY)
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
index 3989d65..855f4a5 100644
--- a/drivers/lightnvm/rrpc.h
+++ b/drivers/lightnvm/rrpc.h
@@ -156,6 +156,15 @@ struct rrpc_rev_addr {
 	u64 addr;
 };
+static inline struct rrpc_block *rrpc_get_rblk(struct rrpc_lun *rlun,
+							 int blk_id)
+{
+	struct rrpc *rrpc = rlun->rrpc;
+	int lun_blk = blk_id % rrpc->dev->blks_per_lun;
+
+	return &rlun->blocks[lun_blk];
+}
+
 static inline sector_t rrpc_get_laddr(struct bio *bio)
 {
 	return bio->bi_iter.bi_sector / NR_PHY_IN_LOG;
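
Not part of the patch: the following is a minimal, stand-alone C sketch
of the address math the patch generalizes. The geometry values and the
ex_div_u64_rem() helper are made up for illustration; in the kernel,
div_u64_rem() and sector_div() are used instead of a raw 64-bit '/' or
'%' so the arithmetic does not pull in compiler helpers on 32-bit
builds. The decomposition mirrors linear_to_generic_addr() for the
sector and page levels shown in the diff and continues the same pattern
for block and LUN; treat it as a sketch, not the kernel code.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's div_u64_rem(): quotient returned, remainder stored. */
static uint64_t ex_div_u64_rem(uint64_t dividend, uint32_t divisor,
                               uint32_t *remainder)
{
        *remainder = (uint32_t)(dividend % divisor);
        return dividend / divisor;
}

int main(void)
{
        /* Hypothetical geometry, not taken from any real device. */
        const uint32_t sec_per_pg   = 4;
        const uint32_t pgs_per_blk  = 256;
        const uint32_t blks_per_lun = 1024;
        const uint32_t nr_luns      = 4;
        const uint32_t sec_per_blk  = sec_per_pg * pgs_per_blk;

        uint64_t ppa = 3000003ULL;      /* arbitrary device-global sector */
        uint32_t sec, pg, blk, lun;

        /* Peel off one level at a time: sector within page, page within
         * block (pgs_per_blk, as fixed by this patch), block within LUN,
         * and finally the LUN index.
         */
        ppa = ex_div_u64_rem(ppa, sec_per_pg, &sec);
        ppa = ex_div_u64_rem(ppa, pgs_per_blk, &pg);
        ppa = ex_div_u64_rem(ppa, blks_per_lun, &blk);
        lun = (uint32_t)ppa;

        printf("lun=%u blk=%u pg=%u sec=%u\n", lun, blk, pg, sec);

        /* block_to_rel_addr()-style math: fold a global block id into the
         * LUNs instantiated by the target, then scale by sectors per block.
         */
        uint64_t global_blk_id = 3000;
        uint64_t lun_blk  = global_blk_id % ((uint64_t)blks_per_lun * nr_luns);
        uint64_t rel_addr = lun_blk * sec_per_blk;

        printf("block %llu -> relative sector addr %llu\n",
               (unsigned long long)global_blk_id,
               (unsigned long long)rel_addr);
        return 0;
}

With this made-up geometry, sector 3000003 decomposes to lun=2 blk=881
pg=176 sec=3, and block 3000 maps to relative sector address 3072000.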