From patchwork Mon Jan 4 09:54:49 2016
X-Patchwork-Submitter: Wenwei Tao
X-Patchwork-Id: 7946421
From: Wenwei Tao <ww.tao0320@gmail.com>
To: mb@lightnvm.io
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [PATCH] lightnvm: add full block direct to the gc list
Date: Mon, 4 Jan 2016 17:54:49 +0800
Message-Id: <1451901289-27149-1-git-send-email-ww.tao0320@gmail.com>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: linux-block@vger.kernel.org

We allocate a gcb to queue a full block onto the gc list, but the gcb
allocation may fail. If that happens, the block never gets reclaimed.
So add the full block directly to the gc list and omit the queuing step.
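For illustration only (the actual change is in the diff below), the old and
new paths compare roughly as follows. The helper names old_gc_sched() and
new_gc_sched() are invented for this sketch; the structures, fields and calls
are the ones the driver already uses.

	/*
	 * Old path: needs a gcb and a work item. The GFP_ATOMIC mempool
	 * allocation can fail, in which case we only log an error and the
	 * full block never reaches the gc list.
	 */
	static void old_gc_sched(struct rrpc *rrpc, struct rrpc_block *rblk)
	{
		struct rrpc_block_gc *gcb;

		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
		if (!gcb) {
			pr_err("rrpc: unable to queue block for gc.");
			return;		/* block is lost to gc */
		}

		gcb->rrpc = rrpc;
		gcb->rblk = rblk;
		INIT_WORK(&gcb->ws_gc, rrpc_gc_queue);
		queue_work(rrpc->kgc_wq, &gcb->ws_gc);
	}

	/*
	 * New path: no allocation and no work item. Take the lun lock and
	 * put the block straight onto the per-lun prio list.
	 */
	static void new_gc_sched(struct rrpc *rrpc, struct rrpc_block *rblk)
	{
		struct nvm_lun *lun = rblk->parent->lun;
		struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];

		spin_lock(&rlun->lock);
		list_add_tail(&rblk->prio, &rlun->prio_list);
		spin_unlock(&rlun->lock);
	}

The trade-off is a short spin_lock in the write-completion path instead of a
workqueue hop, but it removes the allocation that could otherwise keep a full
block from ever being reclaimed.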
Signed-off-by: Wenwei Tao <ww.tao0320@gmail.com>
---
 drivers/lightnvm/rrpc.c | 47 ++++++++++-------------------------------------
 1 file changed, 10 insertions(+), 37 deletions(-)

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 40b0309..27fb98d 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -475,24 +475,6 @@ static void rrpc_lun_gc(struct work_struct *work)
 	/* TODO: Hint that request queue can be started again */
 }
 
-static void rrpc_gc_queue(struct work_struct *work)
-{
-	struct rrpc_block_gc *gcb = container_of(work, struct rrpc_block_gc,
-								ws_gc);
-	struct rrpc *rrpc = gcb->rrpc;
-	struct rrpc_block *rblk = gcb->rblk;
-	struct nvm_lun *lun = rblk->parent->lun;
-	struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
-
-	spin_lock(&rlun->lock);
-	list_add_tail(&rblk->prio, &rlun->prio_list);
-	spin_unlock(&rlun->lock);
-
-	mempool_free(gcb, rrpc->gcb_pool);
-	pr_debug("nvm: block '%lu' is full, allow GC (sched)\n",
-							rblk->parent->id);
-}
-
 static const struct block_device_operations rrpc_fops = {
 	.owner		= THIS_MODULE,
 };
@@ -620,39 +602,30 @@ err:
 	return NULL;
 }
 
-static void rrpc_run_gc(struct rrpc *rrpc, struct rrpc_block *rblk)
-{
-	struct rrpc_block_gc *gcb;
-
-	gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
-	if (!gcb) {
-		pr_err("rrpc: unable to queue block for gc.");
-		return;
-	}
-
-	gcb->rrpc = rrpc;
-	gcb->rblk = rblk;
-
-	INIT_WORK(&gcb->ws_gc, rrpc_gc_queue);
-	queue_work(rrpc->kgc_wq, &gcb->ws_gc);
-}
-
 static void rrpc_end_io_write(struct rrpc *rrpc, struct rrpc_rq *rrqd,
 						sector_t laddr, uint8_t npages)
 {
 	struct rrpc_addr *p;
 	struct rrpc_block *rblk;
 	struct nvm_lun *lun;
+	struct rrpc_lun *rlun;
 	int cmnt_size, i;
 
 	for (i = 0; i < npages; i++) {
 		p = &rrpc->trans_map[laddr + i];
 		rblk = p->rblk;
 		lun = rblk->parent->lun;
+		rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
 
 		cmnt_size = atomic_inc_return(&rblk->data_cmnt_size);
-		if (unlikely(cmnt_size == rrpc->dev->pgs_per_blk))
-			rrpc_run_gc(rrpc, rblk);
+		if (unlikely(cmnt_size == rrpc->dev->pgs_per_blk)) {
+			pr_debug("nvm: block '%lu' is full, allow GC (sched)\n",
+							rblk->parent->id);
+			spin_lock(&rlun->lock);
+			list_add_tail(&rblk->prio, &rlun->prio_list);
+			spin_unlock(&rlun->lock);
+
+		}
 	}
 }