From patchwork Thu Feb  4 14:13:23 2016
X-Patchwork-Submitter: Matias Bjørling
X-Patchwork-Id: 8222351
From: Matias Bjørling
To: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, axboe@fb.com
Cc: Wenwei Tao, Matias Bjørling
Subject: [PATCH 1/5] lightnvm: put bio before return
Date: Thu, 4 Feb 2016 15:13:23 +0100
Message-Id: <1454595207-22432-2-git-send-email-m@bjorling.me>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1454595207-22432-1-git-send-email-m@bjorling.me>
References: <1454595207-22432-1-git-send-email-m@bjorling.me>
X-Mailing-List: linux-block@vger.kernel.org

From: Wenwei Tao

The bio is never put (its reference is never released) if the data page
cannot be allocated, so the bio is leaked on that error path. Put the
bio before returning -ENOMEM.

Signed-off-by: Wenwei Tao
Signed-off-by: Matias Bjørling
---
 drivers/lightnvm/rrpc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index c4d0b04..775bf6c2 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -300,8 +300,10 @@ static int rrpc_move_valid_pages(struct rrpc *rrpc, struct rrpc_block *rblk)
 	}
 
 	page = mempool_alloc(rrpc->page_pool, GFP_NOIO);
-	if (!page)
+	if (!page) {
+		bio_put(bio);
 		return -ENOMEM;
+	}
 
 	while ((slot = find_first_zero_bit(rblk->invalid_pages,
 					   nr_pgs_per_blk)) < nr_pgs_per_blk) {
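
[Editor's note: for context, a minimal sketch of how the fixed error path
reads after this patch. The bio_alloc() call and its error check are
paraphrased from the surrounding rrpc_move_valid_pages() code and are not
part of this hunk; they are shown only to make clear where the leaked
reference originates.]

	/* Sketch only: the bio_alloc() lines are assumed context,
	 * not part of the change above.
	 */
	struct bio *bio;
	struct page *page;

	bio = bio_alloc(GFP_NOIO, 1);	/* takes the initial reference on the bio */
	if (!bio)
		return -ENOMEM;

	page = mempool_alloc(rrpc->page_pool, GFP_NOIO);
	if (!page) {
		bio_put(bio);		/* drop the reference so the bio is freed */
		return -ENOMEM;
	}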