From patchwork Sat Apr 13 13:39:59 2013
X-Patchwork-Submitter: Namjae Jeon
X-Patchwork-Id: 2440741
From: Namjae Jeon
To: dwmw2@infradead.org, axboe@kernel.dk, shli@kernel.org,
    Paul.Clements@steeleye.com, npiggin@kernel.dk, neilb@suse.de,
    cjb@laptop.org, adrian.hunter@intel.com
Cc: linux-mtd@lists.infradead.org, nbd-general@lists.sourceforge.net,
    linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
    linux-kernel@vger.kernel.org, Namjae Jeon, Namjae Jeon, Vivek Trivedi
Subject: [PATCH 7/8] dm thin: use generic helper to set max_discard_sectors
Date: Sat, 13 Apr 2013 22:39:59 +0900
Message-Id: <1365860399-21465-1-git-send-email-linkinjeon@gmail.com>

From: Namjae Jeon

It is better to use the blk_queue_max_discard_sectors() helper function to
set max_discard_sectors, since it checks the upper limit of UINT_MAX >> 9.
A similar issue was reported for mmc at the link below:

https://lkml.org/lkml/2013/4/1/292

If multiple discard requests get merged and the merged request's size
exceeds 4GB, the merged request's __data_len field may overflow. This
patch fixes that issue.
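To make the overflow concrete: with 512-byte sectors, UINT_MAX >> 9
(8388607) is the largest sector count whose byte length still fits in
32 bits. A minimal standalone userspace sketch of the arithmetic (not
part of the patch; __data_len itself is a field of struct request):

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	#define SECTOR_SHIFT 9

	int main(void)
	{
		/* Largest sector count whose byte length fits in 32 bits. */
		uint32_t safe_max = UINT32_MAX >> SECTOR_SHIFT; /* 8388607 */

		/* A merged discard one sector larger, i.e. just past 4GB. */
		uint64_t merged = (uint64_t)safe_max + 1;

		uint32_t wrapped = (uint32_t)(merged << SECTOR_SHIFT); /* wraps */
		uint64_t actual  = merged << SECTOR_SHIFT;

		printf("safe max: %" PRIu32 " sectors = %" PRIu64 " bytes\n",
		       safe_max, (uint64_t)safe_max << SECTOR_SHIFT);
		printf("merged:   %" PRIu64 " sectors = %" PRIu64 " bytes, "
		       "but a 32-bit __data_len sees %" PRIu32 "\n",
		       merged, actual, wrapped);
		return 0;
	}

One sector past the limit, merged << 9 equals 2^32 and the 32-bit byte
count wraps to 0, which is why the clamp belongs in the shared helper
rather than in each driver.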
Signed-off-by: Namjae Jeon
Signed-off-by: Vivek Trivedi
---
 drivers/md/dm-thin.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 905b75f..237295a 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2513,7 +2513,8 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
 	struct pool *pool = pt->pool;
 	struct queue_limits *data_limits;
 
-	limits->max_discard_sectors = pool->sectors_per_block;
+	blk_queue_max_discard_sectors(bdev_get_queue(pt->data_dev->bdev),
+				      pool->sectors_per_block);
 
 	/*
 	 * discard_granularity is just a hint, and not enforced.
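For reference, the clamping this patch relies on is assumed to look
roughly like the sketch below; the real blk_queue_max_discard_sectors()
lives in block/blk-settings.c, this series proposes the UINT_MAX >> 9
check, and the exact body may differ:

	/*
	 * Hypothetical sketch of the helper's assumed behavior; not
	 * verbatim kernel code. The point is the clamp at UINT_MAX >> 9.
	 */
	void blk_queue_max_discard_sectors(struct request_queue *q,
					   unsigned int max_discard_sectors)
	{
		/* Keep the byte length (sectors << 9) within 32 bits. */
		if (max_discard_sectors > UINT_MAX >> 9)
			max_discard_sectors = UINT_MAX >> 9;

		q->limits.max_discard_sectors = max_discard_sectors;
	}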