From patchwork Wed Mar 12 15:13:44 2014
X-Patchwork-Submitter: Heinz Mauelshagen <heinzm@redhat.com>
X-Patchwork-Id: 3818481
X-Patchwork-Delegate: snitzer@redhat.com
From: heinzm@redhat.com
To: dm-devel@redhat.com
Cc: Heinz Mauelshagen <heinzm@redhat.com>
Date: Wed, 12 Mar 2014 16:13:44 +0100
Message-Id: <1394637224-12324-1-git-send-email-heinzm@redhat.com>
Subject: [dm-devel] [PATCH 1/1] dm cache: fix accesses past end of origin device

From: Heinz Mauelshagen <heinzm@redhat.com>

In order to avoid wasting cache space, we do not want to cache any
partial block at the end of the origin device.

This patch fixes accesses past the end of the origin device whilst
trying to promote an undetected partial block with respect to:

 - recognizing access to the partial block
 - avoiding out of bounds access to the discard bitset
 - initializing the per bio data struct to allow cache_end_io to work
   properly

An example of the flaw in the kernel log:

  [1460175.271246] dm-5: rw=0, want=20971520, limit=20971456
  [1460175.271969] device-mapper: cache: promotion failed; couldn't copy block

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
---
 drivers/md/dm-cache-target.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git 3.14.0-rc6.orig/drivers/md/dm-cache-target.c 3.14.0-rc6/drivers/md/dm-cache-target.c
index 354bbc1..074b9c8 100644
--- 3.14.0-rc6.orig/drivers/md/dm-cache-target.c
+++ 3.14.0-rc6/drivers/md/dm-cache-target.c
@@ -2465,20 +2465,18 @@ static int cache_map(struct dm_target *ti, struct bio *bio)
 	bool discarded_block;
 	struct dm_bio_prison_cell *cell;
 	struct policy_result lookup_result;
-	struct per_bio_data *pb;
+	struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size);
 
-	if (from_oblock(block) > from_oblock(cache->origin_blocks)) {
+	if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) {
 		/*
 		 * This can only occur if the io goes to a partial block at
 		 * the end of the origin device. We don't cache these.
 		 * Just remap to the origin and carry on.
 		 */
-		remap_to_origin_clear_discard(cache, bio, block);
+		remap_to_origin(cache, bio);
 		return DM_MAPIO_REMAPPED;
 	}
 
-	pb = init_per_bio_data(bio, pb_data_size);
-
 	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) {
 		defer_bio(cache, bio);
 		return DM_MAPIO_SUBMITTED;
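
[Editor's note] For context on the off-by-one the patch corrects, here is a
small user-space sketch of the block arithmetic. It is not kernel code: the
128-sector cache block size is an assumption chosen only so that the
20971456-sector device limit from the log above leaves a partial tail block.

/*
 * User-space sketch of the bounds check this patch corrects; not kernel
 * code.  The 128-sector block size is an assumption for illustration.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t origin_sectors    = 20971456; /* device limit from the log above */
	uint64_t sectors_per_block = 128;      /* assumed cache block size */

	/*
	 * Number of *complete* cache blocks on the origin; this is what
	 * cache->origin_blocks holds.  The partial tail block is excluded.
	 */
	uint64_t origin_blocks = origin_sectors / sectors_per_block;

	/*
	 * A bio aimed at the partial tail lands in block index
	 * 'origin_blocks' itself, because the full blocks occupy indices
	 * 0 .. origin_blocks - 1.
	 */
	uint64_t bio_sector = origin_blocks * sectors_per_block;
	uint64_t block      = bio_sector / sectors_per_block;

	/*
	 * Old check: '>' never matches the partial block, so it was treated
	 * as cacheable and promotion copied past the end of the device.
	 */
	printf("block >  origin_blocks : %s\n",
	       block > origin_blocks ? "caught" : "missed");

	/*
	 * New check: '>=' recognizes the partial block so it can be remapped
	 * straight to the origin and never cached.
	 */
	printf("block >= origin_blocks : %s\n",
	       block >= origin_blocks ? "caught" : "missed");

	return 0;
}

The patch also moves init_per_bio_data() ahead of the early return, so that
cache_end_io sees initialized per-bio data even for bios remapped via the
partial-block path.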