From patchwork Thu May 4 22:15:51 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9712831
From: Tycho Andersen
To: Tejun Heo, Juerg Haefliger
Cc: Christoph Hellwig, linux-ide@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com, Tycho Andersen
Date: Thu, 4 May 2017 16:15:51 -0600
Message-Id: <20170504221551.6458-1-tycho@docker.com>
Subject: [kernel-hardening] [PATCH v2] ata-sff: always map page before data transfer

The XPFO [1] patchset may unmap pages from the physmap if they are destined for userspace. If such a page is unmapped, it needs to be remapped before the driver can access it. Rather than testing whether a page is in the highmem/XPFO-unmapped state, Christoph suggested [2] that we simply always map the page.
v2:
 * drop comment about bounce buffer
 * don't save IRQs before kmap/unmap
 * formatting

Suggested-by: Christoph Hellwig
Signed-off-by: Tycho Andersen
CC: Juerg Haefliger
CC: Tejun Heo

[1]: https://lkml.org/lkml/2016/11/4/245
[2]: https://lkml.org/lkml/2016/11/4/253

Reviewed-by: Christoph Hellwig
---
v1: https://lkml.org/lkml/2017/5/2/404
---
 drivers/ata/libata-sff.c | 44 ++++++++------------------------------------
 1 file changed, 8 insertions(+), 36 deletions(-)

diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
index 2bd92dca3e62..01cf07c919bc 100644
--- a/drivers/ata/libata-sff.c
+++ b/drivers/ata/libata-sff.c
@@ -716,24 +716,10 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
 
 	DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");
 
-	if (PageHighMem(page)) {
-		unsigned long flags;
-
-		/* FIXME: use a bounce buffer */
-		local_irq_save(flags);
-		buf = kmap_atomic(page);
-
-		/* do the actual data transfer */
-		ap->ops->sff_data_xfer(qc, buf + offset, qc->sect_size,
-				       do_write);
-
-		kunmap_atomic(buf);
-		local_irq_restore(flags);
-	} else {
-		buf = page_address(page);
-		ap->ops->sff_data_xfer(qc, buf + offset, qc->sect_size,
-				       do_write);
-	}
+	/* do the actual data transfer */
+	buf = kmap_atomic(page);
+	ap->ops->sff_data_xfer(qc, buf + offset, qc->sect_size, do_write);
+	kunmap_atomic(buf);
 
 	if (!do_write && !PageSlab(page))
 		flush_dcache_page(page);
@@ -861,24 +847,10 @@ static int __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
 	DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ?
 					"write" : "read");
 
-	if (PageHighMem(page)) {
-		unsigned long flags;
-
-		/* FIXME: use bounce buffer */
-		local_irq_save(flags);
-		buf = kmap_atomic(page);
-
-		/* do the actual data transfer */
-		consumed = ap->ops->sff_data_xfer(qc, buf + offset,
-						  count, rw);
-
-		kunmap_atomic(buf);
-		local_irq_restore(flags);
-	} else {
-		buf = page_address(page);
-		consumed = ap->ops->sff_data_xfer(qc, buf + offset,
-						  count, rw);
-	}
+	/* do the actual data transfer */
+	buf = kmap_atomic(page);
+	consumed = ap->ops->sff_data_xfer(qc, buf + offset, count, rw);
+	kunmap_atomic(buf);
 
 	bytes -= min(bytes, consumed);
 	qc->curbytes += count;