From patchwork Thu Sep 7 17:36:02 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9942587
From: Tycho Andersen
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-hardening@lists.openwall.com,
    Marco Benatto, Juerg Haefliger, Konrad Rzeszutek Wilk,
    Tycho Andersen
Date: Thu, 7 Sep 2017 11:36:02 -0600
Message-Id: <20170907173609.22696-5-tycho@docker.com>
In-Reply-To: <20170907173609.22696-1-tycho@docker.com>
References: <20170907173609.22696-1-tycho@docker.com>
Subject: [kernel-hardening] [PATCH v6 04/11] swiotlb: Map the buffer if it
 was unmapped by XPFO

From: Juerg Haefliger

v6: * guard against lookup_xpfo() returning NULL

CC: Konrad Rzeszutek Wilk
Signed-off-by: Juerg Haefliger
Signed-off-by: Tycho Andersen
---
 include/linux/xpfo.h |  4 ++++
 lib/swiotlb.c        |  3 ++-
 mm/xpfo.c            | 15 +++++++++++++++
 3 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 442c58ee930e..04590d1dcefa 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -30,6 +30,8 @@ void xpfo_kunmap(void *kaddr, struct page *page);
 void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp);
 void xpfo_free_pages(struct page *page, int order);
 
+bool xpfo_page_is_unmapped(struct page *page);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -37,6 +39,8 @@ static inline void xpfo_kunmap(void *kaddr, struct page *page) { }
 static inline void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp) { }
 static inline void xpfo_free_pages(struct page *page, int order) { }
 
+static inline bool xpfo_page_is_unmapped(struct page *page) { return false; }
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index a8d74a733a38..d4fee5ca2d9e 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -420,8 +420,9 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 {
 	unsigned long pfn = PFN_DOWN(orig_addr);
 	unsigned char *vaddr = phys_to_virt(tlb_addr);
+	struct page *page = pfn_to_page(pfn);
 
-	if (PageHighMem(pfn_to_page(pfn))) {
+	if (PageHighMem(page) || xpfo_page_is_unmapped(page)) {
 		/* The buffer does not have a mapping.  Map it in and copy */
 		unsigned int offset = orig_addr & ~PAGE_MASK;
 		char *buffer;
diff --git a/mm/xpfo.c b/mm/xpfo.c
index bff24afcaa2e..cdbcbac582d5 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -220,3 +220,18 @@ void xpfo_kunmap(void *kaddr, struct page *page)
 	spin_unlock(&xpfo->maplock);
 }
 EXPORT_SYMBOL(xpfo_kunmap);
+
+bool xpfo_page_is_unmapped(struct page *page)
+{
+	struct xpfo *xpfo;
+
+	if (!static_branch_unlikely(&xpfo_inited))
+		return false;
+
+	xpfo = lookup_xpfo(page);
+	if (unlikely(!xpfo) || !xpfo->inited)
+		return false;
+
+	return test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
+}
+EXPORT_SYMBOL(xpfo_page_is_unmapped);
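
For reference, the path that an XPFO-unmapped page now takes in
swiotlb_bounce() is the pre-existing highmem branch shown in the hunk above.
Below is a simplified sketch of that copy loop (condensed from lib/swiotlb.c
of this era; it is not part of this patch, and the local variables are
declared earlier in the function). The buffer is mapped one page at a time
with kmap_atomic() and copied to or from the bounce buffer; with the rest of
this series applied, kmap_atomic() is expected to go through xpfo_kmap() and
temporarily restore the kernel mapping for the duration of the copy:

	while (size) {
		/* copy at most up to the end of the current page */
		sz = min_t(size_t, PAGE_SIZE - offset, size);

		local_irq_save(flags);
		/* map the (highmem or XPFO-unmapped) page into the kernel */
		buffer = kmap_atomic(pfn_to_page(pfn));
		if (dir == DMA_TO_DEVICE)
			memcpy(vaddr, buffer + offset, sz);
		else
			memcpy(buffer + offset, vaddr, sz);
		/* drop the temporary mapping again */
		kunmap_atomic(buffer);
		local_irq_restore(flags);

		size -= sz;
		pfn++;
		vaddr += sz;
		offset = 0;
	}

Note that xpfo_page_is_unmapped() only reports whether the page currently has
no kernel mapping; the temporary map/unmap for the copy is left entirely to
the generic kmap_atomic()/kunmap_atomic() path.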