From patchwork Thu Sep 7 17:36:04 2017
X-Patchwork-Submitter: Tycho Andersen
X-Patchwork-Id: 9942591
From: Tycho Andersen <tycho@docker.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-hardening@lists.openwall.com, Marco Benatto, Juerg Haefliger, Tycho Andersen
Date: Thu, 7 Sep 2017 11:36:04 -0600
Message-Id: <20170907173609.22696-7-tycho@docker.com>
In-Reply-To: <20170907173609.22696-1-tycho@docker.com>
References: <20170907173609.22696-1-tycho@docker.com>
Subject: [kernel-hardening] [PATCH v6 06/11] xpfo: add primitives for mapping underlying memory

In some cases (e.g. DMA and data cache flushes on arm64), the underlying
pages we need to operate on may have been unmapped via XPFO. Add some
primitives for ensuring that the underlying memory is mapped/unmapped in
the face of XPFO.
Signed-off-by: Tycho Andersen <tycho@docker.com>
---
 include/linux/xpfo.h | 22 ++++++++++++++++++++++
 mm/xpfo.c            | 30 ++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
index 04590d1dcefa..304b104ec637 100644
--- a/include/linux/xpfo.h
+++ b/include/linux/xpfo.h
@@ -32,6 +32,15 @@ void xpfo_free_pages(struct page *page, int order);
 
 bool xpfo_page_is_unmapped(struct page *page);
 
+#define XPFO_NUM_PAGES(addr, size) \
+	(PFN_UP((unsigned long) (addr) + (size)) - \
+	 PFN_DOWN((unsigned long) (addr)))
+
+void xpfo_temp_map(const void *addr, size_t size, void **mapping,
+		   size_t mapping_len);
+void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
+		     size_t mapping_len);
+
 #else /* !CONFIG_XPFO */
 
 static inline void xpfo_kmap(void *kaddr, struct page *page) { }
@@ -41,6 +50,19 @@ static inline void xpfo_free_pages(struct page *page, int order) { }
 
 static inline bool xpfo_page_is_unmapped(struct page *page) { return false; }
 
+#define XPFO_NUM_PAGES(addr, size) 0
+
+static inline void xpfo_temp_map(const void *addr, size_t size, void **mapping,
+				 size_t mapping_len)
+{
+}
+
+static inline void xpfo_temp_unmap(const void *addr, size_t size,
+				   void **mapping, size_t mapping_len)
+{
+}
+
+
 #endif /* CONFIG_XPFO */
 
 #endif /* _LINUX_XPFO_H */
diff --git a/mm/xpfo.c b/mm/xpfo.c
index cdbcbac582d5..f79075bf7d65 100644
--- a/mm/xpfo.c
+++ b/mm/xpfo.c
@@ -13,6 +13,7 @@
  * the Free Software Foundation.
  */
 
+#include <linux/highmem.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/page_ext.h>
@@ -235,3 +236,32 @@ bool xpfo_page_is_unmapped(struct page *page)
 	return test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
 }
 EXPORT_SYMBOL(xpfo_page_is_unmapped);
+
+void xpfo_temp_map(const void *addr, size_t size, void **mapping,
+		   size_t mapping_len)
+{
+	struct page *page = virt_to_page(addr);
+	int i, num_pages = mapping_len / sizeof(mapping[0]);
+
+	memset(mapping, 0, mapping_len);
+
+	for (i = 0; i < num_pages; i++) {
+		if (page_to_virt(page + i) >= addr + size)
+			break;
+
+		if (xpfo_page_is_unmapped(page + i))
+			mapping[i] = kmap_atomic(page + i);
+	}
+}
+EXPORT_SYMBOL(xpfo_temp_map);
+
+void xpfo_temp_unmap(const void *addr, size_t size, void **mapping,
+		     size_t mapping_len)
+{
+	int i, num_pages = mapping_len / sizeof(mapping[0]);
+
+	for (i = 0; i < num_pages; i++)
+		if (mapping[i])
+			kunmap_atomic(mapping[i]);
+}
+EXPORT_SYMBOL(xpfo_temp_unmap);