From patchwork Thu Jan 21 12:27:13 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12035855
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 01/11] mm: add definition of PMD_PAGE_ORDER
Date: Thu, 21 Jan 2021 14:27:13 +0200
Message-Id: <20210121122723.3446-2-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

The definition of PMD_PAGE_ORDER, denoting the number of base pages in a
second-level leaf page, is already used by DAX and may be handy in other
cases as well.

Several architectures already define PMD_ORDER as the order of a
second-level page table, so to avoid a conflict with those definitions,
use the name PMD_PAGE_ORDER and update DAX accordingly.

Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
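
For a concrete feel for the arithmetic, here is a small standalone sketch
(not part of the patch) using typical x86-64 constants, where PAGE_SHIFT
is 12 (4 KiB base pages) and PMD_SHIFT is 21 (2 MiB PMD leaf pages):

    #include <stdio.h>

    /* Typical x86-64 values; other architectures differ. */
    #define PAGE_SHIFT     12
    #define PMD_SHIFT      21
    #define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT)

    int main(void)
    {
            /* Order 9: one PMD leaf page spans 1 << 9 = 512 base pages. */
            printf("PMD_PAGE_ORDER = %d (%ld base pages per PMD page)\n",
                   PMD_PAGE_ORDER, 1L << PMD_PAGE_ORDER);
            return 0;
    }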
Shutemov" Cc: Matthew Wilcox Cc: Mark Rutland Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon Cc: Hagen Paul Pfeifer Cc: Palmer Dabbelt --- fs/dax.c | 11 ++++------- include/linux/pgtable.h | 3 +++ 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/fs/dax.c b/fs/dax.c index 26d5dcd2d69e..0f109eb16196 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -49,9 +49,6 @@ static inline unsigned int pe_order(enum page_entry_size pe_size) #define PG_PMD_COLOUR ((PMD_SIZE >> PAGE_SHIFT) - 1) #define PG_PMD_NR (PMD_SIZE >> PAGE_SHIFT) -/* The order of a PMD entry */ -#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT) - static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES]; static int __init init_dax_wait_table(void) @@ -98,7 +95,7 @@ static bool dax_is_locked(void *entry) static unsigned int dax_entry_order(void *entry) { if (xa_to_value(entry) & DAX_PMD) - return PMD_ORDER; + return PMD_PAGE_ORDER; return 0; } @@ -1470,7 +1467,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, { struct vm_area_struct *vma = vmf->vma; struct address_space *mapping = vma->vm_file->f_mapping; - XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER); + XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER); unsigned long pmd_addr = vmf->address & PMD_MASK; bool write = vmf->flags & FAULT_FLAG_WRITE; bool sync; @@ -1529,7 +1526,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp, * entry is already in the array, for instance), it will return * VM_FAULT_FALLBACK. */ - entry = grab_mapping_entry(&xas, mapping, PMD_ORDER); + entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER); if (xa_is_internal(entry)) { result = xa_to_internal(entry); goto fallback; @@ -1695,7 +1692,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order) if (order == 0) ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn); #ifdef CONFIG_FS_DAX_PMD - else if (order == PMD_ORDER) + else if (order == PMD_PAGE_ORDER) ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE); #endif else diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 8fcdfa52eb4b..ea5c4102c23e 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -28,6 +28,9 @@ #define USER_PGTABLES_CEILING 0UL #endif +/* Number of base pages in a second level leaf page */ +#define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT) + /* * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD] * From patchwork Thu Jan 21 12:27:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 12035859 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21B53C433DB for ; Thu, 21 Jan 2021 12:30:12 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) 
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 02/11] mmap: make mlock_future_check() global
Date: Thu, 21 Jan 2021 14:27:14 +0200
Message-Id: <20210121122723.3446-3-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>
Shutemov" , Dan Williams , linux-arm-kernel@lists.infradead.org, linux-api@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt , linux-fsdevel@vger.kernel.org, Shakeel Butt , Rick Edgecombe , Roman Gushchin , Mike Rapoport Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mike Rapoport It will be used by the upcoming secret memory implementation. Signed-off-by: Mike Rapoport Cc: Alexander Viro Cc: Andy Lutomirski Cc: Arnd Bergmann Cc: Borislav Petkov Cc: Catalin Marinas Cc: Christopher Lameter Cc: Dan Williams Cc: Dave Hansen Cc: David Hildenbrand Cc: Elena Reshetova Cc: Hagen Paul Pfeifer Cc: "H. Peter Anvin" Cc: Ingo Molnar Cc: James Bottomley Cc: "Kirill A. Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- mm/internal.h | 3 +++ mm/mmap.c | 5 ++--- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 9902648f2206..8e9c660f33ca 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -353,6 +353,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma) extern void mlock_vma_page(struct page *page); extern unsigned int munlock_vma_page(struct page *page); +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags, + unsigned long len); + /* * Clear the page's PageMlocked(). This can be useful in a situation where * we want to unconditionally remove a page from the pagecache -- e.g., diff --git a/mm/mmap.c b/mm/mmap.c index 28ef5e29152a..10b9b8b88913 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1346,9 +1346,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint) return hint; } -static inline int mlock_future_check(struct mm_struct *mm, - unsigned long flags, - unsigned long len) +int mlock_future_check(struct mm_struct *mm, unsigned long flags, + unsigned long len) { unsigned long locked, lock_limit; From patchwork Thu Jan 21 12:27:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 12035857 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBDE6C433E0 for ; Thu, 21 Jan 2021 12:30:04 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 77ED123975 for ; Thu, 21 Jan 2021 12:30:04 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 77ED123975 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; 

From patchwork Thu Jan 21 12:27:15 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12035857
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 03/11] riscv/Kconfig: make direct map manipulation options depend on MMU
Date: Thu, 21 Jan 2021 14:27:15 +0200
Message-Id: <20210121122723.3446-4-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>
Shutemov" , Dan Williams , linux-arm-kernel@lists.infradead.org, linux-api@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt , linux-fsdevel@vger.kernel.org, Shakeel Butt , Rick Edgecombe , Roman Gushchin , Mike Rapoport Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mike Rapoport ARCH_HAS_SET_DIRECT_MAP and ARCH_HAS_SET_MEMORY configuration options have no meaning when CONFIG_MMU is disabled and there is no point to enable them for the nommu case. Add an explicit dependency on MMU for these options. Signed-off-by: Mike Rapoport Reported-by: kernel test robot --- arch/riscv/Kconfig | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index d82303dcc6b6..d35ce19ab1fa 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -25,8 +25,8 @@ config RISCV select ARCH_HAS_KCOV select ARCH_HAS_MMIOWB select ARCH_HAS_PTE_SPECIAL - select ARCH_HAS_SET_DIRECT_MAP - select ARCH_HAS_SET_MEMORY + select ARCH_HAS_SET_DIRECT_MAP if MMU + select ARCH_HAS_SET_MEMORY if MMU select ARCH_HAS_STRICT_KERNEL_RWX if MMU select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT From patchwork Thu Jan 21 12:27:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 12035861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ABE23C433E0 for ; Thu, 21 Jan 2021 12:30:15 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 36D9323975 for ; Thu, 21 Jan 2021 12:30:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 36D9323975 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=+FYYZVe5i4gudXGUXAP9BYIwTt30T4sHx7QsGSgIt5I=; b=h4KHFL81+U8/0/ekL0x/otiey oRL+g69zwtA2fX9RvSAOOA97liTuwdkdjWD2LfbxqA7yeQt3sIaQdx/qp7WLMKIQGdrOZogkQy9jJ UfUmE27nKJ42JCVB64WxBZRLHIhthqdXNM3zjmTlfkp+1i4sW19rmHFXWWiZzdE4dEB+w+l1S1ASc 2/ulytIo1MyFBobB+altR/CbHHANf4JshZElP5/uXGnG4fel+9/7V/s7FzGbBDCQWsItKWDxj6nc9 2KUXWz75UxXhOZOHQnREcOYUQOjMSWF2BjcdInPdClA+dOjKNIcBX3NZbQCQ4DHaOAPlfOqLKBUfP CrvARW6Vw==; Received: from localhost ([::1] helo=merlin.infradead.org) by 

From patchwork Thu Jan 21 12:27:16 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12035861
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 04/11] set_memory: allow set_direct_map_*_noflush() for multiple pages
Date: Thu, 21 Jan 2021 14:27:16 +0200
Message-Id: <20210121122723.3446-5-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

The underlying implementations of set_direct_map_invalid_noflush() and
set_direct_map_default_noflush() allow updating multiple contiguous pages
at once.

Add a numpages parameter to set_direct_map_*_noflush() to expose this
capability through these APIs.

Signed-off-by: Mike Rapoport
Acked-by: Catalin Marinas [arm64]
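
To illustrate what the wider API enables, a hedged sketch of a caller
taking a physically contiguous run of pages out of the direct map with a
single call instead of a loop (hide_range() is a hypothetical helper, not
part of the patch):

    /* Unmap 'numpages' pages starting at 'page' from the kernel direct
     * map, then flush the now-stale TLB entries for the whole run at once.
     */
    static int hide_range(struct page *page, int numpages)
    {
            unsigned long addr = (unsigned long)page_address(page);
            int err;

            /* One call covers the whole run instead of numpages calls. */
            err = set_direct_map_invalid_noflush(page, numpages);
            if (err)
                    return err;

            flush_tlb_kernel_range(addr, addr + numpages * PAGE_SIZE);
            return 0;
    }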
Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- arch/arm64/include/asm/cacheflush.h | 4 ++-- arch/arm64/mm/pageattr.c | 10 ++++++---- arch/riscv/include/asm/set_memory.h | 4 ++-- arch/riscv/mm/pageattr.c | 8 ++++---- arch/x86/include/asm/set_memory.h | 4 ++-- arch/x86/mm/pat/set_memory.c | 8 ++++---- include/linux/set_memory.h | 4 ++-- kernel/power/snapshot.c | 4 ++-- mm/vmalloc.c | 5 +++-- 9 files changed, 27 insertions(+), 24 deletions(-) diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 45217f21f1fe..d3598419a284 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -138,8 +138,8 @@ static __always_inline void __flush_icache_all(void) int set_memory_valid(unsigned long addr, int numpages, int enable); -int set_direct_map_invalid_noflush(struct page *page); -int set_direct_map_default_noflush(struct page *page); +int set_direct_map_invalid_noflush(struct page *page, int numpages); +int set_direct_map_default_noflush(struct page *page, int numpages); bool kernel_page_present(struct page *page); #include diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index 92eccaf595c8..b53ef37bf95a 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -148,34 +148,36 @@ int set_memory_valid(unsigned long addr, int numpages, int enable) __pgprot(PTE_VALID)); } -int set_direct_map_invalid_noflush(struct page *page) +int set_direct_map_invalid_noflush(struct page *page, int numpages) { struct page_change_data data = { .set_mask = __pgprot(0), .clear_mask = __pgprot(PTE_VALID), }; + unsigned long size = PAGE_SIZE * numpages; if (!debug_pagealloc_enabled() && !rodata_full) return 0; return apply_to_page_range(&init_mm, (unsigned long)page_address(page), - PAGE_SIZE, change_page_range, &data); + size, change_page_range, &data); } -int set_direct_map_default_noflush(struct page *page) +int set_direct_map_default_noflush(struct page *page, int numpages) { struct page_change_data data = { .set_mask = __pgprot(PTE_VALID | PTE_WRITE), .clear_mask = __pgprot(PTE_RDONLY), }; + unsigned long size = PAGE_SIZE * numpages; if (!debug_pagealloc_enabled() && !rodata_full) return 0; return apply_to_page_range(&init_mm, (unsigned long)page_address(page), - PAGE_SIZE, change_page_range, &data); + size, change_page_range, &data); } #ifdef CONFIG_DEBUG_PAGEALLOC diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h index 211eb8244a45..1aaf2720b8f6 100644 --- a/arch/riscv/include/asm/set_memory.h +++ b/arch/riscv/include/asm/set_memory.h @@ -26,8 +26,8 @@ static inline void protect_kernel_text_data(void) {}; static inline int set_memory_rw_nx(unsigned long addr, int numpages) { return 0; } #endif -int set_direct_map_invalid_noflush(struct page *page); -int set_direct_map_default_noflush(struct page *page); +int set_direct_map_invalid_noflush(struct page *page, int numpages); +int set_direct_map_default_noflush(struct page *page, int numpages); bool kernel_page_present(struct page *page); #endif /* __ASSEMBLY__ */ diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c index 5e49e4b4a4cc..9618181b70be 100644 --- a/arch/riscv/mm/pageattr.c +++ b/arch/riscv/mm/pageattr.c @@ -156,11 +156,11 @@ int set_memory_nx(unsigned long addr, int numpages) return 
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 5e49e4b4a4cc..9618181b70be 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -156,11 +156,11 @@ int set_memory_nx(unsigned long addr, int numpages)
 	return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(_PAGE_PRESENT)
@@ -173,11 +173,11 @@ int set_direct_map_invalid_noflush(struct page *page)
 	return ret;
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = PAGE_KERNEL,
 		.clear_mask = __pgprot(0)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..6224cb291f6c 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -80,8 +80,8 @@ int set_pages_wb(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..d157fd617c99 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 0);
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
-	return __set_pages_np(page, 1);
+	return __set_pages_np(page, numpages);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
-	return __set_pages_p(page, 1);
+	return __set_pages_p(page, numpages);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index fe1aa4e54680..c650f82db813 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -15,11 +15,11 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index d63560e1cf87..64b7aab9aee4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -86,7 +86,7 @@ static inline void hibernate_restore_unprotect_page(void *page_address) {}
 static inline void hibernate_map_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
-		int ret = set_direct_map_default_noflush(page);
+		int ret = set_direct_map_default_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
@@ -99,7 +99,7 @@ static inline void hibernate_unmap_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
 		unsigned long addr = (unsigned long)page_address(page);
-		int ret = set_direct_map_invalid_noflush(page);
+		int ret = set_direct_map_invalid_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d5f2a84e488a..1da9cd1d0758 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2195,13 +2195,14 @@ struct vm_struct *remove_vm_area(const void *addr)
 }
 
 static inline void set_area_direct_map(const struct vm_struct *area,
-				       int (*set_direct_map)(struct page *page))
+				       int (*set_direct_map)(struct page *page,
+							     int numpages))
 {
 	int i;
 
 	for (i = 0; i < area->nr_pages; i++)
 		if (page_address(area->pages[i]))
-			set_direct_map(area->pages[i]);
+			set_direct_map(area->pages[i], 1);
 }
 
 /* Handle removing and resetting vm mappings related to the vm_struct. */

From patchwork Thu Jan 21 12:27:17 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12035863
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 05/11] set_memory: allow querying whether set_direct_map_*() is actually enabled
Date: Thu, 21 Jan 2021 14:27:17 +0200
Message-Id: <20210121122723.3446-6-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

On arm64, the set_direct_map_*() functions may return 0 without actually
changing the linear map. This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush()
have any effect.

Extend the set_memory API with a can_set_direct_map() function that
checks whether calling set_direct_map_*() will actually change the page
table, replace several occurrences of open-coded checks in arm64 with the
new function, and provide a generic stub for architectures that always
modify page tables upon calls to the set_direct_map APIs.

Signed-off-by: Mike Rapoport
Reviewed-by: Catalin Marinas
Reviewed-by: David Hildenbrand
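
The intended calling pattern, sketched below (try_hide_page() is a
hypothetical caller, mirroring how the arm64 implementation uses the new
query):

    static int try_hide_page(struct page *page)
    {
            /*
             * On arm64, without rodata=full and with debug_pagealloc
             * disabled, set_direct_map_*() returns 0 without touching the
             * linear map, so callers that require the unmap to really
             * happen must ask first.
             */
            if (!can_set_direct_map())
                    return -EPERM;  /* sketch: pages cannot be hidden here */

            return set_direct_map_invalid_noflush(page, 1);
    }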
Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- arch/arm64/include/asm/Kbuild | 1 - arch/arm64/include/asm/cacheflush.h | 6 ------ arch/arm64/include/asm/set_memory.h | 17 +++++++++++++++++ arch/arm64/kernel/machine_kexec.c | 1 + arch/arm64/mm/mmu.c | 6 +++--- arch/arm64/mm/pageattr.c | 13 +++++++++---- include/linux/set_memory.h | 12 ++++++++++++ 7 files changed, 42 insertions(+), 14 deletions(-) create mode 100644 arch/arm64/include/asm/set_memory.h diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild index 07ac208edc89..73aa25843f65 100644 --- a/arch/arm64/include/asm/Kbuild +++ b/arch/arm64/include/asm/Kbuild @@ -3,5 +3,4 @@ generic-y += early_ioremap.h generic-y += mcs_spinlock.h generic-y += qrwlock.h generic-y += qspinlock.h -generic-y += set_memory.h generic-y += user.h diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index d3598419a284..b1bdf83a73db 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -136,12 +136,6 @@ static __always_inline void __flush_icache_all(void) dsb(ish); } -int set_memory_valid(unsigned long addr, int numpages, int enable); - -int set_direct_map_invalid_noflush(struct page *page, int numpages); -int set_direct_map_default_noflush(struct page *page, int numpages); -bool kernel_page_present(struct page *page); - #include #endif /* __ASM_CACHEFLUSH_H */ diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h new file mode 100644 index 000000000000..ecb6b0f449ab --- /dev/null +++ b/arch/arm64/include/asm/set_memory.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _ASM_ARM64_SET_MEMORY_H +#define _ASM_ARM64_SET_MEMORY_H + +#include + +bool can_set_direct_map(void); +#define can_set_direct_map can_set_direct_map + +int set_memory_valid(unsigned long addr, int numpages, int enable); + +int set_direct_map_invalid_noflush(struct page *page, int numpages); +int set_direct_map_default_noflush(struct page *page, int numpages); +bool kernel_page_present(struct page *page); + +#endif /* _ASM_ARM64_SET_MEMORY_H */ diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index a0b144cfaea7..0cbc50c4fa5a 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 30c6dd02e706..79604049fff5 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include @@ -492,7 +493,7 @@ static void __init map_mem(pgd_t *pgdp) int flags = 0; u64 i; - if (rodata_full || crash_mem_map || debug_pagealloc_enabled()) + if (can_set_direct_map() || crash_mem_map) flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS; /* @@ -1468,8 +1469,7 @@ int arch_add_memory(int nid, u64 start, u64 size, * KFENCE requires linear map to be mapped at page granularity, so that * it is possible to protect/unprotect single pages in the KFENCE pool. 
 	 */
-	if (rodata_full || debug_pagealloc_enabled() ||
-	    IS_ENABLED(CONFIG_KFENCE))
+	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b53ef37bf95a..d505172265b0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -19,6 +19,11 @@ struct page_change_data {
 
 bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
 
+bool can_set_direct_map(void)
+{
+	return rodata_full || debug_pagealloc_enabled();
+}
+
 static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 {
 	struct page_change_data *cdata = data;
@@ -156,7 +161,7 @@ int set_direct_map_invalid_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -172,7 +177,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -183,7 +188,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return;
 
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
@@ -208,7 +213,7 @@ bool kernel_page_present(struct page *page)
 	pte_t *ptep;
 	unsigned long addr = (unsigned long)page_address(page);
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return true;
 
 	pgdp = pgd_offset_k(addr);
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index c650f82db813..7b4b6626032d 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -28,7 +28,19 @@ static inline bool kernel_page_present(struct page *page)
 {
 	return true;
 }
+#else /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+/*
+ * Some architectures, e.g. ARM64, can disable direct map modifications at
+ * boot time. Let them override this query.
+ */
+#ifndef can_set_direct_map
+static inline bool can_set_direct_map(void)
+{
+	return true;
+}
+#define can_set_direct_map can_set_direct_map
 #endif
+#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 
 #ifndef set_mce_nospec
 static inline int set_mce_nospec(unsigned long pfn, bool unmap)

From patchwork Thu Jan 21 12:27:18 2021
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12035865
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 06/11] mm: introduce memfd_secret
system call to create "secret" memory areas Date: Thu, 21 Jan 2021 14:27:18 +0200 Message-Id: <20210121122723.3446-7-rppt@kernel.org> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210121122723.3446-1-rppt@kernel.org> References: <20210121122723.3446-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210121_072840_312995_99CDCDF6 X-CRM114-Status: GOOD ( 36.62 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , David Hildenbrand , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-mm@kvack.org, linux-kselftest@vger.kernel.org, "H. Peter Anvin" , Christopher Lameter , Shuah Khan , Thomas Gleixner , Elena Reshetova , linux-arch@vger.kernel.org, Tycho Andersen , linux-nvdimm@lists.01.org, Will Deacon , x86@kernel.org, Matthew Wilcox , Mike Rapoport , Ingo Molnar , Michael Kerrisk , Palmer Dabbelt , Arnd Bergmann , James Bottomley , Hagen Paul Pfeifer , Borislav Petkov , Alexander Viro , Andy Lutomirski , Paul Walmsley , "Kirill A. Shutemov" , Dan Williams , linux-arm-kernel@lists.infradead.org, linux-api@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt , linux-fsdevel@vger.kernel.org, Shakeel Butt , Rick Edgecombe , Roman Gushchin , Mike Rapoport Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mike Rapoport Introduce "memfd_secret" system call with the ability to create memory areas visible only in the context of the owning process and not mapped not only to other processes but in the kernel page tables as well. The user will create a file descriptor using the memfd_secret() system call. The memory areas created by mmap() calls from this file descriptor will be unmapped from the kernel direct map and they will be only mapped in the page table of the owning mm. The secret memory remains accessible in the process context using uaccess primitives, but it is not accessible using direct/linear map addresses. Functions in the follow_page()/get_user_page() family will refuse to return a page that belongs to the secret memory area. A page that was a part of the secret memory area is cleared when it is freed. The following example demonstrates creation of a secret mapping (error handling is omitted): fd = memfd_secret(0); ftruncate(fd, MAP_SIZE); ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); Signed-off-by: Mike Rapoport Acked-by: Hagen Paul Pfeifer Cc: Alexander Viro Cc: Andy Lutomirski Cc: Arnd Bergmann Cc: Borislav Petkov Cc: Catalin Marinas Cc: Christopher Lameter Cc: Dan Williams Cc: Dave Hansen Cc: David Hildenbrand Cc: Elena Reshetova Cc: "H. Peter Anvin" Cc: Ingo Molnar Cc: James Bottomley Cc: "Kirill A. 
Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- include/linux/secretmem.h | 24 ++++ include/uapi/linux/magic.h | 1 + kernel/sys_ni.c | 2 + mm/Kconfig | 3 + mm/Makefile | 1 + mm/gup.c | 10 ++ mm/secretmem.c | 278 +++++++++++++++++++++++++++++++++++++ 7 files changed, 319 insertions(+) create mode 100644 include/linux/secretmem.h create mode 100644 mm/secretmem.c diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h new file mode 100644 index 000000000000..70e7db9f94fe --- /dev/null +++ b/include/linux/secretmem.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +#ifndef _LINUX_SECRETMEM_H +#define _LINUX_SECRETMEM_H + +#ifdef CONFIG_SECRETMEM + +bool vma_is_secretmem(struct vm_area_struct *vma); +bool page_is_secretmem(struct page *page); + +#else + +static inline bool vma_is_secretmem(struct vm_area_struct *vma) +{ + return false; +} + +static inline bool page_is_secretmem(struct page *page) +{ + return false; +} + +#endif /* CONFIG_SECRETMEM */ + +#endif /* _LINUX_SECRETMEM_H */ diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h index f3956fc11de6..35687dcb1a42 100644 --- a/include/uapi/linux/magic.h +++ b/include/uapi/linux/magic.h @@ -97,5 +97,6 @@ #define DEVMEM_MAGIC 0x454d444d /* "DMEM" */ #define Z3FOLD_MAGIC 0x33 #define PPC_CMM_MAGIC 0xc7571590 +#define SECRETMEM_MAGIC 0x5345434d /* "SECM" */ #endif /* __LINUX_MAGIC_H__ */ diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index 769ad6225ab1..869aa6b5bf34 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -355,6 +355,8 @@ COND_SYSCALL(pkey_mprotect); COND_SYSCALL(pkey_alloc); COND_SYSCALL(pkey_free); +/* memfd_secret */ +COND_SYSCALL(memfd_secret); /* * Architecture specific weak syscall entries. 
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..5f8243442f66 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,7 @@ config MAPPING_DIRTY_HELPERS
 config KMAP_LOCAL
 	bool
 
+config SECRETMEM
+	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..b2a564eec27f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index e4c224cd9661..3e086b073624 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -759,6 +760,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	struct follow_page_context ctx = { NULL };
 	struct page *page;
 
+	if (vma_is_secretmem(vma))
+		return NULL;
+
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
 	if (ctx.pgmap)
 		put_dev_pagemap(ctx.pgmap);
@@ -892,6 +896,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
+	if (vma_is_secretmem(vma))
+		return -EFAULT;
+
 	if (write) {
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
@@ -2031,6 +2038,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
+		if (page_is_secretmem(page))
+			goto pte_unmap;
+
 		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..904351d12c33
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#include
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Define mode and flag masks to allow validation of the system call
+ * parameters.
+ */
+#define SECRETMEM_MODE_MASK	(0x0)
+#define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
+
+struct secretmem_ctx {
+	unsigned int mode;
+};
+
+static struct page *secretmem_alloc_page(gfp_t gfp)
+{
+	/*
+	 * FIXME: use a cache of large pages to reduce the direct map
+	 * fragmentation
+	 */
+	return alloc_page(gfp | __GFP_ZERO);
+}
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+retry:
+	page = find_lock_page(mapping, offset);
+	if (!page) {
+		page = secretmem_alloc_page(vmf->gfp_mask);
+		if (!page)
+			return VM_FAULT_OOM;
+
+		err = set_direct_map_invalid_noflush(page, 1);
+		if (err) {
+			put_page(page);
+			return vmf_error(err);
+		}
+
+		__SetPageUptodate(page);
+		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+		if (unlikely(err)) {
+			put_page(page);
+			if (err == -EEXIST)
+				goto retry;
+			goto err_restore_direct_map;
+		}
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	}
+
+	vmf->page = page;
+	return VM_FAULT_LOCKED;
+
+err_restore_direct_map:
+	/*
+	 * If a split of large page was required, it already happened
+	 * when we marked the page invalid which guarantees that this call
+	 * won't fail
+	 */
+	set_direct_map_default_noflush(page, 1);
+	return vmf_error(err);
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	vma->vm_ops = &secretmem_vm_ops;
+	vma->vm_flags |= VM_LOCKED;
+
+	return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+	.mmap = secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page, 1);
+	clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+
+	if (!mapping)
+		return false;
+
+	return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct secretmem_ctx *ctx;
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		goto err_free_inode;
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_ctx;
+
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->private_data = ctx;
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	file->private_data = ctx;
+
+	ctx->mode = flags & SECRETMEM_MODE_MASK;
+
+	return file;
+
+err_free_ctx:
+	kfree(ctx);
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+	struct file *file;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static void secretmem_evict_inode(struct inode *inode)
+{
+	struct secretmem_ctx *ctx = inode->i_private;
+
+	truncate_inode_pages_final(&inode->i_data);
+	clear_inode(inode);
+	kfree(ctx);
+}
+
+static const struct super_operations secretmem_super_ops = {
+	.evict_inode = secretmem_evict_inode,
+};
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	struct pseudo_fs_context *ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+
+	if (!ctx)
+		return -ENOMEM;
+	ctx->ops = &secretmem_super_ops;
+
+	return 0;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name = "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb = kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
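As a usage illustration (not part of the series): a minimal userspace sketch of the intended calling sequence, mirroring the selftest added later in the series. It assumes a kernel with this series applied and a libc that does not yet wrap the syscall; the __NR_memfd_secret fallback value is the x86-64 number wired up in patch 10 and is an assumption on other architectures.

	/*
	 * Illustration only, not part of the patch. Minimal memfd_secret(2)
	 * usage sketch; error handling is abbreviated.
	 */
	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 443	/* assumption: x86-64 number from patch 10 */
	#endif

	int main(void)
	{
		long page = sysconf(_SC_PAGE_SIZE);
		int fd = syscall(__NR_memfd_secret, 0);

		if (fd < 0) {
			perror("memfd_secret");	/* ENOSYS on kernels without the series */
			return 1;
		}

		/* size the mapping first; regular read()/write() on fd are rejected */
		if (ftruncate(fd, page) < 0)
			return 1;

		/* secretmem_mmap() requires a shared mapping */
		char *secret = mmap(NULL, page, PROT_READ | PROT_WRITE,
				    MAP_SHARED, fd, 0);
		if (secret == MAP_FAILED)
			return 1;

		strcpy(secret, "not visible via the kernel direct map");

		/* ... use the secret ... */

		munmap(secret, page);
		close(fd);
		return 0;
	}

Note that the mapping is VM_LOCKED and counts against RLIMIT_MEMLOCK, per the mlock_future_check() call in secretmem_mmap() above.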
From patchwork Thu Jan 21 12:27:19 2021
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
Date: Thu, 21 Jan 2021 14:27:19 +0200
Message-Id: <20210121122723.3446-8-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>
References: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

Removing a PAGE_SIZE page from the direct map every time such a page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool of small pages for secret memory mappings.

Add a gen_pool per secretmem inode and lazily populate this pool with
PMD-size pages.
As pages allocated by secretmem become unmovable, use CMA to back large page caches so that page allocator won't be surprised by failing attempt to migrate these pages. The CMA area used by secretmem is controlled by the "secretmem=" kernel parameter. This allows explicit control over the memory available for secretmem and provides upper hard limit for secretmem consumption. Signed-off-by: Mike Rapoport Cc: Alexander Viro Cc: Andy Lutomirski Cc: Arnd Bergmann Cc: Borislav Petkov Cc: Catalin Marinas Cc: Christopher Lameter Cc: Dan Williams Cc: Dave Hansen Cc: David Hildenbrand Cc: Elena Reshetova Cc: Hagen Paul Pfeifer Cc: "H. Peter Anvin" Cc: Ingo Molnar Cc: James Bottomley Cc: "Kirill A. Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon Nacked-by: Michal Hocko --- mm/Kconfig | 2 + mm/secretmem.c | 175 +++++++++++++++++++++++++++++++++++++++++-------- 2 files changed, 150 insertions(+), 27 deletions(-) diff --git a/mm/Kconfig b/mm/Kconfig index 5f8243442f66..ec35bf406439 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -874,5 +874,7 @@ config KMAP_LOCAL config SECRETMEM def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED + select GENERIC_ALLOCATOR + select CMA endmenu diff --git a/mm/secretmem.c b/mm/secretmem.c index 904351d12c33..469211c7cc3a 100644 --- a/mm/secretmem.c +++ b/mm/secretmem.c @@ -7,12 +7,15 @@ #include #include +#include #include #include #include #include #include +#include #include +#include #include #include #include @@ -35,24 +38,94 @@ #define SECRETMEM_FLAGS_MASK SECRETMEM_MODE_MASK struct secretmem_ctx { + struct gen_pool *pool; unsigned int mode; }; -static struct page *secretmem_alloc_page(gfp_t gfp) +static struct cma *secretmem_cma; + +static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp) { + unsigned long nr_pages = (1 << PMD_PAGE_ORDER); + struct gen_pool *pool = ctx->pool; + unsigned long addr; + struct page *page; + int i, err; + + page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN); + if (!page) + return -ENOMEM; + /* - * FIXME: use a cache of large pages to reduce the direct map - * fragmentation + * clear the data left from the prevoius user before dropping the + * pages from the direct map */ - return alloc_page(gfp | __GFP_ZERO); + for (i = 0; i < nr_pages; i++) + clear_highpage(page + i); + + err = set_direct_map_invalid_noflush(page, nr_pages); + if (err) + goto err_cma_release; + + addr = (unsigned long)page_address(page); + err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE); + if (err) + goto err_set_direct_map; + + flush_tlb_kernel_range(addr, addr + PMD_SIZE); + + return 0; + +err_set_direct_map: + /* + * If a split of PUD-size page was required, it already happened + * when we marked the pages invalid which guarantees that this call + * won't fail + */ + set_direct_map_default_noflush(page, nr_pages); +err_cma_release: + cma_release(secretmem_cma, page, nr_pages); + return err; +} + +static void secretmem_free_page(struct secretmem_ctx *ctx, struct page *page) +{ + unsigned long addr = (unsigned long)page_address(page); + struct gen_pool *pool = ctx->pool; + + gen_pool_free(pool, addr, PAGE_SIZE); +} + +static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx, + gfp_t gfp) +{ + struct gen_pool *pool = ctx->pool; + unsigned long addr; + struct page *page; + int err; + + if (gen_pool_avail(pool) 
< PAGE_SIZE) { + err = secretmem_pool_increase(ctx, gfp); + if (err) + return NULL; + } + + addr = gen_pool_alloc(pool, PAGE_SIZE); + if (!addr) + return NULL; + + page = virt_to_page(addr); + get_page(page); + + return page; } static vm_fault_t secretmem_fault(struct vm_fault *vmf) { + struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data; struct address_space *mapping = vmf->vma->vm_file->f_mapping; struct inode *inode = file_inode(vmf->vma->vm_file); pgoff_t offset = vmf->pgoff; - unsigned long addr; struct page *page; int err; @@ -62,40 +135,25 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf) retry: page = find_lock_page(mapping, offset); if (!page) { - page = secretmem_alloc_page(vmf->gfp_mask); + page = secretmem_alloc_page(ctx, vmf->gfp_mask); if (!page) return VM_FAULT_OOM; - err = set_direct_map_invalid_noflush(page, 1); - if (err) { - put_page(page); - return vmf_error(err); - } - __SetPageUptodate(page); err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask); if (unlikely(err)) { + secretmem_free_page(ctx, page); put_page(page); if (err == -EEXIST) goto retry; - goto err_restore_direct_map; + return vmf_error(err); } - addr = (unsigned long)page_address(page); - flush_tlb_kernel_range(addr, addr + PAGE_SIZE); + set_page_private(page, (unsigned long)ctx); } vmf->page = page; return VM_FAULT_LOCKED; - -err_restore_direct_map: - /* - * If a split of large page was required, it already happened - * when we marked the page invalid which guarantees that this call - * won't fail - */ - set_direct_map_default_noflush(page, 1); - return vmf_error(err); } static const struct vm_operations_struct secretmem_vm_ops = { @@ -141,8 +199,9 @@ static int secretmem_migratepage(struct address_space *mapping, static void secretmem_freepage(struct page *page) { - set_direct_map_default_noflush(page, 1); - clear_highpage(page); + struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page); + + secretmem_free_page(ctx, page); } static const struct address_space_operations secretmem_aops = { @@ -177,13 +236,18 @@ static struct file *secretmem_file_create(unsigned long flags) if (!ctx) goto err_free_inode; + ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE); + if (!ctx->pool) + goto err_free_ctx; + file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem", O_RDWR, &secretmem_fops); if (IS_ERR(file)) - goto err_free_ctx; + goto err_free_pool; mapping_set_unevictable(inode->i_mapping); + inode->i_private = ctx; inode->i_mapping->private_data = ctx; inode->i_mapping->a_ops = &secretmem_aops; @@ -197,6 +261,8 @@ static struct file *secretmem_file_create(unsigned long flags) return file; +err_free_pool: + gen_pool_destroy(ctx->pool); err_free_ctx: kfree(ctx); err_free_inode: @@ -215,6 +281,9 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags) if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC)) return -EINVAL; + if (!secretmem_cma) + return -ENOMEM; + fd = get_unused_fd_flags(flags & O_CLOEXEC); if (fd < 0) return fd; @@ -235,11 +304,37 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags) return err; } +static void secretmem_cleanup_chunk(struct gen_pool *pool, + struct gen_pool_chunk *chunk, void *data) +{ + unsigned long start = chunk->start_addr; + unsigned long end = chunk->end_addr; + struct page *page = virt_to_page(start); + unsigned long nr_pages = (end - start + 1) / PAGE_SIZE; + int i; + + set_direct_map_default_noflush(page, nr_pages); + + for (i = 0; i < nr_pages; i++) + clear_highpage(page + i); + + cma_release(secretmem_cma, page, nr_pages); +} + +static 
void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
+{
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
+	gen_pool_destroy(pool);
+}
+
 static void secretmem_evict_inode(struct inode *inode)
 {
 	struct secretmem_ctx *ctx = inode->i_private;
 
 	truncate_inode_pages_final(&inode->i_data);
+	secretmem_cleanup_pool(ctx);
 	clear_inode(inode);
 	kfree(ctx);
 }
@@ -276,3 +371,29 @@ static int secretmem_init(void)
 	return ret;
 }
 fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+	phys_addr_t align = PMD_SIZE;
+	unsigned long reserved_size;
+	int err;
+
+	reserved_size = memparse(str, NULL);
+	if (!reserved_size)
+		return 0;
+
+	if (reserved_size * 2 > PUD_SIZE)
+		align = PUD_SIZE;
+
+	err = cma_declare_contiguous(0, reserved_size, 0, align, 0, false,
+				     "secretmem", &secretmem_cma);
+	if (err) {
+		pr_err("failed to create CMA: %d\n", err);
+		return err;
+	}
+
+	pr_info("reserved %luM\n", reserved_size >> 20);
+
+	return 0;
+}
+__setup("secretmem=", secretmem_setup);
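To make the pool sizing concrete (an illustration, not part of the series): on x86-64 with 4 KiB base pages, PMD_SIZE is 2 MiB, so each pool chunk spans 512 base pages, and a boot-time reservation such as secretmem=256M backs at most 128 such chunks. A small userspace sketch of the arithmetic, with the architecture constants hard-coded as assumptions:

	/* Illustration only: pool-chunk arithmetic with x86-64 constants assumed. */
	#include <stdio.h>

	#define PAGE_SHIFT	12	/* assumption: 4 KiB base pages */
	#define PMD_SHIFT	21	/* assumption: 2 MiB PMD on x86-64 */
	#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)	/* as in patch 01 */

	int main(void)
	{
		unsigned long pages_per_chunk = 1UL << PMD_PAGE_ORDER;
		unsigned long reserved = 256UL << 20;	/* secretmem=256M */

		printf("base pages per PMD chunk: %lu\n", pages_per_chunk);	/* 512 */
		printf("PMD chunks in the reservation: %lu\n",
		       reserved >> PMD_SHIFT);	/* 128 */
		return 0;
	}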
From patchwork Thu Jan 21 12:27:20 2021
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 08/11] secretmem: add memcg accounting
Date: Thu, 21 Jan 2021 14:27:20 +0200
Message-Id: <20210121122723.3446-9-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>
References: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

Account memory consumed by secretmem to memcg. The accounting is updated
when the memory is actually allocated and freed.

Signed-off-by: Mike Rapoport
Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
Cc: Alexander Viro
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christopher Lameter
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Elena Reshetova
Cc: Hagen Paul Pfeifer
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Bottomley
Cc: "Kirill A.
Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- mm/filemap.c | 3 ++- mm/secretmem.c | 36 +++++++++++++++++++++++++++++++++++- 2 files changed, 37 insertions(+), 2 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 2d0c6721879d..bb28dd6d9e22 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -42,6 +42,7 @@ #include #include #include +#include #include "internal.h" #define CREATE_TRACE_POINTS @@ -839,7 +840,7 @@ noinline int __add_to_page_cache_locked(struct page *page, page->mapping = mapping; page->index = offset; - if (!huge) { + if (!huge && !page_is_secretmem(page)) { error = mem_cgroup_charge(page, current->mm, gfp); if (error) goto error; diff --git a/mm/secretmem.c b/mm/secretmem.c index 469211c7cc3a..05026460e2ee 100644 --- a/mm/secretmem.c +++ b/mm/secretmem.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include @@ -44,6 +45,32 @@ struct secretmem_ctx { static struct cma *secretmem_cma; +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order) +{ + int err; + + err = memcg_kmem_charge_page(page, gfp, order); + if (err) + return err; + + /* + * seceremem caches are unreclaimable kernel allocations, so treat + * them as unreclaimable slab memory for VM statistics purposes + */ + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, + PAGE_SIZE << order); + + return 0; +} + +static void secretmem_unaccount_pages(struct page *page, int order) +{ + + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, + -PAGE_SIZE << order); + memcg_kmem_uncharge_page(page, order); +} + static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp) { unsigned long nr_pages = (1 << PMD_PAGE_ORDER); @@ -56,6 +83,10 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp) if (!page) return -ENOMEM; + err = secretmem_account_pages(page, gfp, PMD_PAGE_ORDER); + if (err) + goto err_cma_release; + /* * clear the data left from the prevoius user before dropping the * pages from the direct map @@ -65,7 +96,7 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp) err = set_direct_map_invalid_noflush(page, nr_pages); if (err) - goto err_cma_release; + goto err_memcg_uncharge; addr = (unsigned long)page_address(page); err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE); @@ -83,6 +114,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp) * won't fail */ set_direct_map_default_noflush(page, nr_pages); +err_memcg_uncharge: + secretmem_unaccount_pages(page, PMD_PAGE_ORDER); err_cma_release: cma_release(secretmem_cma, page, nr_pages); return err; @@ -314,6 +347,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool, int i; set_direct_map_default_noflush(page, nr_pages); + secretmem_unaccount_pages(page, PMD_PAGE_ORDER); for (i = 0; i < nr_pages; i++) clear_highpage(page + i); From patchwork Thu Jan 21 12:27:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 12035929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT 
autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F6A8C433DB for ; Thu, 21 Jan 2021 12:31:50 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id CDAC723975 for ; Thu, 21 Jan 2021 12:31:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CDAC723975 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=a2qMRmD7wBZ7cpYAfUQGtp3nS/JmTqS7R8cbDloFqps=; b=SEmAINgncQLqDJbZbpYsgHXLw zic+n/mPDb/1QcvYZTf0Sg3fBSGczntiIk+h7FBUHvKUznHDzi7FoO24PjA7ySRFc5I97bqNCi6Y+ SS2nRFQJk+54wbXZFbfJ1jftZ6jgKyf0Ui0K191KEpjpbflNA84wkklTwtOAyznzR4aRyvsFURp6l bfZkK/EwreQ6ukBvGPt46VatO0KFdSHuKI0nq0N6S5pI0kQM9aN6kKfKl6WDT7WdSnwwsRSYxJkNX wiLGb/XBNLWYVvDI8iX22tZZ9xR8bvhEnMtO5xD6kZsKjEXmdbnudTYtOrvp3QrhgHSWK63b7la86 pBWtWUDAA==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1l2Z6R-000431-Ox; Thu, 21 Jan 2021 12:30:19 +0000 Received: from mail.kernel.org ([198.145.29.99]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1l2Z5L-0003Tb-TM; Thu, 21 Jan 2021 12:29:22 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 5F5F223A04; Thu, 21 Jan 2021 12:29:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1611232150; bh=HR3TKghucZ11bpxJACXPfexGlGSNBiuPA9J9vXBDKNs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VGXYup+M8oFWw53bsRjy829mtelBQ5D6L7oDjzoXcg0kc3JFaxBv6u7OXbwROQH26 IiSZ/3VeIKv+Y3/K/O2Zhg12Qmf4+VSl7eeq/Hp3DLYXaWOaS6XB4kQ8TzdffledrL IlKg3HpG+JNqtTL5QOG5LXMcTD/YC7bI4uUb9kSw7+xyQt1f4oFUocC7MjPQ2GUEMI goE7w6tF6juPh7mVUEjXVC6GrZdm7nY4olcErC5MVZqYTNew8/b8i7jHiz9WBUX4GN +gGZ2fzeN4apFGsAAFvJLq5dAW55Rc7O74FFoMuPvNkLbhQ8asFZY/AblLspCxa7LG vn8jveUpGkYkw== From: Mike Rapoport To: Andrew Morton Subject: [PATCH v16 09/11] PM: hibernate: disable when there are active secretmem users Date: Thu, 21 Jan 2021 14:27:21 +0200 Message-Id: <20210121122723.3446-10-rppt@kernel.org> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20210121122723.3446-1-rppt@kernel.org> References: <20210121122723.3446-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210121_072912_230066_E2FF1FD5 X-CRM114-Status: GOOD ( 19.35 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , David Hildenbrand , Peter Zijlstra , Catalin Marinas , Dave Hansen , linux-mm@kvack.org, linux-kselftest@vger.kernel.org, "H. 
Peter Anvin" , Christopher Lameter , Shuah Khan , Thomas Gleixner , Elena Reshetova , linux-arch@vger.kernel.org, Tycho Andersen , linux-nvdimm@lists.01.org, Will Deacon , x86@kernel.org, Matthew Wilcox , Mike Rapoport , Ingo Molnar , Michael Kerrisk , Palmer Dabbelt , Arnd Bergmann , James Bottomley , Hagen Paul Pfeifer , Borislav Petkov , Alexander Viro , Andy Lutomirski , Paul Walmsley , "Kirill A. Shutemov" , Dan Williams , linux-arm-kernel@lists.infradead.org, linux-api@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Palmer Dabbelt , linux-fsdevel@vger.kernel.org, Shakeel Butt , Rick Edgecombe , Roman Gushchin , Mike Rapoport Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mike Rapoport It is unsafe to allow saving of secretmem areas to the hibernation snapshot as they would be visible after the resume and this essentially will defeat the purpose of secret memory mappings. Prevent hibernation whenever there are active secret memory users. Signed-off-by: Mike Rapoport Cc: Alexander Viro Cc: Andy Lutomirski Cc: Arnd Bergmann Cc: Borislav Petkov Cc: Catalin Marinas Cc: Christopher Lameter Cc: Dan Williams Cc: Dave Hansen Cc: David Hildenbrand Cc: Elena Reshetova Cc: Hagen Paul Pfeifer Cc: "H. Peter Anvin" Cc: Ingo Molnar Cc: James Bottomley Cc: "Kirill A. Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- include/linux/secretmem.h | 6 ++++++ kernel/power/hibernate.c | 5 ++++- mm/secretmem.c | 15 +++++++++++++++ 3 files changed, 25 insertions(+), 1 deletion(-) diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h index 70e7db9f94fe..907a6734059c 100644 --- a/include/linux/secretmem.h +++ b/include/linux/secretmem.h @@ -6,6 +6,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma); bool page_is_secretmem(struct page *page); +bool secretmem_active(void); #else @@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page) return false; } +static inline bool secretmem_active(void) +{ + return false; +} + #endif /* CONFIG_SECRETMEM */ #endif /* _LINUX_SECRETMEM_H */ diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c index da0b41914177..559acef3fddb 100644 --- a/kernel/power/hibernate.c +++ b/kernel/power/hibernate.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include "power.h" @@ -81,7 +82,9 @@ void hibernate_release(void) bool hibernation_available(void) { - return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION); + return nohibernate == 0 && + !security_locked_down(LOCKDOWN_HIBERNATION) && + !secretmem_active(); } /** diff --git a/mm/secretmem.c b/mm/secretmem.c index 05026460e2ee..6ef32ad08184 100644 --- a/mm/secretmem.c +++ b/mm/secretmem.c @@ -45,6 +45,13 @@ struct secretmem_ctx { static struct cma *secretmem_cma; +static atomic_t secretmem_users; + +bool secretmem_active(void) +{ + return !!atomic_read(&secretmem_users); +} + static int secretmem_account_pages(struct page *page, gfp_t gfp, int order) { int err; @@ -193,6 +200,12 @@ static const struct vm_operations_struct secretmem_vm_ops = { .fault = secretmem_fault, }; +static int secretmem_release(struct inode *inode, struct file *file) +{ + atomic_dec(&secretmem_users); + return 0; +} + static int secretmem_mmap(struct file *file, 
struct vm_area_struct *vma)
 {
 	unsigned long len = vma->vm_end - vma->vm_start;
@@ -215,6 +228,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
 }
 
 static const struct file_operations secretmem_fops = {
+	.release = secretmem_release,
 	.mmap = secretmem_mmap,
 };
 
@@ -330,6 +344,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	file->f_flags |= O_LARGEFILE;
 
 	fd_install(fd, file);
+	atomic_inc(&secretmem_users);
 	return fd;
 
 err_put_fd:
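One observable consequence, sketched below as an illustration (not part of the patch): while any secretmem file is held open, hibernation_available() returns false, which should be visible in /sys/power/state. The assumptions here are that the "disk" entry of that sysfs file is emitted only when hibernation is available, and that the syscall number matches the one wired up in the next patch.

	/*
	 * Illustration only: with a secretmem fd open, "disk" is expected to
	 * drop out of /sys/power/state (assumption about the sysfs listing).
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 443	/* assumption: x86-64 number from patch 10 */
	#endif

	int main(void)
	{
		char states[128] = { 0 };
		int fd = syscall(__NR_memfd_secret, 0);
		int pm = open("/sys/power/state", O_RDONLY);

		if (fd < 0 || pm < 0)
			return 1;

		read(pm, states, sizeof(states) - 1);
		printf("while secretmem is active: %s", states);	/* no "disk" expected */

		close(pm);
		close(fd);
		return 0;
	}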
From patchwork Thu Jan 21 12:27:22 2021
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 10/11] arch, mm: wire up memfd_secret system call where relevant
Date: Thu, 21 Jan 2021 14:27:22 +0200
Message-Id: <20210121122723.3446-11-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>
References: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

Wire up memfd_secret system call on architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.

Signed-off-by: Mike Rapoport
Acked-by: Palmer Dabbelt
Acked-by: Arnd Bergmann
Cc: Alexander Viro
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christopher Lameter
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Elena Reshetova
Cc: Hagen Paul Pfeifer
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Bottomley
Cc: "Kirill A.
Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon Acked-by: Catalin Marinas --- arch/arm64/include/uapi/asm/unistd.h | 1 + arch/riscv/include/asm/unistd.h | 1 + arch/x86/entry/syscalls/syscall_32.tbl | 1 + arch/x86/entry/syscalls/syscall_64.tbl | 1 + include/linux/syscalls.h | 1 + include/uapi/asm-generic/unistd.h | 6 +++++- mm/secretmem.c | 3 +++ scripts/checksyscalls.sh | 4 ++++ 8 files changed, 17 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h index f83a70e07df8..ce2ee8f1e361 100644 --- a/arch/arm64/include/uapi/asm/unistd.h +++ b/arch/arm64/include/uapi/asm/unistd.h @@ -20,5 +20,6 @@ #define __ARCH_WANT_SET_GET_RLIMIT #define __ARCH_WANT_TIME32_SYSCALLS #define __ARCH_WANT_SYS_CLONE3 +#define __ARCH_WANT_MEMFD_SECRET #include diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h index 977ee6181dab..6c316093a1e5 100644 --- a/arch/riscv/include/asm/unistd.h +++ b/arch/riscv/include/asm/unistd.h @@ -9,6 +9,7 @@ */ #define __ARCH_WANT_SYS_CLONE +#define __ARCH_WANT_MEMFD_SECRET #include diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl index 02a349afaf9c..a1578cdf6d91 100644 --- a/arch/x86/entry/syscalls/syscall_32.tbl +++ b/arch/x86/entry/syscalls/syscall_32.tbl @@ -447,3 +447,4 @@ 440 i386 process_madvise sys_process_madvise 441 i386 epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 442 i386 watch_mount sys_watch_mount +443 i386 memfd_secret sys_memfd_secret diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index d9bcc4e02588..d8ecd9df0942 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -364,6 +364,7 @@ 440 common process_madvise sys_process_madvise 441 common epoll_pwait2 sys_epoll_pwait2 442 common watch_mount sys_watch_mount +443 common memfd_secret sys_memfd_secret # # Due to a historical design error, certain syscalls are numbered differently diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index 28bde029109d..4bc70ac0e993 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -1039,6 +1039,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig, asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags); asmlinkage long sys_watch_mount(int dfd, const char __user *path, unsigned int at_flags, int watch_fd, int watch_id); +asmlinkage long sys_memfd_secret(unsigned long flags); /* * Architecture-specific system calls diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index ad58f661f4aa..26125974a8a2 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -863,9 +863,13 @@ __SYSCALL(__NR_process_madvise, sys_process_madvise) __SC_COMP(__NR_epoll_pwait2, sys_epoll_pwait2, compat_sys_epoll_pwait2) #define __NR_watch_mount 442 __SYSCALL(__NR_watch_mount, sys_watch_mount) +#ifdef __ARCH_WANT_MEMFD_SECRET +#define __NR_memfd_secret 443 +__SYSCALL(__NR_memfd_secret, sys_memfd_secret) +#endif #undef __NR_syscalls -#define __NR_syscalls 443 +#define __NR_syscalls 444 /* * 32 bit systems traditionally used different diff --git a/mm/secretmem.c b/mm/secretmem.c index 6ef32ad08184..3d78b2807a2e 100644 --- a/mm/secretmem.c +++ b/mm/secretmem.c @@ -427,6 
+427,9 @@ static int __init secretmem_setup(char *str)
 	unsigned long reserved_size;
 	int err;
 
+	if (!can_set_direct_map())
+		return 0;
+
 	reserved_size = memparse(str, NULL);
 	if (!reserved_size)
 		return 0;
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index a18b47695f55..b7609958ee36 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -40,6 +40,10 @@ cat << EOF
 #define __IGNORE_setrlimit	/* setrlimit */
 #endif
 
+#ifndef __ARCH_WANT_MEMFD_SECRET
+#define __IGNORE_memfd_secret
+#endif
+
 /* Missing flags argument */
 #define __IGNORE_renameat	/* renameat2 */
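Because the syscall is only wired up where ARCH_HAS_SET_DIRECT_MAP is available, userspace cannot assume its presence. A small run-time probe sketch (illustrative; the ENOSYS convention follows the selftest in the next patch, and the fallback number is the x86-64 one from the tables above):

	/* Illustration only: probe for memfd_secret(2) support at run time. */
	#define _GNU_SOURCE
	#include <errno.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 443	/* assumption: number from the x86 tables above */
	#endif

	int main(void)
	{
		int fd = syscall(__NR_memfd_secret, 0);

		if (fd < 0 && errno == ENOSYS) {
			puts("memfd_secret(2) is not supported by this kernel");
			return 0;
		}
		if (fd >= 0) {
			puts("memfd_secret(2) is available");
			close(fd);
		}
		return 0;
	}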
From patchwork Thu Jan 21 12:27:23 2021
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v16 11/11] secretmem: test: add basic selftest for memfd_secret(2)
Date: Thu, 21 Jan 2021 14:27:23 +0200
Message-Id: <20210121122723.3446-12-rppt@kernel.org>
In-Reply-To: <20210121122723.3446-1-rppt@kernel.org>
References: <20210121122723.3446-1-rppt@kernel.org>

From: Mike Rapoport

The test verifies that a file descriptor created with memfd_secret does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK, and that remote accesses with process_vm_read() and
ptrace() to the secret memory fail.

Signed-off-by: Mike Rapoport
Cc: Alexander Viro
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christopher Lameter
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Elena Reshetova
Cc: Hagen Paul Pfeifer
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Bottomley
Cc: "Kirill A.
Shutemov" Cc: Mark Rutland Cc: Matthew Wilcox Cc: Michael Kerrisk Cc: Palmer Dabbelt Cc: Palmer Dabbelt Cc: Paul Walmsley Cc: Peter Zijlstra Cc: Rick Edgecombe Cc: Roman Gushchin Cc: Shakeel Butt Cc: Shuah Khan Cc: Thomas Gleixner Cc: Tycho Andersen Cc: Will Deacon --- tools/testing/selftests/vm/.gitignore | 1 + tools/testing/selftests/vm/Makefile | 3 +- tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++ tools/testing/selftests/vm/run_vmtests | 17 ++ 4 files changed, 316 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/vm/memfd_secret.c diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore index 9a35c3f6a557..c8deddc81e7a 100644 --- a/tools/testing/selftests/vm/.gitignore +++ b/tools/testing/selftests/vm/.gitignore @@ -21,4 +21,5 @@ va_128TBswitch map_fixed_noreplace write_to_hugetlbfs hmm-tests +memfd_secret local_config.* diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile index d42115e4284d..0200fb61646c 100644 --- a/tools/testing/selftests/vm/Makefile +++ b/tools/testing/selftests/vm/Makefile @@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged TEST_GEN_FILES += map_fixed_noreplace TEST_GEN_FILES += map_hugetlb TEST_GEN_FILES += map_populate +TEST_GEN_FILES += memfd_secret TEST_GEN_FILES += mlock-random-test TEST_GEN_FILES += mlock2-tests TEST_GEN_FILES += mremap_dontunmap @@ -133,7 +134,7 @@ warn_32bit_failure: endif endif -$(OUTPUT)/mlock-random-test: LDLIBS += -lcap +$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap $(OUTPUT)/gup_test: ../../../../mm/gup_test.h diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c new file mode 100644 index 000000000000..c878c2b841fc --- /dev/null +++ b/tools/testing/selftests/vm/memfd_secret.c @@ -0,0 +1,296 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright IBM Corporation, 2020 + * + * Author: Mike Rapoport + */ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "../kselftest.h" + +#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__) +#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__) +#define skip(fmt, ...) 
ksft_test_result_skip(fmt, ##__VA_ARGS__) + +#ifdef __NR_memfd_secret + +#define PATTERN 0x55 + +static const int prot = PROT_READ | PROT_WRITE; +static const int mode = MAP_SHARED; + +static unsigned long page_size; +static unsigned long mlock_limit_cur; +static unsigned long mlock_limit_max; + +static int memfd_secret(unsigned long flags) +{ + return syscall(__NR_memfd_secret, flags); +} + +static void test_file_apis(int fd) +{ + char buf[64]; + + if ((read(fd, buf, sizeof(buf)) >= 0) || + (write(fd, buf, sizeof(buf)) >= 0) || + (pread(fd, buf, sizeof(buf), 0) >= 0) || + (pwrite(fd, buf, sizeof(buf), 0) >= 0)) + fail("unexpected file IO\n"); + else + pass("file IO is blocked as expected\n"); +} + +static void test_mlock_limit(int fd) +{ + size_t len; + char *mem; + + len = mlock_limit_cur; + mem = mmap(NULL, len, prot, mode, fd, 0); + if (mem == MAP_FAILED) { + fail("unable to mmap secret memory\n"); + return; + } + munmap(mem, len); + + len = mlock_limit_max * 2; + mem = mmap(NULL, len, prot, mode, fd, 0); + if (mem != MAP_FAILED) { + fail("unexpected mlock limit violation\n"); + munmap(mem, len); + return; + } + + pass("mlock limit is respected\n"); +} + +static void try_process_vm_read(int fd, int pipefd[2]) +{ + struct iovec liov, riov; + char buf[64]; + char *mem; + + if (read(pipefd[0], &mem, sizeof(mem)) < 0) { + fail("pipe write: %s\n", strerror(errno)); + exit(KSFT_FAIL); + } + + liov.iov_len = riov.iov_len = sizeof(buf); + liov.iov_base = buf; + riov.iov_base = mem; + + if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) { + if (errno == ENOSYS) + exit(KSFT_SKIP); + exit(KSFT_PASS); + } + + exit(KSFT_FAIL); +} + +static void try_ptrace(int fd, int pipefd[2]) +{ + pid_t ppid = getppid(); + int status; + char *mem; + long ret; + + if (read(pipefd[0], &mem, sizeof(mem)) < 0) { + perror("pipe write"); + exit(KSFT_FAIL); + } + + ret = ptrace(PTRACE_ATTACH, ppid, 0, 0); + if (ret) { + perror("ptrace_attach"); + exit(KSFT_FAIL); + } + + ret = waitpid(ppid, &status, WUNTRACED); + if ((ret != ppid) || !(WIFSTOPPED(status))) { + fprintf(stderr, "weird waitppid result %ld stat %x\n", + ret, status); + exit(KSFT_FAIL); + } + + if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0)) + exit(KSFT_PASS); + + exit(KSFT_FAIL); +} + +static void check_child_status(pid_t pid, const char *name) +{ + int status; + + waitpid(pid, &status, 0); + + if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) { + skip("%s is not supported\n", name); + return; + } + + if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) || + WIFSIGNALED(status)) { + pass("%s is blocked as expected\n", name); + return; + } + + fail("%s: unexpected memory access\n", name); +} + +static void test_remote_access(int fd, const char *name, + void (*func)(int fd, int pipefd[2])) +{ + int pipefd[2]; + pid_t pid; + char *mem; + + if (pipe(pipefd)) { + fail("pipe failed: %s\n", strerror(errno)); + return; + } + + pid = fork(); + if (pid < 0) { + fail("fork failed: %s\n", strerror(errno)); + return; + } + + if (pid == 0) { + func(fd, pipefd); + return; + } + + mem = mmap(NULL, page_size, prot, mode, fd, 0); + if (mem == MAP_FAILED) { + fail("Unable to mmap secret memory\n"); + return; + } + + ftruncate(fd, page_size); + memset(mem, PATTERN, page_size); + + if (write(pipefd[1], &mem, sizeof(mem)) < 0) { + fail("pipe write: %s\n", strerror(errno)); + return; + } + + check_child_status(pid, name); +} + +static void test_process_vm_read(int fd) +{ + test_remote_access(fd, "process_vm_read", try_process_vm_read); +} + +static void 
test_ptrace(int fd) +{ + test_remote_access(fd, "ptrace", try_ptrace); +} + +static int set_cap_limits(rlim_t max) +{ + struct rlimit new; + cap_t cap = cap_init(); + + new.rlim_cur = max; + new.rlim_max = max; + if (setrlimit(RLIMIT_MEMLOCK, &new)) { + perror("setrlimit() returns error"); + return -1; + } + + /* drop capabilities including CAP_IPC_LOCK */ + if (cap_set_proc(cap)) { + perror("cap_set_proc() returns error"); + return -2; + } + + return 0; +} + +static void prepare(void) +{ + struct rlimit rlim; + + page_size = sysconf(_SC_PAGE_SIZE); + if (!page_size) + ksft_exit_fail_msg("Failed to get page size %s\n", + strerror(errno)); + + if (getrlimit(RLIMIT_MEMLOCK, &rlim)) + ksft_exit_fail_msg("Unable to detect mlock limit: %s\n", + strerror(errno)); + + mlock_limit_cur = rlim.rlim_cur; + mlock_limit_max = rlim.rlim_max; + + printf("page_size: %ld, mlock.soft: %ld, mlock.hard: %ld\n", + page_size, mlock_limit_cur, mlock_limit_max); + + if (page_size > mlock_limit_cur) + mlock_limit_cur = page_size; + if (page_size > mlock_limit_max) + mlock_limit_max = page_size; + + if (set_cap_limits(mlock_limit_max)) + ksft_exit_fail_msg("Unable to set mlock limit: %s\n", + strerror(errno)); +} + +#define NUM_TESTS 4 + +int main(int argc, char *argv[]) +{ + int fd; + + prepare(); + + ksft_print_header(); + ksft_set_plan(NUM_TESTS); + + fd = memfd_secret(0); + if (fd < 0) { + if (errno == ENOSYS) + ksft_exit_skip("memfd_secret is not supported\n"); + else + ksft_exit_fail_msg("memfd_secret failed: %s\n", + strerror(errno)); + } + + test_mlock_limit(fd); + test_file_apis(fd); + test_process_vm_read(fd); + test_ptrace(fd); + + close(fd); + + ksft_exit(!ksft_get_fail_cnt()); +} + +#else /* __NR_memfd_secret */ + +int main(int argc, char *argv[]) +{ + printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n"); + return KSFT_SKIP; +} + +#endif /* __NR_memfd_secret */ diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests index e953f3cd9664..95a67382f132 100755 --- a/tools/testing/selftests/vm/run_vmtests +++ b/tools/testing/selftests/vm/run_vmtests @@ -346,4 +346,21 @@ else exitcode=1 fi +echo "running memfd_secret test" +echo "------------------------------------" +./memfd_secret +ret_val=$? + +if [ $ret_val -eq 0 ]; then + echo "[PASS]" +elif [ $ret_val -eq $ksft_skip ]; then + echo "[SKIP]" + exitcode=$ksft_skip +else + echo "[FAIL]" + exitcode=1 +fi + +exit $exitcode + exit $exitcode
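As a final companion illustration (not part of the series), a minimal standalone version of the file-API check the selftest exercises: the fd carries no read/write file operations, so the content is reachable only through mmap(). The syscall number fallback is the same assumption as in the sketches above.

	/* Illustration only: regular file I/O on a memfd_secret fd must fail. */
	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 443	/* assumption, as above */
	#endif

	int main(void)
	{
		char buf[64];
		int fd = syscall(__NR_memfd_secret, 0);

		if (fd < 0)
			return 1;	/* ENOSYS on kernels without the series */

		/* secretmem_fops defines no .read/.write, so this is rejected */
		if (read(fd, buf, sizeof(buf)) < 0)
			puts("read() on a secretmem fd fails, as expected");

		close(fd);
		return 0;
	}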