From patchwork Sun Jan 24 12:42:46 2021
X-Patchwork-Submitter: Topi Miettinen
X-Patchwork-Id: 12042325
From: Topi Miettinen
To: linux-hardening@vger.kernel.org, akpm@linux-foundation.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Topi Miettinen, Jann Horn, Kees Cook, Matthew Wilcox,
    Mike Rapoport, Linux API
Subject: [PATCH v10 2/2] mm/mremap: optionally randomize mremap(..., MREMAP_MAYMOVE)
Date: Sun, 24 Jan 2021 14:42:46 +0200
Message-Id: <20210124124246.19566-2-toiwoton@gmail.com>
In-Reply-To: <20210124124246.19566-1-toiwoton@gmail.com>
References: <20210124124246.19566-1-toiwoton@gmail.com>

A new sysctl, kernel.randomize_mremap, when set, forces mremap(...,
MREMAP_MAYMOVE) to always move the mapping even when a move is not
necessary. Besides improving address space layout randomization, this
can expose bugs where the caller does not actually expect a moved
mapping, even though a move can already happen without this setting.

Example:

$ cat mremap.c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void) {
        void *addr = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        addr = mremap(addr, 4096, 8192, MREMAP_MAYMOVE);
        mremap(addr, 4096, 4096, MREMAP_MAYMOVE);
        return 0;
}
$ gcc -o mremap mremap.c
$ strace -e mmap,mremap ./mremap
mmap(NULL, 4096, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x68a16298d000
mremap(0x68a16298d000, 4096, 8192, MREMAP_MAYMOVE) = 0x68a16298d000
mremap(0x68a16298d000, 4096, 4096, MREMAP_MAYMOVE) = 0x68a16298d000

Setting the sysctl enables randomization:

$ sudo sysctl kernel.randomize_mremap=1
$ strace -e mmap,mremap ./mremap
mmap(NULL, 4096, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x6366429cf000
mremap(0x6366429cf000, 4096, 8192, MREMAP_MAYMOVE) = 0x70aa47ad0000
mremap(0x70aa47ad0000, 4096, 4096, MREMAP_MAYMOVE) = 0x5b37dc166000

CC: Andrew Morton
CC: Jann Horn
CC: Kees Cook
CC: Matthew Wilcox
CC: Mike Rapoport
CC: Linux API
Signed-off-by: Topi Miettinen
---
 Documentation/admin-guide/sysctl/kernel.rst |  9 +++++++
 include/linux/mm.h                          |  2 ++
 kernel/sysctl.c                             |  7 ++++++
 mm/mremap.c                                 | 26 +++++++++++++++++++--
 4 files changed, 42 insertions(+), 2 deletions(-)
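
To make the caller-bug class mentioned above concrete: the typical bug is
code that keeps using the old address after an mremap(..., MREMAP_MAYMOVE)
call that happened to move the mapping. The sketch below is illustrative
only and not part of the patch; it uses nothing beyond plain mmap(2) and
mremap(2), and shows the defensive pattern a caller should follow, namely
always adopting the address mremap() returns:

/* Illustrative sketch only, not part of this patch. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        size_t len = 4096;
        void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (addr == MAP_FAILED) {
                perror("mmap");
                return EXIT_FAILURE;
        }

        /* With MREMAP_MAYMOVE the kernel is free to relocate the mapping. */
        void *new_addr = mremap(addr, len, 2 * len, MREMAP_MAYMOVE);
        if (new_addr == MAP_FAILED) {
                perror("mremap");
                return EXIT_FAILURE;
        }

        /*
         * Correct callers switch to the returned address; buggy callers
         * keep dereferencing 'addr' even when new_addr != addr.
         */
        if (new_addr != addr)
                printf("mapping moved: %p -> %p\n", addr, new_addr);
        addr = new_addr;
        len *= 2;

        munmap(addr, len);
        return EXIT_SUCCESS;
}

With kernel.randomize_mremap=1 the moved path is taken on every
MREMAP_MAYMOVE call, so a caller that forgets to adopt the returned
address misbehaves immediately instead of only in the cases where
in-place expansion happens to be impossible.
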
diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index c13f865c806c..eeca8c8f96d0 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -1029,6 +1029,15 @@ defined, these additional entries are present:
 
   number of cycles between interrupts used to feed the pool.
 
+randomize_mremap
+==================
+
+This option, when set, can be used to force mremap(...,
+MREMAP_MAYMOVE) to always move the mappings even if not necessary.
+In addition to improved address space layout randomization, this can
+expose bugs where the caller is not actually expecting a moved
+mapping, even though this may sometimes happen without this flag.
+
 randomize_va_space
 ==================
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b4915412abbe..98aa466c2901 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2990,6 +2990,8 @@ void drop_slab_node(int nid);
 extern int randomize_va_space;
 #endif
 
+extern int randomize_mremap;
+
 const char * arch_vma_name(struct vm_area_struct *vma);
 #ifdef CONFIG_MMU
 void print_vma_addr(char *prefix, unsigned long rip);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index afad085960b8..02bd9ba89f27 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2429,6 +2429,13 @@ static struct ctl_table kern_table[] = {
                 .mode           = 0644,
                 .proc_handler   = proc_dointvec,
         },
+        {
+                .procname       = "randomize_mremap",
+                .data           = &randomize_mremap,
+                .maxlen         = sizeof(int),
+                .mode           = 0644,
+                .proc_handler   = proc_dointvec,
+        },
 #endif
 #if defined(CONFIG_S390) && defined(CONFIG_SMP)
         {
diff --git a/mm/mremap.c b/mm/mremap.c
index 138abbae4f75..386da905f39f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -648,6 +648,14 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
         return 1;
 }
 
+/*
+ * Force mremap(..., MREMAP_MAYMOVE) to always move the mappings even
+ * if not necessary. This can expose bugs where the caller is not
+ * actually expecting a moved mapping, even though this may sometimes
+ * happen without this flag.
+ */
+int randomize_mremap __read_mostly = 0;
+
 /*
  * Expand (or shrink) an existing mapping, potentially moving it at the
  * same time (controlled by the MREMAP_MAYMOVE flag and available VM space)
@@ -665,6 +673,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
         unsigned long charged = 0;
         bool locked = false;
         bool downgraded = false;
+        bool randomize = false;
         struct vm_userfaultfd_ctx uf = NULL_VM_UFFD_CTX;
         LIST_HEAD(uf_unmap_early);
         LIST_HEAD(uf_unmap);
@@ -720,6 +729,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
                 goto out;
         }
 
+        randomize = (flags & MREMAP_MAYMOVE) && randomize_mremap;
         /*
          * Always allow a shrinking remap: that just unmaps
          * the unnecessary pages..
@@ -730,7 +740,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
                 int retval;
 
                 retval = __do_munmap(mm, addr+new_len, old_len - new_len,
-                                     &uf_unmap, true);
+                                     &uf_unmap, !randomize);
                 if (retval < 0 && old_len != new_len) {
                         ret = retval;
                         goto out;
@@ -738,6 +748,16 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
                 } else if (retval == 1)
                         downgraded = true;
                 ret = addr;
+
+                /*
+                 * Caller is happy with a new address, so let's move
+                 * even if not necessary
+                 */
+                if (randomize)
+                        ret = mremap_to(addr, new_len, 0, new_len,
+                                        &locked, flags, &uf, &uf_unmap_early,
+                                        &uf_unmap);
+
                 goto out;
         }
 
@@ -751,8 +771,10 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
         }
 
         /* old_len exactly to the end of the area..
+         * But when randomizing, don't just expand the mapping if
+         * caller is happy with a moved and resized mapping
          */
-        if (old_len == vma->vm_end - addr) {
+        if (old_len == vma->vm_end - addr && !randomize) {
                 /* can we just expand the current mapping? */
                 if (vma_expandable(vma, new_len - old_len)) {
                         int pages = (new_len - old_len) >> PAGE_SHIFT;