From patchwork Mon Oct 26 16:05:18 2020
X-Patchwork-Submitter: Topi Miettinen
X-Patchwork-Id: 11857683
From: Topi Miettinen <toiwoton@gmail.com>
To: linux-hardening@vger.kernel.org, akpm@linux-foundation.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Topi Miettinen, Jann Horn, Kees Cook, Matthew Wilcox, Mike Rapoport
Subject: [PATCH v4] mm: Optional full ASLR for mmap() and mremap()
Date: Mon, 26 Oct 2020 18:05:18 +0200
Message-Id: <20201026160518.9212-1-toiwoton@gmail.com>
X-Mailer: git-send-email 2.28.0
Sender: owner-linux-mm@kvack.org

Writing a new value of 3 to /proc/sys/kernel/randomize_va_space enables
full randomization of memory mappings created with mmap(NULL, ...).
With 2, the base of the VMA used for such mappings is random, but the
mappings are created in predictable places within the VMA and in
sequential order. With 3, new VMAs are created to fully randomize the
mappings. Also mremap(..., MREMAP_MAYMOVE) will move the mappings even
if not necessary.

The method is to randomize the new address without considering VMAs. If
the address fails checks because of overlap with the stack area (or, in
the case of mremap(), overlap with the old mapping), the operation is
retried a few times before falling back to the old method.

On 32 bit systems this may cause problems due to increased VM
fragmentation if the address space gets crowded.

On all systems, it will reduce performance and increase memory usage
due to less efficient use of page tables and inability to merge
adjacent VMAs with compatible attributes.

In this example with value of 2, the dynamic loader, libc, anonymous
memory reserved with mmap() and locale-archive are located close to
each other:

$ cat /proc/self/maps (only first line for each object shown for brevity)
58c1175b1000-58c1175b3000 r--p 00000000 fe:0c 1868624  /usr/bin/cat
79752ec17000-79752f179000 r--p 00000000 fe:0c 2473999  /usr/lib/locale/locale-archive
79752f179000-79752f279000 rw-p 00000000 00:00 0
79752f279000-79752f29e000 r--p 00000000 fe:0c 2402415  /usr/lib/x86_64-linux-gnu/libc-2.31.so
79752f43a000-79752f440000 rw-p 00000000 00:00 0
79752f46f000-79752f470000 r--p 00000000 fe:0c 2400484  /usr/lib/x86_64-linux-gnu/ld-2.31.so
79752f49b000-79752f49c000 rw-p 00000000 00:00 0
7ffdcad9e000-7ffdcadbf000 rw-p 00000000 00:00 0        [stack]
7ffdcadd2000-7ffdcadd6000 r--p 00000000 00:00 0        [vvar]
7ffdcadd6000-7ffdcadd8000 r-xp 00000000 00:00 0        [vdso]

With 3, they are located at unrelated addresses:

$ echo 3 > /proc/sys/kernel/randomize_va_space
$ cat /proc/self/maps (only first line for each object shown for brevity)
1206a8fa000-1206a8fb000 r--p 00000000 fe:0c 2400484    /usr/lib/x86_64-linux-gnu/ld-2.31.so
1206a926000-1206a927000 rw-p 00000000 00:00 0
19174173000-19174175000 rw-p 00000000 00:00 0
ac82f419000-ac82f519000 rw-p 00000000 00:00 0
afa66a42000-afa66fa4000 r--p 00000000 fe:0c 2473999    /usr/lib/locale/locale-archive
d8656ba9000-d8656bce000 r--p 00000000 fe:0c 2402415    /usr/lib/x86_64-linux-gnu/libc-2.31.so
d8656d6a000-d8656d6e000 rw-p 00000000 00:00 0
5df90b712000-5df90b714000 r--p 00000000 fe:0c 1868624  /usr/bin/cat
7ffe1be4c000-7ffe1be6d000 rw-p 00000000 00:00 0        [stack]
7ffe1bf07000-7ffe1bf0b000 r--p 00000000 00:00 0        [vvar]
7ffe1bf0b000-7ffe1bf0d000 r-xp 00000000 00:00 0        [vdso]

CC: Andrew Morton
CC: Jann Horn
CC: Kees Cook
CC: Matthew Wilcox
CC: Mike Rapoport
Signed-off-by: Topi Miettinen <toiwoton@gmail.com>
---
v2: also randomize mremap(..., MREMAP_MAYMOVE)
v3: avoid stack area and retry in case of bad random address (Jann
    Horn), improve description in kernel.rst (Matthew Wilcox)
v4: use /proc/$pid/maps in the example (Mike Rapoport), CCs (Andrew
    Morton), only check randomize_va_space == 3
---
 Documentation/admin-guide/hw-vuln/spectre.rst |  6 ++--
 Documentation/admin-guide/sysctl/kernel.rst   | 15 ++++++++++
 init/Kconfig                                  |  2 +-
 mm/internal.h                                 |  8 +++++
 mm/mmap.c                                     | 30 +++++++++++++------
 mm/mremap.c                                   | 27 +++++++++++++++++
 6 files changed, 75 insertions(+), 13 deletions(-)

base-commit: 3650b228f83adda7e5ee532e2b90429c03f7b9ec

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index e05e581af5cf..9ea250522077 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -254,7 +254,7 @@ Spectre variant 2
    left by the previous process will also be cleared.

    User programs should use address space randomization to make attacks
-   more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2).
+   more difficult (Set /proc/sys/kernel/randomize_va_space = 1, 2 or 3).

 3. A virtualized guest attacking the host
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -499,8 +499,8 @@ Spectre variant 2
       more overhead and run slower.

       User programs should use address space randomization
-      (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
-      difficult.
+      (/proc/sys/kernel/randomize_va_space = 1, 2 or 3) to make attacks
+      more difficult.

 3. VM mitigation
 ^^^^^^^^^^^^^^^^

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index d4b32cc32bb7..bc3bb74d544d 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -1060,6 +1060,21 @@ that support this feature.
 Systems with ancient and/or broken binaries should be configured
 with ``CONFIG_COMPAT_BRK`` enabled, which excludes the heap from
 process address space randomization.
+
+3 Additionally enable full randomization of memory mappings created
+  with mmap(NULL, ...). With 2, the base of the VMA used for such
+  mappings is random, but the mappings are created in predictable
+  places within the VMA and in sequential order. With 3, new VMAs
+  are created to fully randomize the mappings. Also mremap(...,
+  MREMAP_MAYMOVE) will move the mappings even if not necessary.
+
+  On 32 bit systems this may cause problems due to increased VM
+  fragmentation if the address space gets crowded.
+
+  On all systems, it will reduce performance and increase memory
+  usage due to less efficient use of page tables and inability to
+  merge adjacent VMAs with compatible attributes.
+
 == ===========================================================================

diff --git a/init/Kconfig b/init/Kconfig
index c9446911cf41..6146e2cd3b77 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1863,7 +1863,7 @@ config COMPAT_BRK
	  also breaks ancient binaries (including anything libc5 based).
	  This option changes the bootup default to heap randomization
	  disabled, and can be overridden at runtime by setting
-	  /proc/sys/kernel/randomize_va_space to 2.
+	  /proc/sys/kernel/randomize_va_space to 2 or 3.

	  On non-ancient distros (post-2000 ones) N is usually a safe choice.

diff --git a/mm/internal.h b/mm/internal.h
index c43ccdddb0f6..b964c8dbb242 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -618,4 +618,12 @@ struct migration_target_control {
 	gfp_t gfp_mask;
 };

+#ifndef arch_get_mmap_end
+#define arch_get_mmap_end(addr) (TASK_SIZE)
+#endif
+
+#ifndef arch_get_mmap_base
+#define arch_get_mmap_base(addr, base) (base)
+#endif
+
 #endif	/* __MM_INTERNAL_H */

diff --git a/mm/mmap.c b/mm/mmap.c
index d91ecb00d38c..3677491e999b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -47,6 +47,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

@@ -73,6 +74,8 @@ const int mmap_rnd_compat_bits_max = CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX;
 int mmap_rnd_compat_bits __read_mostly = CONFIG_ARCH_MMAP_RND_COMPAT_BITS;
 #endif

+#define MAX_RANDOM_MMAP_RETRIES 5
+
 static bool ignore_rlimit_data;
 core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);

@@ -206,7 +209,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 #ifdef CONFIG_COMPAT_BRK
	/*
	 * CONFIG_COMPAT_BRK can still be overridden by setting
-	 * randomize_va_space to 2, which will still cause mm->start_brk
+	 * randomize_va_space to >= 2, which will still cause mm->start_brk
	 * to be arbitrarily shifted
	 */
	if (current->brk_randomized)

@@ -1445,6 +1448,23 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
	if (mm->map_count > sysctl_max_map_count)
		return -ENOMEM;

+	/* Pick a random address even outside current VMAs? */
+	if (!addr && randomize_va_space == 3) {
+		int i = MAX_RANDOM_MMAP_RETRIES;
+		unsigned long max_addr = arch_get_mmap_base(addr, mm->mmap_base);
+
+		do {
+			/* Try a few times to find a free area */
+			addr = arch_mmap_rnd();
+			if (addr >= max_addr)
+				continue;
+			addr = get_unmapped_area(file, addr, len, pgoff, flags);
+		} while (--i >= 0 && !IS_ERR_VALUE(addr));
+
+		if (IS_ERR_VALUE(addr))
+			addr = 0;
+	}
+
	/* Obtain the address to map to. we verify (or select) it and ensure
	 * that it represents a valid section of the address space.
	 */

@@ -2142,14 +2162,6 @@ unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
	return addr;
 }

-#ifndef arch_get_mmap_end
-#define arch_get_mmap_end(addr) (TASK_SIZE)
-#endif
-
-#ifndef arch_get_mmap_base
-#define arch_get_mmap_base(addr, base) (base)
-#endif
-
 /* Get an address range which is currently unmapped.
  * For shmat() with addr=0.
  *

diff --git a/mm/mremap.c b/mm/mremap.c
index 138abbae4f75..c5b2ed2bfd2d 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -24,12 +24,15 @@
 #include
 #include
 #include
+#include
 #include
 #include

 #include "internal.h"

+#define MAX_RANDOM_MREMAP_RETRIES 5
+
 static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr)
 {
	pgd_t *pgd;

@@ -720,6 +723,30 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
		goto out;
	}

+	if ((flags & MREMAP_MAYMOVE) && randomize_va_space == 3) {
+		/*
+		 * Caller is happy with a different address, so let's
+		 * move even if not necessary!
+		 */
+		int i = MAX_RANDOM_MREMAP_RETRIES;
+		unsigned long max_addr = arch_get_mmap_base(addr, mm->mmap_base);
+
+		do {
+			/* Try a few times to find a free area */
+			new_addr = arch_mmap_rnd();
+			if (new_addr >= max_addr)
+				continue;
+			ret = mremap_to(addr, old_len, new_addr, new_len,
+					&locked, flags, &uf, &uf_unmap_early,
+					&uf_unmap);
+			if (!IS_ERR_VALUE(ret))
+				goto out;
+		} while (--i >= 0);
+
+		/* Give up and try the old address */
+		new_addr = addr;
+	}
+
	/*
	 * Always allow a shrinking remap: that just unmaps
	 * the unnecessary pages..