From patchwork Fri Jul 12 17:00:38 2024
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13732003
Date: Fri, 12 Jul 2024 17:00:38 +0000
In-Reply-To: <20240712-asi-rfc-24-v1-0-144b319a40d8@google.com>
References: <20240712-asi-rfc-24-v1-0-144b319a40d8@google.com>
Message-ID: <20240712-asi-rfc-24-v1-20-144b319a40d8@google.com>
Subject: [PATCH 20/26] mm: asi: Map dynamic percpu memory as nonsensitive
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Sean Christopherson,
 Paolo Bonzini, Alexandre Chartre, Liran Alon, Jan Setje-Eilers,
 Catalin Marinas, Will Deacon, Mark Rutland, Andrew Morton, Mel Gorman,
 Lorenzo Stoakes, David Hildenbrand, Vlastimil Babka, Michal Hocko,
 Khalid Aziz, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
 Steven Rostedt, Valentin Schneider, Paul Turner, Reiji Watanabe,
 Junaid Shahid, Ofir Weisse, Yosry Ahmed, Patrick Bellasi, KP Singh,
 Alexandra Sandulescu, Matteo Rizzo, Jann Horn
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kvm@vger.kernel.org, Brendan Jackman
From: Reiji Watanabe

Currently, all dynamic percpu memory is implicitly (and
unintentionally) treated as sensitive memory.

Unconditionally map pages for dynamically allocated percpu memory as
global nonsensitive memory, other than pages that are allocated for
pcpu_{first,reserved}_chunk during early boot via the memblock
allocator (those will be taken care of by the following patch). We
don't support sensitive percpu memory allocation yet.
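[Editor's note: as a rough standalone model of the ordering this patch establishes, the sketch below pairs the vmalloc-level mapping with the ASI nonsensitive mapping. It is not kernel code; `asi_map`/`asi_unmap` here are invented stubs standing in for the real ASI API, and the state flags exist only for illustration.]

```c
#include <stdbool.h>

/* Invented stubs modelling the ASI nonsensitive address space. */
static bool vmapped, asi_mapped;
static bool asi_map_should_fail;

static int asi_map_stub(void)
{
	if (asi_map_should_fail)
		return -1;
	asi_mapped = true;
	return 0;
}

static void asi_unmap_stub(void)
{
	asi_mapped = false;
}

/*
 * Mirrors __pcpu_map_pages(): establish the normal kernel mapping
 * first (___pcpu_map_pages() in the patch), then add the range to the
 * global nonsensitive address space. A failure of the second step is
 * cleaned up by the caller via the unmap path.
 */
static int model_map_pages(void)
{
	vmapped = true;			/* vmap_pages_range_noflush() */
	return asi_map_stub();		/* asi_map() */
}

/*
 * Mirrors __pcpu_unmap_pages(): tear down the ASI mapping before the
 * underlying mapping, i.e. the reverse of the map order. asi_unmap()
 * tolerates ranges that were never (or only partially) asi-mapped.
 */
static void model_unmap_pages(void)
{
	asi_unmap_stub();		/* asi_unmap() */
	vmapped = false;		/* vunmap_range_noflush() */
}
```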
Co-developed-by: Junaid Shahid
Signed-off-by: Junaid Shahid
Signed-off-by: Reiji Watanabe
Signed-off-by: Brendan Jackman

WIP: Drop VM_SENSITIVE checks from percpu code
---
 mm/percpu-vm.c | 50 ++++++++++++++++++++++++++++++++++++++++++++------
 mm/percpu.c    |  4 ++--
 2 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
index cd69caf6aa8d8..2935d7fbac415 100644
--- a/mm/percpu-vm.c
+++ b/mm/percpu-vm.c
@@ -132,11 +132,20 @@ static void pcpu_pre_unmap_flush(struct pcpu_chunk *chunk,
 			   pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
 }
 
-static void __pcpu_unmap_pages(unsigned long addr, int nr_pages)
+static void ___pcpu_unmap_pages(unsigned long addr, int nr_pages)
 {
 	vunmap_range_noflush(addr, addr + (nr_pages << PAGE_SHIFT));
 }
 
+static void __pcpu_unmap_pages(unsigned long addr, int nr_pages,
+			       unsigned long vm_flags)
+{
+	unsigned long size = nr_pages << PAGE_SHIFT;
+
+	asi_unmap(ASI_GLOBAL_NONSENSITIVE, (void *)addr, size);
+	___pcpu_unmap_pages(addr, nr_pages);
+}
+
 /**
  * pcpu_unmap_pages - unmap pages out of a pcpu_chunk
  * @chunk: chunk of interest
@@ -153,6 +162,8 @@ static void __pcpu_unmap_pages(unsigned long addr, int nr_pages)
 static void pcpu_unmap_pages(struct pcpu_chunk *chunk,
 			     struct page **pages, int page_start, int page_end)
 {
+	struct vm_struct **vms = (struct vm_struct **)chunk->data;
+	unsigned long vm_flags = vms ? vms[0]->flags : VM_ALLOC;
 	unsigned int cpu;
 	int i;
 
@@ -165,7 +176,7 @@ static void pcpu_unmap_pages(struct pcpu_chunk *chunk,
 			pages[pcpu_page_idx(cpu, i)] = page;
 		}
 		__pcpu_unmap_pages(pcpu_chunk_addr(chunk, cpu, page_start),
-				   page_end - page_start);
+				   page_end - page_start, vm_flags);
 	}
 }
 
@@ -190,13 +201,38 @@ static void pcpu_post_unmap_tlb_flush(struct pcpu_chunk *chunk,
 			   pcpu_chunk_addr(chunk, pcpu_high_unit_cpu, page_end));
 }
 
-static int __pcpu_map_pages(unsigned long addr, struct page **pages,
-			    int nr_pages)
+/*
+ * __pcpu_map_pages() should not be called during the percpu initialization,
+ * as asi_map() depends on the page allocator (which isn't available yet
+ * during percpu initialization). Instead, ___pcpu_map_pages() can be used
+ * during the percpu initialization. But, any pages that are mapped with
+ * ___pcpu_map_pages() will be treated as sensitive memory, unless
+ * they are explicitly mapped with asi_map() later.
+ */
+static int ___pcpu_map_pages(unsigned long addr, struct page **pages,
+			     int nr_pages)
 {
 	return vmap_pages_range_noflush(addr, addr + (nr_pages << PAGE_SHIFT),
 					PAGE_KERNEL, pages, PAGE_SHIFT);
 }
 
+static int __pcpu_map_pages(unsigned long addr, struct page **pages,
+			    int nr_pages, unsigned long vm_flags)
+{
+	unsigned long size = nr_pages << PAGE_SHIFT;
+	int err;
+
+	err = ___pcpu_map_pages(addr, pages, nr_pages);
+	if (err)
+		return err;
+
+	/*
+	 * If this fails, pcpu_map_pages()->__pcpu_unmap_pages() will call
+	 * asi_unmap() and clean up any partial mappings.
+	 */
+	return asi_map(ASI_GLOBAL_NONSENSITIVE, (void *)addr, size);
+}
+
 /**
  * pcpu_map_pages - map pages into a pcpu_chunk
  * @chunk: chunk of interest
@@ -214,13 +250,15 @@ static int __pcpu_map_pages(unsigned long addr, struct page **pages,
 static int pcpu_map_pages(struct pcpu_chunk *chunk,
 			  struct page **pages, int page_start, int page_end)
 {
+	struct vm_struct **vms = (struct vm_struct **)chunk->data;
+	unsigned long vm_flags = vms ? vms[0]->flags : VM_ALLOC;
 	unsigned int cpu, tcpu;
 	int i, err;
 
 	for_each_possible_cpu(cpu) {
 		err = __pcpu_map_pages(pcpu_chunk_addr(chunk, cpu, page_start),
 				       &pages[pcpu_page_idx(cpu, page_start)],
-				       page_end - page_start);
+				       page_end - page_start, vm_flags);
 		if (err < 0)
 			goto err;
 
@@ -232,7 +270,7 @@ static int pcpu_map_pages(struct pcpu_chunk *chunk,
 err:
 	for_each_possible_cpu(tcpu) {
 		__pcpu_unmap_pages(pcpu_chunk_addr(chunk, tcpu, page_start),
-				   page_end - page_start);
+				   page_end - page_start, vm_flags);
 		if (tcpu == cpu)
 			break;
 	}
diff --git a/mm/percpu.c b/mm/percpu.c
index 4e11fc1e6deff..d8309f2ea4e44 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3328,8 +3328,8 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t
 		pcpu_populate_pte(unit_addr + (i << PAGE_SHIFT));
 
 		/* pte already populated, the following shouldn't fail */
-		rc = __pcpu_map_pages(unit_addr, &pages[unit * unit_pages],
-				      unit_pages);
+		rc = ___pcpu_map_pages(unit_addr, &pages[unit * unit_pages],
+				       unit_pages);
 		if (rc < 0)
 			panic("failed to map percpu area, err=%d\n", rc);
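
[Editor's note: the error path in pcpu_map_pages() unwinds every CPU mapped so far, up to and including the CPU whose mapping failed (since asi_map() may have left a partial mapping behind). The standalone sketch below models only that rollback loop; all names and the failure-injection knob are invented for illustration.]

```c
#include <stdbool.h>

#define NR_CPUS_MODEL 4

/* Per-CPU "is this unit mapped" state for the model. */
static bool mapped[NR_CPUS_MODEL];
static int fail_at_cpu = -1;	/* inject a failure at this CPU; -1 = never */

static int map_one(int cpu)
{
	/* Even a failing map may leave a partial mapping behind. */
	mapped[cpu] = true;
	return cpu == fail_at_cpu ? -1 : 0;
}

static void unmap_one(int cpu)
{
	mapped[cpu] = false;
}

/* Models the for_each_possible_cpu() loop and err: path above. */
static int model_pcpu_map_pages(void)
{
	int cpu, tcpu, err;

	for (cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
		err = map_one(cpu);
		if (err < 0)
			goto err;
	}
	return 0;
err:
	/* Unmap from CPU 0 up to and including the CPU that failed. */
	for (tcpu = 0; tcpu < NR_CPUS_MODEL; tcpu++) {
		unmap_one(tcpu);
		if (tcpu == cpu)
			break;
	}
	return err;
}
```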