From patchwork Wed Mar 5 15:25:53 2025
X-Patchwork-Submitter: Ryosuke Yasuoka
X-Patchwork-Id: 14003222
From: Ryosuke Yasuoka
To: maarten.lankhorst@linux.intel.com, mripard@kernel.org, tzimmermann@suse.de,
	airlied@gmail.com, simona@ffwll.ch, kraxel@redhat.com,
	gurchetansingh@chromium.org, olvaffe@gmail.com, akpm@linux-foundation.org,
	urezki@gmail.com, hch@infradead.org, dmitry.osipenko@collabora.com,
	jfalempe@redhat.com
Cc: Ryosuke Yasuoka, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, virtualization@lists.linux.dev,
	linux-mm@kvack.org
Subject: [PATCH drm-next 1/2] vmalloc: Add atomic_vmap
Date: Thu, 6 Mar 2025 00:25:53 +0900
Message-ID: <20250305152555.318159-2-ryasuoka@redhat.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250305152555.318159-1-ryasuoka@redhat.com>
References: <20250305152555.318159-1-ryasuoka@redhat.com>
Some drivers want to use vmap in drm_panic; however, vmap is sleepable
and takes locks. Since drm_panic calls vmap from the panic handler,
introduce atomic_vmap, which performs its allocations with GFP_ATOMIC
and maps KVA without taking locks or sleeping.
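
For context, a call site in a panic path would look roughly like the
sketch below. This is an illustrative, non-compilable fragment, not part
of this patch: the function name panic_map_fb and its parameters are
hypothetical, and only atomic_vmap itself comes from this series.

```c
/*
 * Hypothetical drm_panic-style call site (sketch only): map
 * already-allocated framebuffer pages from atomic (panic) context,
 * where the regular vmap() must not be used because it may sleep
 * and takes locks.
 */
static void *panic_map_fb(struct page **pages, unsigned int nr_pages)
{
	/* atomic_vmap() allocates with GFP_ATOMIC internally */
	return atomic_vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
}
```

As with vmap(), the caller passes an array of already-allocated pages;
atomic_vmap only reserves KVA and installs page-table entries, so no
page allocation beyond the vmap_area bookkeeping happens at map time.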
Signed-off-by: Ryosuke Yasuoka
---
 include/linux/vmalloc.h |   2 +
 mm/internal.h           |   5 ++
 mm/vmalloc.c            | 105 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 112 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e3..c7a2a9a1976d 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -190,6 +190,8 @@ void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
 
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
+extern void *atomic_vmap(struct page **pages, unsigned int count,
+			 unsigned long flags, pgprot_t prot);
 extern void *vmap(struct page **pages, unsigned int count,
 		  unsigned long flags, pgprot_t prot);
 void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
diff --git a/mm/internal.h b/mm/internal.h
index 109ef30fee11..134b332bf5b9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1278,6 +1278,11 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 void free_zone_device_folio(struct folio *folio);
 int migrate_device_coherent_folio(struct folio *folio);
 
+struct vm_struct *atomic_get_vm_area_node(unsigned long size, unsigned long align,
+					  unsigned long shift, unsigned long flags,
+					  unsigned long start, unsigned long end, int node,
+					  gfp_t gfp_mask, const void *caller);
+
 struct vm_struct *__get_vm_area_node(unsigned long size, unsigned long align,
 				     unsigned long shift, unsigned long flags,
 				     unsigned long start,
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a6e7acebe9ad..f5c93779c60a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1945,6 +1945,57 @@ static inline void setup_vmalloc_vm(struct vm_struct *vm,
 	va->vm = vm;
 }
 
+static struct vmap_area *atomic_alloc_vmap_area(unsigned long size,
+				unsigned long align,
+				unsigned long vstart, unsigned long vend,
+				int node, gfp_t gfp_mask,
+				unsigned long va_flags, struct vm_struct *vm)
+{
+	struct vmap_node *vn;
+	struct vmap_area *va;
+	unsigned long addr;
+
+	if (unlikely(!size || offset_in_page(size) || !is_power_of_2(align)))
+		return ERR_PTR(-EINVAL);
+
+	if (unlikely(!vmap_initialized))
+		return ERR_PTR(-EBUSY);
+
+	va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
+	if (unlikely(!va))
+		return ERR_PTR(-ENOMEM);
+
+	/*
+	 * Only scan the relevant parts containing pointers to other objects
+	 * to avoid false negatives.
+	 */
+	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
+
+	addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
+				 size, align, vstart, vend);
+
+	trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
+
+	va->va_start = addr;
+	va->va_end = addr + size;
+	va->vm = NULL;
+	va->flags = va_flags;
+
+	vm->addr = (void *)va->va_start;
+	vm->size = va_size(va);
+	va->vm = vm;
+
+	vn = addr_to_node(va->va_start);
+
+	insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
+
+	BUG_ON(!IS_ALIGNED(va->va_start, align));
+	BUG_ON(va->va_start < vstart);
+	BUG_ON(va->va_end > vend);
+
+	return va;
+}
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend. If vm is passed in, the two will also be bound.
@@ -3106,6 +3157,33 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
 	vm->flags &= ~VM_UNINITIALIZED;
 }
 
+struct vm_struct *atomic_get_vm_area_node(unsigned long size, unsigned long align,
+				unsigned long shift, unsigned long flags,
+				unsigned long start, unsigned long end, int node,
+				gfp_t gfp_mask, const void *caller)
+{
+	struct vmap_area *va;
+	struct vm_struct *area;
+
+	size = ALIGN(size, 1ul << shift);
+	if (unlikely(!size))
+		return NULL;
+
+	area = kzalloc_node(sizeof(*area), gfp_mask, node);
+	if (unlikely(!area))
+		return NULL;
+
+	size += PAGE_SIZE;
+	area->flags = flags;
+	area->caller = caller;
+
+	va = atomic_alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area);
+	if (IS_ERR(va))
+		return NULL;
+
+	return area;
+}
+
 struct vm_struct *__get_vm_area_node(unsigned long size, unsigned long align,
 				     unsigned long shift, unsigned long flags,
 				     unsigned long start, unsigned long end, int node,
@@ -3418,6 +3496,33 @@ void vunmap(const void *addr)
 }
 EXPORT_SYMBOL(vunmap);
 
+void *atomic_vmap(struct page **pages, unsigned int count,
+		  unsigned long flags, pgprot_t prot)
+{
+	struct vm_struct *area;
+	unsigned long addr;
+	unsigned long size;		/* In bytes */
+
+	if (count > totalram_pages())
+		return NULL;
+
+	size = (unsigned long)count << PAGE_SHIFT;
+	area = atomic_get_vm_area_node(size, 1, PAGE_SHIFT, flags,
+				       VMALLOC_START, VMALLOC_END,
+				       NUMA_NO_NODE, GFP_ATOMIC,
+				       __builtin_return_address(0));
+	if (!area)
+		return NULL;
+
+	addr = (unsigned long)area->addr;
+	if (vmap_pages_range(addr, addr + size, pgprot_nx(prot),
+			     pages, PAGE_SHIFT) < 0) {
+		return NULL;
+	}
+
+	return area->addr;
+}
+
 /**
  * vmap - map an array of pages into virtually contiguous space
  * @pages: array of page pointers