From patchwork Sun Mar 19 07:09:30 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13180240
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH v2 1/4] fs/proc/kcore: Avoid bounce buffer for ktext data
Date: Sun, 19 Mar 2023 07:09:30 +0000
Message-Id: <2ed992d6604965fd9eea05fed4473ddf54540989.1679209395.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.2

Commit df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data")
introduced the use of a bounce buffer to retrieve kernel text data for
/proc/kcore in order to avoid failures arising from hardened user copies
enabled by CONFIG_HARDENED_USERCOPY in check_kernel_text_object().
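For illustration only, the two copy strategies can be modelled in userspace C. This is not kernel code: fetch_nofault(), read_via_bounce() and read_direct() are made-up stand-ins for copy_from_kernel_nofault(), the old bounce-buffer path and the new direct _copy_to_user() path respectively.

```c
#include <string.h>

/* Stand-in for copy_from_kernel_nofault(): fails when `fault` is set. */
static int fetch_nofault(char *dst, const char *src, size_t n, int fault)
{
	if (fault)
		return -1;
	memcpy(dst, src, n);
	return 0;
}

/* Old path: stage the data through a bounce buffer, zero-filling the
 * destination on fault (the clear_user() branch). */
static void read_via_bounce(char *user, const char *src, size_t n, int fault)
{
	char bounce[64];

	if (fetch_nofault(bounce, src, n, fault))
		memset(user, 0, n);		/* clear_user() analogue */
	else
		memcpy(user, bounce, n);	/* copy_to_user() analogue */
}

/* New path: a single direct copy, no intermediate buffer. */
static void read_direct(char *user, const char *src, size_t n)
{
	memcpy(user, src, n);			/* _copy_to_user() analogue */
}
```

Both paths deliver identical bytes in the common case; the bounce buffer exists only to satisfy the hardened-usercopy check, which the direct path sidesteps.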
We can avoid doing this if instead of copy_to_user() we use _copy_to_user()
which bypasses the hardening check. This is more efficient than using a
bounce buffer and simplifies the code.

We do so as part of an overall effort to eliminate bounce buffer usage in
the function, with an eye to converting it to an iterator read.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: David Hildenbrand
---
 fs/proc/kcore.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 71157ee35c1a..556f310d6aa4 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -541,19 +541,12 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	case KCORE_VMEMMAP:
 	case KCORE_TEXT:
 		/*
-		 * Using bounce buffer to bypass the
-		 * hardened user copy kernel text checks.
+		 * We use _copy_to_user() to bypass usermode hardening
+		 * which would otherwise prevent this operation.
 		 */
-		if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
-			if (clear_user(buffer, tsz)) {
-				ret = -EFAULT;
-				goto out;
-			}
-		} else {
-			if (copy_to_user(buffer, buf, tsz)) {
-				ret = -EFAULT;
-				goto out;
-			}
+		if (_copy_to_user(buffer, (char *)start, tsz)) {
+			ret = -EFAULT;
+			goto out;
 		}
 		break;
 	default:

From patchwork Sun Mar 19 07:09:31 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13180241
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH v2 2/4] mm: vmalloc: use rwsem, mutex for vmap_area_lock and vmap_block->lock
Date: Sun, 19 Mar 2023 07:09:31 +0000
Message-Id: <6c7f1ac0aeb55faaa46a09108d3999e4595870d9.1679209395.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.2
vmalloc() is, by design, not permitted to be used in atomic context and
already contains components which may sleep, so avoiding spin locks is not
a problem from the perspective of atomic context.

The global vmap_area_lock is held when the red/black tree rooted in
vmap_area_root is accessed and thus is rather long-held and under
potentially high contention. It is likely to be under contention for reads
rather than writes, so replace it with a rwsem.

Each individual vmap_block->lock is likely to be held for less time but
under low contention, so a mutex is not an outrageous choice here.

A subset of test_vmalloc.sh performance results:

  fix_size_alloc_test             0.40%
  full_fit_alloc_test             2.08%
  long_busy_list_alloc_test       0.34%
  random_size_alloc_test         -0.25%
  random_size_align_alloc_test    0.06%
  ...
  all tests cycles                0.2%

This represents a tiny reduction in performance that sits barely above
noise. The reason for making this change is to build a basis for vread()
to be usable asynchronously, thus eliminating the need for a bounce buffer
when copying data to userland in read_kcore() and allowing that to be
converted to an iterator form.
Signed-off-by: Lorenzo Stoakes
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 77 +++++++++++++++++++++++++++-------------------------
 1 file changed, 40 insertions(+), 37 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..c24b27664a97 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -725,7 +726,7 @@ EXPORT_SYMBOL(vmalloc_to_pfn);

 #define DEBUG_AUGMENT_LOWEST_MATCH_CHECK 0

-static DEFINE_SPINLOCK(vmap_area_lock);
+static DECLARE_RWSEM(vmap_area_lock);
 static DEFINE_SPINLOCK(free_vmap_area_lock);
 /* Export for kexec only */
 LIST_HEAD(vmap_area_list);
@@ -1537,9 +1538,9 @@ static void free_vmap_area(struct vmap_area *va)
 	/*
 	 * Remove from the busy tree/list.
 	 */
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);

 	/*
 	 * Insert/Merge it back to the free tree/list.
@@ -1627,9 +1628,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	va->vm = NULL;
 	va->flags = va_flags;

-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);

 	BUG_ON(!IS_ALIGNED(va->va_start, align));
 	BUG_ON(va->va_start < vstart);
@@ -1854,9 +1855,9 @@ struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;

-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 	va = __find_vmap_area(addr, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);

 	return va;
 }
@@ -1865,11 +1866,11 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;

-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	va = __find_vmap_area(addr, &vmap_area_root);
 	if (va)
 		unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);

 	return va;
 }
@@ -1914,7 +1915,7 @@ struct vmap_block_queue {
 };

 struct vmap_block {
-	spinlock_t lock;
+	struct mutex lock;
 	struct vmap_area *va;
 	unsigned long free, dirty;
 	DECLARE_BITMAP(used_map, VMAP_BBMAP_BITS);
@@ -1991,7 +1992,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	}

 	vaddr = vmap_block_vaddr(va->va_start, 0);
-	spin_lock_init(&vb->lock);
+	mutex_init(&vb->lock);
 	vb->va = va;
 	/* At least something should be left free */
 	BUG_ON(VMAP_BBMAP_BITS <= (1UL << order));
@@ -2026,9 +2027,9 @@ static void free_vmap_block(struct vmap_block *vb)
 	tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
 	BUG_ON(tmp != vb);

-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	unlink_va(vb->va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);

 	free_vmap_area_noflush(vb->va);
 	kfree_rcu(vb, rcu_head);
@@ -2047,7 +2048,7 @@ static void purge_fragmented_blocks(int cpu)
 		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
 			continue;

-		spin_lock(&vb->lock);
+		mutex_lock(&vb->lock);
 		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
 			vb->free = 0; /* prevent further allocs after releasing lock */
 			vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
@@ -2056,10 +2057,10 @@ static void purge_fragmented_blocks(int cpu)
 			spin_lock(&vbq->lock);
 			list_del_rcu(&vb->free_list);
 			spin_unlock(&vbq->lock);
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 			list_add_tail(&vb->purge, &purge);
 		} else
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 	}
 	rcu_read_unlock();
@@ -2101,9 +2102,9 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;

-		spin_lock(&vb->lock);
+		mutex_lock(&vb->lock);
 		if (vb->free < (1UL << order)) {
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 			continue;
 		}
@@ -2117,7 +2118,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 			spin_unlock(&vbq->lock);
 		}

-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		break;
 	}
@@ -2144,16 +2145,16 @@ static void vb_free(unsigned long addr, unsigned long size)
 	order = get_order(size);
 	offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
 	vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 	bitmap_clear(vb->used_map, offset, (1UL << order));
-	spin_unlock(&vb->lock);
+	mutex_unlock(&vb->lock);

 	vunmap_range_noflush(addr, addr + size);

 	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range(addr, addr + size);

-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);

 	/* Expand dirty range */
 	vb->dirty_min = min(vb->dirty_min, offset);
@@ -2162,10 +2163,10 @@ static void vb_free(unsigned long addr, unsigned long size)
 	vb->dirty += 1UL << order;
 	if (vb->dirty == VMAP_BBMAP_BITS) {
 		BUG_ON(vb->free);
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		free_vmap_block(vb);
 	} else
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 }

 static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
@@ -2183,7 +2184,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 		rcu_read_lock();
 		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-			spin_lock(&vb->lock);
+			mutex_lock(&vb->lock);
 			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
@@ -2196,7 +2197,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 				flush = 1;
 			}
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 		}
 		rcu_read_unlock();
 	}
@@ -2451,9 +2452,9 @@ static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
 			      unsigned long flags, const void *caller)
 {
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	setup_vmalloc_vm_locked(vm, va, flags, caller);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 }

 static void clear_vm_uninitialized_flag(struct vm_struct *vm)
@@ -3507,9 +3508,9 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 	if (!vb)
 		goto finished;

-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		goto finished;
 	}
 	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
@@ -3536,7 +3537,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 		count -= n;
 	}
 unlock:
-	spin_unlock(&vb->lock);
+	mutex_unlock(&vb->lock);

 finished:
 	/* zero-fill the left dirty or free regions */
@@ -3576,13 +3577,15 @@ long vread(char *buf, char *addr, unsigned long count)
 	unsigned long buflen = count;
 	unsigned long n, size, flags;

+	might_sleep();
+
 	addr = kasan_reset_tag(addr);

 	/* Don't allow overflow */
 	if ((unsigned long) addr + count < count)
 		count = -(unsigned long) addr;

-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 	va = find_vmap_area_exceed_addr((unsigned long)addr);
 	if (!va)
 		goto finished;
@@ -3639,7 +3642,7 @@ long vread(char *buf, char *addr, unsigned long count)
 		count -= n;
 	}
 finished:
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);

 	if (buf == buf_start)
 		return 0;
@@ -3980,14 +3983,14 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}

 	/* insert all vm's */
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	for (area = 0; area < nr_vms; area++) {
 		insert_vmap_area(vas[area], &vmap_area_root, &vmap_area_list);

 		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
				 pcpu_get_vm_areas);
 	}
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);

 	/*
 	 * Mark allocated areas as accessible. Do it now as a best-effort
@@ -4114,7 +4117,7 @@ static void *s_start(struct seq_file *m, loff_t *pos)
 	__acquires(&vmap_area_lock)
 {
 	mutex_lock(&vmap_purge_lock);
-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);

 	return seq_list_start(&vmap_area_list, *pos);
 }
@@ -4128,7 +4131,7 @@ static void s_stop(struct seq_file *m, void *p)
 	__releases(&vmap_area_lock)
 	__releases(&vmap_purge_lock)
 {
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);
 	mutex_unlock(&vmap_purge_lock);
 }

From patchwork Sun Mar 19 07:09:32 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13180242
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH v2 3/4] fs/proc/kcore: convert read_kcore() to read_kcore_iter()
Date: Sun, 19 Mar 2023 07:09:32 +0000
Message-Id: <32f8fad50500d0cd0927a66638c5890533725d30.1679209395.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.2

Now we have eliminated spinlocks from the vread() case, convert
read_kcore() to read_kcore_iter().
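The bookkeeping that the conversion preserves (clamp the copy to the end of the current region, then advance the position and shrink the remaining buffer) can be modelled in plain userspace C. copy_region() is a hypothetical helper invented for this sketch, with memcpy() standing in for copy_to_iter():

```c
#include <string.h>

/* Models the repeated pattern in read_kcore_iter():
 *   tsz = min_t(size_t, buflen, region_end - *ppos);
 *   copy_to_iter(...);  *ppos += tsz;  buflen -= tsz;
 * memcpy() stands in for copy_to_iter(); `out` stands in for the iter. */
static size_t copy_region(char *out, const char *src, size_t region_end,
			  size_t *ppos, size_t *buflen)
{
	size_t tsz = *buflen;

	if (region_end - *ppos < tsz)
		tsz = region_end - *ppos;	/* clamp to end of region */
	memcpy(out, src + *ppos, tsz);
	*ppos += tsz;
	*buflen -= tsz;
	return tsz;
}
```

Each ELF region (file header, program headers, notes, data) is handled by one such step in turn, so a short user buffer simply stops partway through a region and the next read resumes at *ppos.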
For the time being we still use a bounce buffer for vread(), however in
the next patch we will convert this to interact directly with the
iterator and eliminate the bounce buffer altogether.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 fs/proc/kcore.c | 58 ++++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 556f310d6aa4..25e0eeb8d498 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -24,7 +24,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -308,9 +308,12 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 }
 
 static ssize_t
-read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
+read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
+	struct file *file = iocb->ki_filp;
 	char *buf = file->private_data;
+	loff_t *ppos = &iocb->ki_pos;
+
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -318,6 +321,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	size_t tsz;
 	int nphdr;
 	unsigned long start;
+	size_t buflen = iov_iter_count(iter);
 	size_t orig_buflen = buflen;
 	int ret = 0;
 
@@ -333,7 +337,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	notes_offset = phdrs_offset + phdrs_len;
 
 	/* ELF file header. */
-	if (buflen && *fpos < sizeof(struct elfhdr)) {
+	if (buflen && *ppos < sizeof(struct elfhdr)) {
 		struct elfhdr ehdr = {
 			.e_ident = {
 				[EI_MAG0] = ELFMAG0,
@@ -355,19 +359,18 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			.e_phnum = nphdr,
 		};
 
-		tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos);
-		if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) {
+		tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *ppos);
+		if (copy_to_iter((char *)&ehdr + *ppos, tsz, iter) != tsz) {
 			ret = -EFAULT;
 			goto out;
 		}
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/* ELF program headers. */
-	if (buflen && *fpos < phdrs_offset + phdrs_len) {
+	if (buflen && *ppos < phdrs_offset + phdrs_len) {
 		struct elf_phdr *phdrs, *phdr;
 
 		phdrs = kzalloc(phdrs_len, GFP_KERNEL);
@@ -397,22 +400,21 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			phdr++;
 		}
 
-		tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos);
-		if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset,
-				 tsz)) {
+		tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *ppos);
+		if (copy_to_iter((char *)phdrs + *ppos - phdrs_offset, tsz,
+				 iter) != tsz) {
 			kfree(phdrs);
 			ret = -EFAULT;
 			goto out;
 		}
 		kfree(phdrs);
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/* ELF note segment. */
-	if (buflen && *fpos < notes_offset + notes_len) {
+	if (buflen && *ppos < notes_offset + notes_len) {
 		struct elf_prstatus prstatus = {};
 		struct elf_prpsinfo prpsinfo = {
 			.pr_sname = 'R',
@@ -447,24 +449,23 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 				  vmcoreinfo_data,
 				  min(vmcoreinfo_size, notes_len - i));
 
-		tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos);
-		if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) {
+		tsz = min_t(size_t, buflen, notes_offset + notes_len - *ppos);
+		if (copy_to_iter(notes + *ppos - notes_offset, tsz, iter) != tsz) {
 			kfree(notes);
 			ret = -EFAULT;
 			goto out;
 		}
 		kfree(notes);
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/*
 	 * Check to see if our file offset matches with any of
 	 * the addresses in the elf_phdr on our list.
 	 */
-	start = kc_offset_to_vaddr(*fpos - data_offset);
+	start = kc_offset_to_vaddr(*ppos - data_offset);
 	if ((tsz = (PAGE_SIZE - (start & ~PAGE_MASK))) > buflen)
 		tsz = buflen;
 
@@ -497,7 +498,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		}
 
 		if (!m) {
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -508,14 +509,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMALLOC:
 			vread(buf, (char *)start, tsz);
 			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_user(buffer, buf, tsz)) {
+			if (copy_to_iter(buf, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		case KCORE_USER:
 			/* User page is handled prior to normal kernel page: */
-			if (copy_to_user(buffer, (char *)start, tsz)) {
+			if (copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -531,7 +532,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			 */
 			if (!page || PageOffline(page) ||
 			    is_page_hwpoison(page) || !pfn_is_ram(pfn)) {
-				if (clear_user(buffer, tsz)) {
+				if (iov_iter_zero(tsz, iter) != tsz) {
 					ret = -EFAULT;
 					goto out;
 				}
@@ -541,25 +542,24 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMEMMAP:
 		case KCORE_TEXT:
 			/*
-			 * We use _copy_to_user() to bypass usermode hardening
+			 * We use _copy_to_iter() to bypass usermode hardening
 			 * which would otherwise prevent this operation.
 			 */
-			if (_copy_to_user(buffer, (char *)start, tsz)) {
+			if (_copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		default:
 			pr_warn_once("Unhandled KCORE type: %d\n", m->type);
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 		}
 skip:
 		buflen -= tsz;
-		*fpos += tsz;
-		buffer += tsz;
+		*ppos += tsz;
 		start += tsz;
 		tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
 	}
@@ -603,7 +603,7 @@ static int release_kcore(struct inode *inode, struct file *file)
 }
 
 static const struct proc_ops kcore_proc_ops = {
-	.proc_read	= read_kcore,
+	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
 	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,

From patchwork Sun Mar 19 07:09:33 2023
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand,
    Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH v2 4/4] mm: vmalloc: convert vread() to vread_iter()
Date: Sun, 19 Mar 2023 07:09:33 +0000
Message-Id: <7f9dad4deade9639cf7af7a8b01143bca882ff02.1679209395.git.lstoakes@gmail.com>
Having previously laid the foundation for converting vread() to an
iterator function, pull the trigger and do so.

This patch attempts to provide minimal refactoring and to reflect the
existing logic as best we can, with the exception of aligned_vread_iter()
which drops the use of the deprecated kmap_atomic() in favour of
kmap_local_page().

All existing logic to zero portions of memory not read remains, and there
should be no functional difference other than a performance improvement
in /proc/kcore access to vmalloc regions.

Now that we have done away with the need for a bounce buffer in
read_kcore_iter() altogether, dispense with the one allocated there.
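[Editor's note: the "zero portions of memory not read" contract above can be modelled in userspace. The sketch below uses a toy iterator type, not the kernel's iov_iter API; all names (toy_iter, read_chunk, etc.) are illustrative only. It shows the shape aligned_vread_iter() takes: copy from the page if it is mapped, and zero-fill whatever could not be copied so the iterator always advances by the full requested length.]

```c
#include <stddef.h>
#include <string.h>

/* Toy model of an output iterator: a destination buffer plus a cursor. */
struct toy_iter {
	char *buf;
	size_t pos;
	size_t cap;
};

/* Copy up to n bytes; returns bytes actually copied (may be short). */
static size_t toy_copy(struct toy_iter *it, const char *src, size_t n)
{
	size_t avail = it->cap - it->pos;
	size_t len = n < avail ? n : avail;

	memcpy(it->buf + it->pos, src, len);
	it->pos += len;
	return len;
}

/* Zero up to n bytes, advancing the cursor. */
static void toy_zero(struct toy_iter *it, size_t n)
{
	size_t avail = it->cap - it->pos;
	size_t len = n < avail ? n : avail;

	memset(it->buf + it->pos, 0, len);
	it->pos += len;
}

/*
 * Mirrors the shape of aligned_vread_iter(): if the "page" is mapped,
 * copy from it; whatever could not be copied is zero-filled, so the
 * iterator advances by the full length requested (capacity permitting).
 * This preserves the "zero-fill user buffer even if no read" behaviour.
 */
static void read_chunk(struct toy_iter *it, const char *page, size_t len)
{
	size_t copied = 0;

	if (page)
		copied = toy_copy(it, page, len);
	if (copied < len)
		toy_zero(it, len - copied);
}
```

A caller reading a mapped chunk followed by an unmapped one ends up with the data then zeroes, with the cursor advanced past both.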
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 fs/proc/kcore.c         |  21 +--------
 include/linux/vmalloc.h |   3 +-
 mm/nommu.c              |  10 ++--
 mm/vmalloc.c            | 101 +++++++++++++++++++++-------------------
 4 files changed, 62 insertions(+), 73 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 25e0eeb8d498..a0ed3ca35cce 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -307,13 +307,9 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 	*i = ALIGN(*i + descsz, 4);
 }
 
-static ssize_t
-read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
-	struct file *file = iocb->ki_filp;
-	char *buf = file->private_data;
 	loff_t *ppos = &iocb->ki_pos;
-
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -507,9 +503,7 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 
 		switch (m->type) {
 		case KCORE_VMALLOC:
-			vread(buf, (char *)start, tsz);
-			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_iter(buf, tsz, iter) != tsz) {
+			if (vread_iter(iter, (char *)start, tsz) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -582,10 +576,6 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	if (ret)
 		return ret;
 
-	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!filp->private_data)
-		return -ENOMEM;
-
 	if (kcore_need_update)
 		kcore_update_ram();
 	if (i_size_read(inode) != proc_root_kcore->size) {
@@ -596,16 +586,9 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int release_kcore(struct inode *inode, struct file *file)
-{
-	kfree(file->private_data);
-	return 0;
-}
-
 static const struct proc_ops kcore_proc_ops = {
 	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
-	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,
 };
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 69250efa03d1..6beb2ace6a7a 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -9,6 +9,7 @@
 #include		/* pgprot_t */
 #include
 #include
+#include
 #include
 
@@ -251,7 +252,7 @@ static inline void set_vm_flush_reset_perms(void *addr)
 #endif
 
 /* for /proc/kcore */
-extern long vread(char *buf, char *addr, unsigned long count);
+extern long vread_iter(struct iov_iter *iter, char *addr, size_t count);
 
 /*
  *	Internals.  Don't use..
diff --git a/mm/nommu.c b/mm/nommu.c
index 57ba243c6a37..e0fcd948096e 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -36,6 +36,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -198,14 +199,13 @@ unsigned long vmalloc_to_pfn(const void *addr)
 }
 EXPORT_SYMBOL(vmalloc_to_pfn);
 
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(struct iov_iter *iter, char *addr, size_t count)
 {
 	/* Don't allow overflow */
-	if ((unsigned long) buf + count < count)
-		count = -(unsigned long) buf;
+	if ((unsigned long) addr + count < count)
+		count = -(unsigned long) addr;
 
-	memcpy(buf, addr, count);
-	return count;
+	return copy_to_iter(addr, count, iter);
 }
 
 /*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c24b27664a97..f19509a6eef4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -37,7 +37,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -3446,20 +3445,20 @@ EXPORT_SYMBOL(vmalloc_32_user);
  * small helper routine , copy contents to buf from addr.
  * If the page is not present, fill zero.
  */
-
-static int aligned_vread(char *buf, char *addr, unsigned long count)
+static void aligned_vread_iter(struct iov_iter *iter,
+			       char *addr, size_t count)
 {
-	struct page *p;
-	int copied = 0;
+	struct page *page;
 
-	while (count) {
+	while (count > 0) {
 		unsigned long offset, length;
+		size_t copied = 0;
 
 		offset = offset_in_page(addr);
 		length = PAGE_SIZE - offset;
 		if (length > count)
 			length = count;
-		p = vmalloc_to_page(addr);
+		page = vmalloc_to_page(addr);
 		/*
 		 * To do safe access to this _mapped_ area, we need
 		 * lock. But adding lock here means that we need to add
@@ -3467,23 +3466,24 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
 		 * interface, rarely used. Instead of that, we'll use
 		 * kmap() and get small overhead in this access function.
 		 */
-		if (p) {
+		if (page) {
 			/* We can expect USER0 is not used -- see vread() */
-			void *map = kmap_atomic(p);
-			memcpy(buf, map + offset, length);
-			kunmap_atomic(map);
-		} else
-			memset(buf, 0, length);
+			void *map = kmap_local_page(page);
+
+			copied = copy_to_iter(map + offset, length, iter);
+			kunmap_local(map);
+		}
+
+		if (copied < length)
+			iov_iter_zero(length - copied, iter);
 
 		addr += length;
-		buf += length;
-		copied += length;
 		count -= length;
 	}
-	return copied;
 }
 
-static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags)
+static void vmap_ram_vread_iter(struct iov_iter *iter, char *addr, int count,
+				unsigned long flags)
 {
 	char *start;
 	struct vmap_block *vb;
@@ -3496,7 +3496,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 	 * handle it here.
 	 */
 	if (!(flags & VMAP_BLOCK)) {
-		aligned_vread(buf, addr, count);
+		aligned_vread_iter(iter, addr, count);
 		return;
 	}
 
@@ -3517,22 +3517,24 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 		if (!count)
 			break;
 		start = vmap_block_vaddr(vb->va->va_start, rs);
-		while (addr < start) {
+
+		if (addr < start) {
+			size_t to_zero = min_t(size_t, start - addr, count);
+
+			iov_iter_zero(to_zero, iter);
+			addr += to_zero;
+			count -= (int)to_zero;
+
 			if (count == 0)
 				goto unlock;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
 		}
+
 		/*it could start reading from the middle of used region*/
 		offset = offset_in_page(addr);
 		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
 		if (n > count)
 			n = count;
-		aligned_vread(buf, start+offset, n);
+		aligned_vread_iter(iter, start + offset, n);
 
-		buf += n;
 		addr += n;
 		count -= n;
 	}
@@ -3541,15 +3543,15 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 finished:
 	/* zero-fill the left dirty or free regions */
-	if (count)
-		memset(buf, 0, count);
+	if (count > 0)
+		iov_iter_zero(count, iter);
 }
 
 /**
- * vread() - read vmalloc area in a safe way.
- * @buf: buffer for reading data
- * @addr: vm address.
- * @count: number of bytes to be read.
+ * vread_iter() - read vmalloc area in a safe way to an iterator.
+ * @iter: the iterator to which data should be written.
+ * @addr: vm address.
+ * @count: number of bytes to be read.
  *
  * This function checks that addr is a valid vmalloc'ed area, and
 * copy data from that area to a given buffer. If the given memory range
@@ -3569,13 +3571,13 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 * (same number as @count) or %0 if [addr...addr+count) doesn't
 * include any intersection with valid vmalloc area
 */
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(struct iov_iter *iter, char *addr, size_t count)
 {
 	struct vmap_area *va;
 	struct vm_struct *vm;
-	char *vaddr, *buf_start = buf;
-	unsigned long buflen = count;
-	unsigned long n, size, flags;
+	char *vaddr;
+	size_t buflen = count;
+	size_t n, size, flags;
 
 	might_sleep();
 
@@ -3595,7 +3597,7 @@ long vread(char *buf, char *addr, unsigned long count)
 		goto finished;
 
 	list_for_each_entry_from(va, &vmap_area_list, list) {
-		if (!count)
+		if (count == 0)
 			break;
 
 		vm = va->vm;
@@ -3619,36 +3621,39 @@ long vread(char *buf, char *addr, unsigned long count)
 		if (addr >= vaddr + size)
 			continue;
-		while (addr < vaddr) {
+
+		if (addr < vaddr) {
+			size_t to_zero = min_t(size_t, vaddr - addr, count);
+
+			iov_iter_zero(to_zero, iter);
+			addr += to_zero;
+			count -= to_zero;
+
 			if (count == 0)
 				goto finished;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
 		}
+
 		n = vaddr + size - addr;
 		if (n > count)
 			n = count;
 		if (flags & VMAP_RAM)
-			vmap_ram_vread(buf, addr, n, flags);
+			vmap_ram_vread_iter(iter, addr, n, flags);
 		else if (!(vm->flags & VM_IOREMAP))
-			aligned_vread(buf, addr, n);
+			aligned_vread_iter(iter, addr, n);
 		else /* IOREMAP area is treated as memory hole */
-			memset(buf, 0, n);
-		buf += n;
+			iov_iter_zero(n, iter);
+
 		addr += n;
 		count -= n;
 	}
 finished:
 	up_read(&vmap_area_lock);
 
-	if (buf == buf_start)
+	if (count == buflen)
 		return 0;
 
 	/* zero-fill memory holes */
-	if (buf != buf_start + buflen)
-		memset(buf, 0, buflen - (buf - buf_start));
+	if (count > 0)
+		iov_iter_zero(count, iter);
 
 	return buflen;
 }
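[Editor's note: a recurring change in the diff above is replacing the byte-at-a-time hole zeroing (*buf = '\0'; buf++; addr++; count--;) with a single iov_iter_zero() over a span clamped by min_t(size_t, vaddr - addr, count). The standalone sketch below models only that clamping arithmetic; the function name and the use of plain size_t "addresses" are illustrative, not kernel code.]

```c
#include <stddef.h>

/*
 * How many bytes of the hole before a region starting at "vaddr" should
 * be zero-filled, given "count" bytes remain in the destination? The gap
 * is clamped to count so we never zero past the caller's buffer, which
 * is exactly what min_t(size_t, vaddr - addr, count) computes above.
 */
static size_t gap_to_zero(size_t addr, size_t vaddr, size_t count)
{
	size_t gap = vaddr > addr ? vaddr - addr : 0;

	return gap < count ? gap : count;
}
```

After zeroing, the caller advances addr and decrements count by the returned span, and bails out if count hits zero mid-gap, matching the converted loops in vread_iter() and vmap_ram_vread_iter().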