From patchwork Mon Mar 20 23:42:42 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13182032
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro
Subject: [PATCH v3 1/4] fs/proc/kcore: avoid bounce buffer for ktext data
Date: Mon, 20 Mar 2023 23:42:42 +0000
Message-Id: <08f9787b1fd0d552b65c62547f5382d5a5c7dbe4.1679355227.git.lstoakes@gmail.com>

Commit df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data")
introduced the use of a bounce buffer to retrieve kernel text data for
/proc/kcore in order to avoid failures arising from hardened user copies
enabled by CONFIG_HARDENED_USERCOPY in check_kernel_text_object().
We can avoid doing this if instead of copy_to_user() we use
_copy_to_user(), which bypasses the hardening check. This is more
efficient than using a bounce buffer and simplifies the code.

We do so as part of an overall effort to eliminate bounce buffer usage
in the function, with an eye to converting it to an iterator read.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: David Hildenbrand
---
 fs/proc/kcore.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 71157ee35c1a..556f310d6aa4 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -541,19 +541,12 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMEMMAP:
 		case KCORE_TEXT:
 			/*
-			 * Using bounce buffer to bypass the
-			 * hardened user copy kernel text checks.
+			 * We use _copy_to_user() to bypass usermode hardening
+			 * which would otherwise prevent this operation.
 			 */
-			if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
-				if (clear_user(buffer, tsz)) {
-					ret = -EFAULT;
-					goto out;
-				}
-			} else {
-				if (copy_to_user(buffer, buf, tsz)) {
-					ret = -EFAULT;
-					goto out;
-				}
+			if (_copy_to_user(buffer, (char *)start, tsz)) {
+				ret = -EFAULT;
+				goto out;
 			}
 			break;
 		default:

From patchwork Mon Mar 20 23:42:43 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13182034
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro
Subject: [PATCH v3 2/4] fs/proc/kcore: convert read_kcore() to read_kcore_iter()
Date: Mon, 20 Mar 2023 23:42:43 +0000

Now that we have eliminated spinlocks from the vread() case, convert
read_kcore() to read_kcore_iter().

For the time being we still use a bounce buffer for vread(); however, in
the next patch we will convert this to interact directly with the
iterator and eliminate the bounce buffer altogether.
Signed-off-by: Lorenzo Stoakes
---
 fs/proc/kcore.c | 58 ++++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 556f310d6aa4..25e0eeb8d498 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -24,7 +24,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -308,9 +308,12 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 }
 
 static ssize_t
-read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
+read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
+	struct file *file = iocb->ki_filp;
 	char *buf = file->private_data;
+	loff_t *ppos = &iocb->ki_pos;
+
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -318,6 +321,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	size_t tsz;
 	int nphdr;
 	unsigned long start;
+	size_t buflen = iov_iter_count(iter);
 	size_t orig_buflen = buflen;
 	int ret = 0;
 
@@ -333,7 +337,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	notes_offset = phdrs_offset + phdrs_len;
 
 	/* ELF file header. */
-	if (buflen && *fpos < sizeof(struct elfhdr)) {
+	if (buflen && *ppos < sizeof(struct elfhdr)) {
 		struct elfhdr ehdr = {
 			.e_ident = {
 				[EI_MAG0] = ELFMAG0,
@@ -355,19 +359,18 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			.e_phnum = nphdr,
 		};
 
-		tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos);
-		if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) {
+		tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *ppos);
+		if (copy_to_iter((char *)&ehdr + *ppos, tsz, iter) != tsz) {
 			ret = -EFAULT;
 			goto out;
 		}
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/* ELF program headers. */
-	if (buflen && *fpos < phdrs_offset + phdrs_len) {
+	if (buflen && *ppos < phdrs_offset + phdrs_len) {
 		struct elf_phdr *phdrs, *phdr;
 
 		phdrs = kzalloc(phdrs_len, GFP_KERNEL);
@@ -397,22 +400,21 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			phdr++;
 		}
 
-		tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos);
-		if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset,
-				 tsz)) {
+		tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *ppos);
+		if (copy_to_iter((char *)phdrs + *ppos - phdrs_offset, tsz,
+				 iter) != tsz) {
 			kfree(phdrs);
 			ret = -EFAULT;
 			goto out;
 		}
 		kfree(phdrs);
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/* ELF note segment. */
-	if (buflen && *fpos < notes_offset + notes_len) {
+	if (buflen && *ppos < notes_offset + notes_len) {
 		struct elf_prstatus prstatus = {};
 		struct elf_prpsinfo prpsinfo = {
 			.pr_sname = 'R',
@@ -447,24 +449,23 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 				  vmcoreinfo_data,
 				  min(vmcoreinfo_size, notes_len - i));
 
-		tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos);
-		if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) {
+		tsz = min_t(size_t, buflen, notes_offset + notes_len - *ppos);
+		if (copy_to_iter(notes + *ppos - notes_offset, tsz, iter) != tsz) {
 			kfree(notes);
 			ret = -EFAULT;
 			goto out;
 		}
 		kfree(notes);
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/*
 	 * Check to see if our file offset matches with any of
 	 * the addresses in the elf_phdr on our list.
 	 */
-	start = kc_offset_to_vaddr(*fpos - data_offset);
+	start = kc_offset_to_vaddr(*ppos - data_offset);
 	if ((tsz = (PAGE_SIZE - (start & ~PAGE_MASK))) > buflen)
 		tsz = buflen;
 
@@ -497,7 +498,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		}
 
 		if (!m) {
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -508,14 +509,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMALLOC:
 			vread(buf, (char *)start, tsz);
 			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_user(buffer, buf, tsz)) {
+			if (copy_to_iter(buf, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		case KCORE_USER:
 			/* User page is handled prior to normal kernel page: */
-			if (copy_to_user(buffer, (char *)start, tsz)) {
+			if (copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -531,7 +532,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			 */
 			if (!page || PageOffline(page) ||
 			    is_page_hwpoison(page) || !pfn_is_ram(pfn)) {
-				if (clear_user(buffer, tsz)) {
+				if (iov_iter_zero(tsz, iter) != tsz) {
 					ret = -EFAULT;
 					goto out;
 				}
@@ -541,25 +542,24 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMEMMAP:
 		case KCORE_TEXT:
 			/*
-			 * We use _copy_to_user() to bypass usermode hardening
+			 * We use _copy_to_iter() to bypass usermode hardening
 			 * which would otherwise prevent this operation.
 			 */
-			if (_copy_to_user(buffer, (char *)start, tsz)) {
+			if (_copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		default:
 			pr_warn_once("Unhandled KCORE type: %d\n", m->type);
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 		}
 skip:
 		buflen -= tsz;
-		*fpos += tsz;
-		buffer += tsz;
+		*ppos += tsz;
 		start += tsz;
 		tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
 	}
@@ -603,7 +603,7 @@ static int release_kcore(struct inode *inode, struct file *file)
 }
 
 static const struct proc_ops kcore_proc_ops = {
-	.proc_read	= read_kcore,
+	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
 	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,

From patchwork Mon Mar 20 23:42:44 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13182035
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro
Subject: [PATCH v3 3/4] iov_iter: add copy_page_to_iter_atomic()
Date: Mon, 20 Mar 2023 23:42:44 +0000
Message-Id: <31482908634cbb68adafedb65f0b21888c194a1b.1679355227.git.lstoakes@gmail.com>

Provide an atomic context equivalent for copy_page_to_iter(). This
eschews the might_fault() check and copies memory in the same way that
copy_page_from_iter_atomic() does.

This function assumes a non-compound page; however, this mimics the
existing behaviour of copy_page_from_iter_atomic(). I am keeping the
behaviour consistent between the two, deferring any such change to an
explicit folio-fication effort.

This is being added so that an iterable form of vread() can be
implemented with known prefaulted pages, avoiding the need for mutex
locking.
Signed-off-by: Lorenzo Stoakes
---
 include/linux/uio.h |  2 ++
 lib/iov_iter.c      | 28 ++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 27e3fd942960..fab07103090f 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -154,6 +154,8 @@ static inline struct iovec iov_iter_iovec(const struct iov_iter *iter)
 
 size_t copy_page_from_iter_atomic(struct page *page, unsigned offset,
 				  size_t bytes, struct iov_iter *i);
+size_t copy_page_to_iter_atomic(struct page *page, unsigned offset,
+				size_t bytes, struct iov_iter *i);
 void iov_iter_advance(struct iov_iter *i, size_t bytes);
 void iov_iter_revert(struct iov_iter *i, size_t bytes);
 size_t fault_in_iov_iter_readable(const struct iov_iter *i, size_t bytes);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 274014e4eafe..48ca1c5dfc04 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -821,6 +821,34 @@ size_t copy_page_from_iter_atomic(struct page *page, unsigned offset, size_t byt
 }
 EXPORT_SYMBOL(copy_page_from_iter_atomic);
 
+size_t copy_page_to_iter_atomic(struct page *page, unsigned offset, size_t bytes,
+				struct iov_iter *i)
+{
+	char *kaddr = kmap_local_page(page);
+	char *p = kaddr + offset;
+	size_t copied = 0;
+
+	if (!page_copy_sane(page, offset, bytes) ||
+	    WARN_ON_ONCE(i->data_source))
+		goto out;
+
+	if (unlikely(iov_iter_is_pipe(i))) {
+		copied = copy_page_to_iter_pipe(page, offset, bytes, i);
+		goto out;
+	}
+
+	iterate_and_advance(i, bytes, base, len, off,
+		copyout(base, p + off, len),
+		memcpy(base, p + off, len)
+	)
+	copied = bytes;
+
+out:
+	kunmap_local(kaddr);
+	return copied;
+}
+EXPORT_SYMBOL(copy_page_to_iter_atomic);
+
 static void pipe_advance(struct iov_iter *i, size_t size)
 {
 	struct pipe_inode_info *pipe = i->pipe;

From patchwork Mon Mar 20 23:42:45 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13182036
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro, Lorenzo Stoakes
Subject: [PATCH v3 4/4] mm: vmalloc: convert vread() to vread_iter()
Date: Mon, 20 Mar 2023 23:42:45 +0000
Message-Id: <6b3899bbbf1f4bd6b7133c8b6f27b3a8791607b0.1679355227.git.lstoakes@gmail.com>
Having previously laid the foundation for converting vread() to an iterator function, pull the trigger and do so.

This patch attempts to provide minimal refactoring and to reflect the existing logic as best we can; for example, we continue to zero portions of memory not read, as before.
Overall, there should be no functional difference other than a performance improvement in /proc/kcore access to vmalloc regions.

Now that we have eliminated the need for a bounce buffer in read_kcore_iter(), we dispense with it. We need to ensure userland pages are faulted in before proceeding, as we take spin locks.

Additionally, we must account for the fact that a copy may fail at any point; if this happens, we exit, indicating fewer bytes retrieved than expected.

Signed-off-by: Lorenzo Stoakes
---
 fs/proc/kcore.c         |  26 ++---
 include/linux/vmalloc.h |   3 +-
 mm/nommu.c              |  10 +-
 mm/vmalloc.c            | 234 +++++++++++++++++++++++++---------------
 4 files changed, 160 insertions(+), 113 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 25e0eeb8d498..221e16f75ba5 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -307,13 +307,9 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 	*i = ALIGN(*i + descsz, 4);
 }
 
-static ssize_t
-read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
-	struct file *file = iocb->ki_filp;
-	char *buf = file->private_data;
 	loff_t *ppos = &iocb->ki_pos;
-
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -507,9 +503,12 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 
 		switch (m->type) {
 		case KCORE_VMALLOC:
-			vread(buf, (char *)start, tsz);
-			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_iter(buf, tsz, iter) != tsz) {
+			/*
+			 * Make sure user pages are faulted in as we acquire
+			 * spinlocks in vread_iter().
+			 */
+			if (fault_in_iov_iter_writeable(iter, tsz) ||
+			    vread_iter(iter, (char *)start, tsz) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -582,10 +581,6 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	if (ret)
 		return ret;
 
-	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!filp->private_data)
-		return -ENOMEM;
-
 	if (kcore_need_update)
 		kcore_update_ram();
 	if (i_size_read(inode) != proc_root_kcore->size) {
@@ -596,16 +591,9 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int release_kcore(struct inode *inode, struct file *file)
-{
-	kfree(file->private_data);
-	return 0;
-}
-
 static const struct proc_ops kcore_proc_ops = {
 	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
-	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,
 };
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 69250efa03d1..461aa5637f65 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -9,6 +9,7 @@
 #include <asm/page.h>		/* pgprot_t */
 #include <linux/rbtree.h>
 #include <linux/overflow.h>
+#include <linux/uio.h>
 
 #include <asm/vmalloc.h>
 
@@ -251,7 +252,7 @@ static inline void set_vm_flush_reset_perms(void *addr)
 #endif
 
 /* for /proc/kcore */
-extern long vread(char *buf, char *addr, unsigned long count);
+extern long vread_iter(struct iov_iter *iter, const char *addr, size_t count);
 
 /*
  * Internals. Don't use..
diff --git a/mm/nommu.c b/mm/nommu.c
index 57ba243c6a37..e0fcd948096e 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -36,6 +36,7 @@
 #include <linux/printk.h>
 
 #include <linux/uaccess.h>
+#include <linux/uio.h>
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
 #include <asm/mmu_context.h>
@@ -198,14 +199,13 @@ unsigned long vmalloc_to_pfn(const void *addr)
 }
 EXPORT_SYMBOL(vmalloc_to_pfn);
 
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(struct iov_iter *iter, char *addr, size_t count)
 {
 	/* Don't allow overflow */
-	if ((unsigned long) buf + count < count)
-		count = -(unsigned long) buf;
+	if ((unsigned long) addr + count < count)
+		count = -(unsigned long) addr;
 
-	memcpy(buf, addr, count);
-	return count;
+	return copy_to_iter(addr, count, iter);
 }
 
 /*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..ebfa1e9fe6f9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -37,7 +37,6 @@
 #include <linux/rbtree_augmented.h>
 #include <linux/overflow.h>
 #include <linux/pgtable.h>
-#include <linux/uaccess.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/mm.h>
 #include <asm/tlbflush.h>
@@ -3442,62 +3441,95 @@ void *vmalloc_32_user(unsigned long size)
 EXPORT_SYMBOL(vmalloc_32_user);
 
 /*
- * small helper routine , copy contents to buf from addr.
- * If the page is not present, fill zero.
+ * Atomically zero bytes in the iterator.
+ *
+ * Returns the number of zeroed bytes.
  */
+size_t zero_iter(struct iov_iter *iter, size_t count)
+{
+	size_t remains = count;
+
+	while (remains > 0) {
+		size_t num, copied;
+
+		num = remains < PAGE_SIZE ? remains : PAGE_SIZE;
+		copied = copy_page_to_iter_atomic(ZERO_PAGE(0), 0, num, iter);
+		remains -= copied;
+
+		if (copied < num)
+			break;
+	}
+
+	return count - remains;
+}
 
-static int aligned_vread(char *buf, char *addr, unsigned long count)
+/*
+ * small helper routine, copy contents to iter from addr.
+ * If the page is not present, fill zero.
+ *
+ * Returns the number of copied bytes.
+ */
+static size_t aligned_vread_iter(struct iov_iter *iter,
+				 const char *addr, size_t count)
 {
-	struct page *p;
-	int copied = 0;
+	size_t remains = count;
+	struct page *page;
 
-	while (count) {
+	while (remains > 0) {
 		unsigned long offset, length;
+		size_t copied = 0;
 
 		offset = offset_in_page(addr);
 		length = PAGE_SIZE - offset;
-		if (length > count)
-			length = count;
-		p = vmalloc_to_page(addr);
+		if (length > remains)
+			length = remains;
+		page = vmalloc_to_page(addr);
 		/*
-		 * To do safe access to this _mapped_ area, we need
-		 * lock. But adding lock here means that we need to add
-		 * overhead of vmalloc()/vfree() calls for this _debug_
-		 * interface, rarely used. Instead of that, we'll use
-		 * kmap() and get small overhead in this access function.
+		 * To do safe access to this _mapped_ area, we need lock. But
+		 * adding lock here means that we need to add overhead of
+		 * vmalloc()/vfree() calls for this _debug_ interface, rarely
+		 * used. Instead of that, we'll use a local mapping via
+		 * copy_page_to_iter_atomic() and accept a small overhead in
+		 * this access function.
 		 */
-		if (p) {
-			/* We can expect USER0 is not used -- see vread() */
-			void *map = kmap_atomic(p);
-			memcpy(buf, map + offset, length);
-			kunmap_atomic(map);
-		} else
-			memset(buf, 0, length);
+		if (page)
+			copied = copy_page_to_iter_atomic(page, offset, length,
+							  iter);
+
+		/* Zero anything we were unable to copy. */
+		copied += zero_iter(iter, length - copied);
+
+		addr += copied;
+		remains -= copied;
 
-		addr += length;
-		buf += length;
-		copied += length;
-		count -= length;
+		if (copied != length)
+			break;
 	}
-	return copied;
+
+	return count - remains;
 }
 
-static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags)
+/*
+ * Read from a vm_map_ram region of memory.
+ *
+ * Returns the number of copied bytes.
+ */
+static size_t vmap_ram_vread_iter(struct iov_iter *iter, const char *addr,
+				  size_t count, unsigned long flags)
 {
 	char *start;
 	struct vmap_block *vb;
 	unsigned long offset;
-	unsigned int rs, re, n;
+	unsigned int rs, re;
+	size_t remains, n;
 
 	/*
 	 * If it's area created by vm_map_ram() interface directly, but
 	 * not further subdividing and delegating management to vmap_block,
 	 * handle it here.
 	 */
-	if (!(flags & VMAP_BLOCK)) {
-		aligned_vread(buf, addr, count);
-		return;
-	}
+	if (!(flags & VMAP_BLOCK))
+		return aligned_vread_iter(iter, addr, count);
 
 	/*
 	 * Area is split into regions and tracked with vmap_block, read out
@@ -3505,50 +3537,65 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 	 */
 	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
 	if (!vb)
-		goto finished;
+		goto finished_zero;
 
 	spin_lock(&vb->lock);
 	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
 		spin_unlock(&vb->lock);
-		goto finished;
+		goto finished_zero;
 	}
+
+	remains = count;
 	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
-		if (!count)
-			break;
+		size_t copied;
+
+		if (remains == 0)
+			goto finished;
+
 		start = vmap_block_vaddr(vb->va->va_start, rs);
-		while (addr < start) {
-			if (count == 0)
-				goto unlock;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
+
+		if (addr < start) {
+			size_t to_zero = min_t(size_t, start - addr, remains);
+			size_t zeroed = zero_iter(iter, to_zero);
+
+			addr += zeroed;
+			remains -= zeroed;
+
+			if (remains == 0 || zeroed != to_zero)
+				goto finished;
 		}
+
 		/*it could start reading from the middle of used region*/
 		offset = offset_in_page(addr);
 		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
-		if (n > count)
-			n = count;
-		aligned_vread(buf, start+offset, n);
+		if (n > remains)
+			n = remains;
+
+		copied = aligned_vread_iter(iter, start + offset, n);
 
-		buf += n;
-		addr += n;
-		count -= n;
+		addr += copied;
+		remains -= copied;
+
+		if (copied != n)
+			goto finished;
 	}
-unlock:
+
 	spin_unlock(&vb->lock);
 
-finished:
+finished_zero:
 	/* zero-fill the left dirty or free regions */
-	if (count)
-		memset(buf, 0, count);
+	return count - remains + zero_iter(iter, remains);
+finished:
+	/* We couldn't copy/zero everything */
+	spin_unlock(&vb->lock);
+	return count - remains;
 }
 
 /**
- * vread() - read vmalloc area in a safe way.
- * @buf: buffer for reading data
- * @addr: vm address.
- * @count: number of bytes to be read.
+ * vread_iter() - read vmalloc area in a safe way to an iterator.
+ * @iter: the iterator to which data should be written.
+ * @addr: vm address.
+ * @count: number of bytes to be read.
 *
 * This function checks that addr is a valid vmalloc'ed area, and
 * copy data from that area to a given buffer. If the given memory range
@@ -3568,13 +3615,12 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 * (same number as @count) or %0 if [addr...addr+count) doesn't
 * include any intersection with valid vmalloc area
 */
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 {
 	struct vmap_area *va;
 	struct vm_struct *vm;
-	char *vaddr, *buf_start = buf;
-	unsigned long buflen = count;
-	unsigned long n, size, flags;
+	char *vaddr;
+	size_t n, size, flags, remains;
 
 	addr = kasan_reset_tag(addr);
 
@@ -3582,18 +3628,22 @@ long vread(char *buf, char *addr, unsigned long count)
 	if ((unsigned long) addr + count < count)
 		count = -(unsigned long) addr;
 
+	remains = count;
+
 	spin_lock(&vmap_area_lock);
 	va = find_vmap_area_exceed_addr((unsigned long)addr);
 	if (!va)
-		goto finished;
+		goto finished_zero;
 
 	/* no intersects with alive vmap_area */
-	if ((unsigned long)addr + count <= va->va_start)
-		goto finished;
+	if ((unsigned long)addr + remains <= va->va_start)
+		goto finished_zero;
 
 	list_for_each_entry_from(va, &vmap_area_list, list) {
-		if (!count)
-			break;
+		size_t copied;
+
+		if (remains == 0)
+			goto finished;
 
 		vm = va->vm;
 		flags = va->flags & VMAP_FLAGS_MASK;
@@ -3608,6 +3658,7 @@ long vread(char *buf, char *addr, unsigned long count)
 
 		if (vm && (vm->flags & VM_UNINITIALIZED))
 			continue;
+
 		/* Pair with smp_wmb() in clear_vm_uninitialized_flag() */
 		smp_rmb();
 
@@ -3616,38 +3667,45 @@ long vread(char *buf, char *addr, unsigned long count)
 
 		if (addr >= vaddr + size)
 			continue;
-		while (addr < vaddr) {
-			if (count == 0)
+
+		if (addr < vaddr) {
+			size_t to_zero = min_t(size_t, vaddr - addr, remains);
+			size_t zeroed = zero_iter(iter, to_zero);
+
+			addr += zeroed;
+			remains -= zeroed;
+
+			if (remains == 0 || zeroed != to_zero)
 				goto finished;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
 		}
+
 		n = vaddr + size - addr;
-		if (n > count)
-			n = count;
+		if (n > remains)
+			n = remains;
 
 		if (flags & VMAP_RAM)
-			vmap_ram_vread(buf, addr, n, flags);
+			copied = vmap_ram_vread_iter(iter, addr, n, flags);
 		else if (!(vm->flags & VM_IOREMAP))
-			aligned_vread(buf, addr, n);
+			copied = aligned_vread_iter(iter, addr, n);
 		else /* IOREMAP area is treated as memory hole */
-			memset(buf, 0, n);
-		buf += n;
-		addr += n;
-		count -= n;
+			copied = zero_iter(iter, n);
+
+		addr += copied;
+		remains -= copied;
+
+		if (copied != n)
+			goto finished;
 	}
 
-finished:
-	spin_unlock(&vmap_area_lock);
-	if (buf == buf_start)
-		return 0;
+finished_zero:
+	spin_unlock(&vmap_area_lock);
 	/* zero-fill memory holes */
-	if (buf != buf_start + buflen)
-		memset(buf, 0, buflen - (buf - buf_start));
+	return count - remains + zero_iter(iter, remains);
+finished:
+	/* Nothing remains, or we couldn't copy/zero everything. */
+	spin_unlock(&vmap_area_lock);
 
-	return buflen;
+	return count - remains;
 }
 
 /**