From patchwork Sun Mar 19 00:20:09 2023
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH 1/4] fs/proc/kcore: Avoid bounce buffer for ktext data
Date: Sun, 19 Mar 2023 00:20:09 +0000
Message-Id: <2ed992d6604965fd9eea05fed4473ddf54540989.1679183626.git.lstoakes@gmail.com>

Commit df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data")
introduced the use of a bounce buffer to retrieve kernel text data for
/proc/kcore in order to avoid failures arising from hardened user copies
enabled by CONFIG_HARDENED_USERCOPY in check_kernel_text_object().
We can avoid doing this if instead of copy_to_user() we use _copy_to_user(),
which bypasses the hardening check. This is more efficient than using a
bounce buffer and simplifies the code. We do so as part of an overall effort
to eliminate bounce buffer usage in the function, with an eye to converting
it to an iterator read.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 fs/proc/kcore.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 71157ee35c1a..556f310d6aa4 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -541,19 +541,12 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	case KCORE_VMEMMAP:
 	case KCORE_TEXT:
 		/*
-		 * Using bounce buffer to bypass the
-		 * hardened user copy kernel text checks.
+		 * We use _copy_to_user() to bypass usermode hardening
+		 * which would otherwise prevent this operation.
 		 */
-		if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
-			if (clear_user(buffer, tsz)) {
-				ret = -EFAULT;
-				goto out;
-			}
-		} else {
-			if (copy_to_user(buffer, buf, tsz)) {
-				ret = -EFAULT;
-				goto out;
-			}
+		if (_copy_to_user(buffer, (char *)start, tsz)) {
+			ret = -EFAULT;
+			goto out;
 		}
 		break;
 	default:

From patchwork Sun Mar 19 00:20:10 2023
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH 2/4] mm: vmalloc: use rwsem, mutex for vmap_area_lock and vmap_block->lock
Date: Sun, 19 Mar 2023 00:20:10 +0000
Message-Id: <6c7f1ac0aeb55faaa46a09108d3999e4595870d9.1679183626.git.lstoakes@gmail.com>

vmalloc() is, by design, not permitted to be used in atomic context and
already contains components which may sleep, so avoiding spin locks is not
a problem from the perspective of atomic context.

The global vmap_area_lock is held when the red/black tree rooted in
vmap_area_root is accessed, and thus is rather long-held and under
potentially high contention. It is likely to be under contention for reads
rather than writes, so replace it with a rwsem.

Each individual vmap_block->lock is likely to be held for less time but
under low contention, so a mutex is not an outrageous choice here.

A subset of test_vmalloc.sh performance results:

  fix_size_alloc_test             0.40%
  full_fit_alloc_test             2.08%
  long_busy_list_alloc_test       0.34%
  random_size_alloc_test         -0.25%
  random_size_align_alloc_test    0.06%
  ...
  all tests cycles                0.2%

This represents a tiny reduction in performance that sits barely above
noise. The reason for making this change is to build a basis for vread() to
be usable asynchronously, thus eliminating the need for a bounce buffer
when copying data to userland in read_kcore() and allowing that to be
converted to an iterator form.
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 mm/vmalloc.c | 77 +++++++++++++++++++++++++++-------------------------
 1 file changed, 40 insertions(+), 37 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..c24b27664a97 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -725,7 +726,7 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
 #define DEBUG_AUGMENT_LOWEST_MATCH_CHECK 0
 
-static DEFINE_SPINLOCK(vmap_area_lock);
+static DECLARE_RWSEM(vmap_area_lock);
 static DEFINE_SPINLOCK(free_vmap_area_lock);
 /* Export for kexec only */
 LIST_HEAD(vmap_area_list);
@@ -1537,9 +1538,9 @@ static void free_vmap_area(struct vmap_area *va)
 	/*
 	 * Remove from the busy tree/list.
 	 */
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	/*
 	 * Insert/Merge it back to the free tree/list.
@@ -1627,9 +1628,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	va->vm = NULL;
 	va->flags = va_flags;
 
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	BUG_ON(!IS_ALIGNED(va->va_start, align));
 	BUG_ON(va->va_start < vstart);
@@ -1854,9 +1855,9 @@ struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;
 
-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 	va = __find_vmap_area(addr, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);
 
 	return va;
 }
@@ -1865,11 +1866,11 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;
 
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	va = __find_vmap_area(addr, &vmap_area_root);
 	if (va)
 		unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	return va;
 }
@@ -1914,7 +1915,7 @@ struct vmap_block_queue {
 };
 
 struct vmap_block {
-	spinlock_t lock;
+	struct mutex lock;
 	struct vmap_area *va;
 	unsigned long free, dirty;
 	DECLARE_BITMAP(used_map, VMAP_BBMAP_BITS);
@@ -1991,7 +1992,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	}
 
 	vaddr = vmap_block_vaddr(va->va_start, 0);
-	spin_lock_init(&vb->lock);
+	mutex_init(&vb->lock);
 	vb->va = va;
 	/* At least something should be left free */
 	BUG_ON(VMAP_BBMAP_BITS <= (1UL << order));
@@ -2026,9 +2027,9 @@ static void free_vmap_block(struct vmap_block *vb)
 	tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
 	BUG_ON(tmp != vb);
 
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	unlink_va(vb->va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	free_vmap_area_noflush(vb->va);
 	kfree_rcu(vb, rcu_head);
@@ -2047,7 +2048,7 @@ static void purge_fragmented_blocks(int cpu)
 		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
 			continue;
 
-		spin_lock(&vb->lock);
+		mutex_lock(&vb->lock);
 		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
 			vb->free = 0; /* prevent further allocs after releasing lock */
 			vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
@@ -2056,10 +2057,10 @@ static void purge_fragmented_blocks(int cpu)
 			spin_lock(&vbq->lock);
 			list_del_rcu(&vb->free_list);
 			spin_unlock(&vbq->lock);
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 			list_add_tail(&vb->purge, &purge);
 		} else
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 	}
 	rcu_read_unlock();
@@ -2101,9 +2102,9 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;
 
-		spin_lock(&vb->lock);
+		mutex_lock(&vb->lock);
 		if (vb->free < (1UL << order)) {
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 			continue;
 		}
@@ -2117,7 +2118,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 			spin_unlock(&vbq->lock);
 		}
 
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		break;
 	}
@@ -2144,16 +2145,16 @@ static void vb_free(unsigned long addr, unsigned long size)
 	order = get_order(size);
 	offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
 	vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
 
-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 	bitmap_clear(vb->used_map, offset, (1UL << order));
-	spin_unlock(&vb->lock);
+	mutex_unlock(&vb->lock);
 
 	vunmap_range_noflush(addr, addr + size);
 
 	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range(addr, addr + size);
 
-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 
 	/* Expand dirty range */
 	vb->dirty_min = min(vb->dirty_min, offset);
@@ -2162,10 +2163,10 @@ static void vb_free(unsigned long addr, unsigned long size)
 	vb->dirty += 1UL << order;
 	if (vb->dirty == VMAP_BBMAP_BITS) {
 		BUG_ON(vb->free);
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		free_vmap_block(vb);
 	} else
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 }
 
 static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
@@ -2183,7 +2184,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 		rcu_read_lock();
 		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-			spin_lock(&vb->lock);
+			mutex_lock(&vb->lock);
 			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
@@ -2196,7 +2197,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 				flush = 1;
 			}
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 		}
 		rcu_read_unlock();
 	}
@@ -2451,9 +2452,9 @@ static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
 			     unsigned long flags, const void *caller)
 {
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	setup_vmalloc_vm_locked(vm, va, flags, caller);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 }
 
 static void clear_vm_uninitialized_flag(struct vm_struct *vm)
@@ -3507,9 +3508,9 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 	if (!vb)
 		goto finished;
 
-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		goto finished;
 	}
 	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
@@ -3536,7 +3537,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 		count -= n;
 	}
 unlock:
-	spin_unlock(&vb->lock);
+	mutex_unlock(&vb->lock);
 
 finished:
 	/* zero-fill the left dirty or free regions */
@@ -3576,13 +3577,15 @@ long vread(char *buf, char *addr, unsigned long count)
 	unsigned long buflen = count;
 	unsigned long n, size, flags;
 
+	might_sleep();
+
 	addr = kasan_reset_tag(addr);
 
 	/* Don't allow overflow */
 	if ((unsigned long) addr + count < count)
 		count = -(unsigned long) addr;
 
-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 	va = find_vmap_area_exceed_addr((unsigned long)addr);
 	if (!va)
 		goto finished;
@@ -3639,7 +3642,7 @@ long vread(char *buf, char *addr, unsigned long count)
 		count -= n;
 	}
 finished:
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);
 
 	if (buf == buf_start)
 		return 0;
@@ -3980,14 +3983,14 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}
 
 	/* insert all vm's */
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	for (area = 0; area < nr_vms; area++) {
 		insert_vmap_area(vas[area], &vmap_area_root, &vmap_area_list);
 
 		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
 					pcpu_get_vm_areas);
 	}
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	/*
 	 * Mark allocated areas as accessible. Do it now as a best-effort
@@ -4114,7 +4117,7 @@ static void *s_start(struct seq_file *m, loff_t *pos)
 	__acquires(&vmap_area_lock)
 {
 	mutex_lock(&vmap_purge_lock);
-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 
 	return seq_list_start(&vmap_area_list, *pos);
 }
@@ -4128,7 +4131,7 @@ static void s_stop(struct seq_file *m, void *p)
 	__releases(&vmap_area_lock)
 	__releases(&vmap_purge_lock)
 {
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);
 	mutex_unlock(&vmap_purge_lock);
 }

From patchwork Sun Mar 19 00:20:11 2023
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH 3/4] fs/proc/kcore: convert read_kcore() to read_kcore_iter()
Date: Sun, 19 Mar 2023 00:20:11 +0000
Message-Id: <32f8fad50500d0cd0927a66638c5890533725d30.1679183626.git.lstoakes@gmail.com>
Now that we have eliminated spinlocks from the vread() case, convert
read_kcore() to read_kcore_iter().

For the time being we still use a bounce buffer for vread(); in the next
patch we will convert this to interact directly with the iterator and
eliminate the bounce buffer altogether.

Signed-off-by: Lorenzo Stoakes
---
 fs/proc/kcore.c | 58 ++++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 556f310d6aa4..25e0eeb8d498 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -24,7 +24,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -308,9 +308,12 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 }
 
 static ssize_t
-read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
+read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
+	struct file *file = iocb->ki_filp;
 	char *buf = file->private_data;
+	loff_t *ppos = &iocb->ki_pos;
+
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -318,6 +321,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	size_t tsz;
 	int nphdr;
 	unsigned long start;
+	size_t buflen = iov_iter_count(iter);
 	size_t orig_buflen = buflen;
 	int ret = 0;
 
@@ -333,7 +337,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	notes_offset = phdrs_offset + phdrs_len;
 
 	/* ELF file header. */
-	if (buflen && *fpos < sizeof(struct elfhdr)) {
+	if (buflen && *ppos < sizeof(struct elfhdr)) {
 		struct elfhdr ehdr = {
 			.e_ident = {
 				[EI_MAG0] = ELFMAG0,
@@ -355,19 +359,18 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			.e_phnum = nphdr,
 		};
 
-		tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos);
-		if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) {
+		tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *ppos);
+		if (copy_to_iter((char *)&ehdr + *ppos, tsz, iter) != tsz) {
 			ret = -EFAULT;
 			goto out;
 		}
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/* ELF program headers. */
-	if (buflen && *fpos < phdrs_offset + phdrs_len) {
+	if (buflen && *ppos < phdrs_offset + phdrs_len) {
 		struct elf_phdr *phdrs, *phdr;
 
 		phdrs = kzalloc(phdrs_len, GFP_KERNEL);
@@ -397,22 +400,21 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			phdr++;
 		}
 
-		tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos);
-		if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset,
-				 tsz)) {
+		tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *ppos);
+		if (copy_to_iter((char *)phdrs + *ppos - phdrs_offset, tsz,
+				 iter) != tsz) {
 			kfree(phdrs);
 			ret = -EFAULT;
 			goto out;
 		}
 		kfree(phdrs);
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/* ELF note segment. */
-	if (buflen && *fpos < notes_offset + notes_len) {
+	if (buflen && *ppos < notes_offset + notes_len) {
 		struct elf_prstatus prstatus = {};
 		struct elf_prpsinfo prpsinfo = {
 			.pr_sname = 'R',
@@ -447,24 +449,23 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 				  vmcoreinfo_data,
 				  min(vmcoreinfo_size, notes_len - i));
 
-		tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos);
-		if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) {
+		tsz = min_t(size_t, buflen, notes_offset + notes_len - *ppos);
+		if (copy_to_iter(notes + *ppos - notes_offset, tsz, iter) != tsz) {
 			kfree(notes);
 			ret = -EFAULT;
 			goto out;
 		}
 		kfree(notes);
 
-		buffer += tsz;
 		buflen -= tsz;
-		*fpos += tsz;
+		*ppos += tsz;
 	}
 
 	/*
 	 * Check to see if our file offset matches with any of
 	 * the addresses in the elf_phdr on our list.
 	 */
-	start = kc_offset_to_vaddr(*fpos - data_offset);
+	start = kc_offset_to_vaddr(*ppos - data_offset);
 	if ((tsz = (PAGE_SIZE - (start & ~PAGE_MASK))) > buflen)
 		tsz = buflen;
 
@@ -497,7 +498,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		}
 
 		if (!m) {
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -508,14 +509,14 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMALLOC:
 			vread(buf, (char *)start, tsz);
 			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_user(buffer, buf, tsz)) {
+			if (copy_to_iter(buf, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		case KCORE_USER:
 			/* User page is handled prior to normal kernel page: */
-			if (copy_to_user(buffer, (char *)start, tsz)) {
+			if (copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -531,7 +532,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 			 */
 			if (!page || PageOffline(page) ||
 			    is_page_hwpoison(page) || !pfn_is_ram(pfn)) {
-				if (clear_user(buffer, tsz)) {
+				if (iov_iter_zero(tsz, iter) != tsz) {
 					ret = -EFAULT;
 					goto out;
 				}
@@ -541,25 +542,24 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		case KCORE_VMEMMAP:
 		case KCORE_TEXT:
 			/*
-			 * We use _copy_to_user() to bypass usermode hardening
+			 * We use _copy_to_iter() to bypass usermode hardening
 			 * which would otherwise prevent this operation.
 			 */
-			if (_copy_to_user(buffer, (char *)start, tsz)) {
+			if (_copy_to_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 			break;
 		default:
 			pr_warn_once("Unhandled KCORE type: %d\n", m->type);
-			if (clear_user(buffer, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
 		}
 skip:
 		buflen -= tsz;
-		*fpos += tsz;
-		buffer += tsz;
+		*ppos += tsz;
 		start += tsz;
 		tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
 	}
@@ -603,7 +603,7 @@ static int release_kcore(struct inode *inode, struct file *file)
 }
 
 static const struct proc_ops kcore_proc_ops = {
-	.proc_read	= read_kcore,
+	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
 	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,

From patchwork Sun Mar 19 00:20:12 2023
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH 4/4] mm: vmalloc: convert vread() to vread_iter()
Date: Sun, 19 Mar 2023 00:20:12 +0000
Message-Id: <119871ea9507eac7be5d91db38acdb03981e049e.1679183626.git.lstoakes@gmail.com>

Having previously laid the foundation for converting vread() to an iterator
function, pull the trigger and do so.

This patch attempts to provide minimal refactoring and to reflect the
existing logic as best we can, with the exception of aligned_vread_iter(),
which drops the use of the deprecated kmap_atomic() in favour of
kmap_local_page().

All existing logic to zero portions of memory not read remains, and there
should be no functional difference other than a performance improvement in
/proc/kcore access to vmalloc regions.

Now that we have done away with the need for a bounce buffer in
read_kcore_iter(), we dispense with the one allocated there altogether.
Signed-off-by: Lorenzo Stoakes
---
 fs/proc/kcore.c         |  21 +--------
 include/linux/vmalloc.h |   3 +-
 mm/vmalloc.c            | 101 +++++++++++++++++++++-------------------
 3 files changed, 57 insertions(+), 68 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 25e0eeb8d498..8a07f04c9203 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -307,13 +307,9 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 	*i = ALIGN(*i + descsz, 4);
 }
 
-static ssize_t
-read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
-	struct file *file = iocb->ki_filp;
-	char *buf = file->private_data;
 	loff_t *ppos = &iocb->ki_pos;
-
 	size_t phdrs_offset, notes_offset, data_offset;
@@ -507,9 +503,7 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 
 		switch (m->type) {
 		case KCORE_VMALLOC:
-			vread(buf, (char *)start, tsz);
-			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_iter(buf, tsz, iter) != tsz) {
+			if (vread_iter((char *)start, tsz, iter) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -582,10 +576,6 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	if (ret)
 		return ret;
 
-	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!filp->private_data)
-		return -ENOMEM;
-
 	if (kcore_need_update)
 		kcore_update_ram();
 	if (i_size_read(inode) != proc_root_kcore->size) {
@@ -596,16 +586,9 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int release_kcore(struct inode *inode, struct file *file)
-{
-	kfree(file->private_data);
-	return 0;
-}
-
 static const struct proc_ops kcore_proc_ops = {
 	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
-	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,
 };
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 69250efa03d1..f70ebdf21f22 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -9,6 +9,7 @@
 #include	/* pgprot_t */
 #include
 #include
+#include
 
 #include
@@ -251,7 +252,7 @@ static inline void set_vm_flush_reset_perms(void *addr)
 #endif
 
 /* for /proc/kcore */
-extern long vread(char *buf, char *addr, unsigned long count);
+extern long vread_iter(char *addr, size_t count, struct iov_iter *iter);
 
 /*
  * Internals.  Don't use..
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c24b27664a97..3a32754266dc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -37,7 +37,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -3446,20 +3445,20 @@ EXPORT_SYMBOL(vmalloc_32_user);
  * small helper routine , copy contents to buf from addr.
  * If the page is not present, fill zero.
  */
-
-static int aligned_vread(char *buf, char *addr, unsigned long count)
+static void aligned_vread_iter(char *addr, size_t count,
+			       struct iov_iter *iter)
 {
-	struct page *p;
-	int copied = 0;
+	struct page *page;
 
-	while (count) {
+	while (count > 0) {
 		unsigned long offset, length;
+		size_t copied = 0;
 
 		offset = offset_in_page(addr);
 		length = PAGE_SIZE - offset;
 		if (length > count)
 			length = count;
-		p = vmalloc_to_page(addr);
+		page = vmalloc_to_page(addr);
 		/*
 		 * To do safe access to this _mapped_ area, we need
 		 * lock. But adding lock here means that we need to add
@@ -3467,23 +3466,24 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
 		 * interface, rarely used. Instead of that, we'll use
 		 * kmap() and get small overhead in this access function.
 		 */
-		if (p) {
+		if (page) {
 			/* We can expect USER0 is not used -- see vread() */
-			void *map = kmap_atomic(p);
-			memcpy(buf, map + offset, length);
-			kunmap_atomic(map);
-		} else
-			memset(buf, 0, length);
+			void *map = kmap_local_page(page);
+
+			copied = copy_to_iter(map + offset, length, iter);
+			kunmap_local(map);
+		}
+
+		if (copied < length)
+			iov_iter_zero(length - copied, iter);
 
 		addr += length;
-		buf += length;
-		copied += length;
 		count -= length;
 	}
-	return copied;
 }
 
-static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags)
+static void vmap_ram_vread_iter(char *addr, int count, unsigned long flags,
+				struct iov_iter *iter)
 {
 	char *start;
 	struct vmap_block *vb;
@@ -3496,7 +3496,7 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 	 * handle it here.
 	 */
 	if (!(flags & VMAP_BLOCK)) {
-		aligned_vread(buf, addr, count);
+		aligned_vread_iter(addr, count, iter);
 		return;
 	}
 
@@ -3517,22 +3517,24 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 		if (!count)
 			break;
 		start = vmap_block_vaddr(vb->va->va_start, rs);
-		while (addr < start) {
+
+		if (addr < start) {
+			size_t to_zero = min_t(size_t, start - addr, count);
+
+			iov_iter_zero(to_zero, iter);
+			addr += to_zero;
+			count -= (int)to_zero;
 			if (count == 0)
 				goto unlock;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
 		}
+
 		/*it could start reading from the middle of used region*/
 		offset = offset_in_page(addr);
 		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
 		if (n > count)
 			n = count;
-		aligned_vread(buf, start+offset, n);
+		aligned_vread_iter(start + offset, n, iter);
 
-		buf += n;
 		addr += n;
 		count -= n;
 	}
@@ -3541,15 +3543,15 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 finished:
 	/* zero-fill the left dirty or free regions */
-	if (count)
-		memset(buf, 0, count);
+	if (count > 0)
+		iov_iter_zero(count, iter);
 }
 
 /**
- * vread() - read vmalloc area in a safe way.
- * @buf:      buffer for reading data
- * @addr:     vm address.
- * @count:    number of bytes to be read.
+ * vread_iter() - read vmalloc area in a safe way to an iterator.
+ * @addr:     vm address.
+ * @count:    number of bytes to be read.
+ * @iter:     the iterator to which data should be written.
 *
 * This function checks that addr is a valid vmalloc'ed area, and
 * copy data from that area to a given buffer. If the given memory range
@@ -3569,13 +3571,13 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 * (same number as @count) or %0 if [addr...addr+count) doesn't
 * include any intersection with valid vmalloc area
 */
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(char *addr, size_t count, struct iov_iter *iter)
 {
 	struct vmap_area *va;
 	struct vm_struct *vm;
-	char *vaddr, *buf_start = buf;
-	unsigned long buflen = count;
-	unsigned long n, size, flags;
+	char *vaddr;
+	size_t buflen = count;
+	size_t n, size, flags;
 
 	might_sleep();
 
@@ -3595,7 +3597,7 @@ long vread(char *buf, char *addr, unsigned long count)
 		goto finished;
 
 	list_for_each_entry_from(va, &vmap_area_list, list) {
-		if (!count)
+		if (count == 0)
 			break;
 
 		vm = va->vm;
@@ -3619,36 +3621,39 @@ long vread(char *buf, char *addr, unsigned long count)
 		if (addr >= vaddr + size)
 			continue;
-		while (addr < vaddr) {
+
+		if (addr < vaddr) {
+			size_t to_zero = min_t(size_t, vaddr - addr, count);
+
+			iov_iter_zero(to_zero, iter);
+			addr += to_zero;
+			count -= to_zero;
 			if (count == 0)
 				goto finished;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
 		}
+
 		n = vaddr + size - addr;
 		if (n > count)
 			n = count;
 		if (flags & VMAP_RAM)
-			vmap_ram_vread(buf, addr, n, flags);
+			vmap_ram_vread_iter(addr, n, flags, iter);
 		else if (!(vm->flags & VM_IOREMAP))
-			aligned_vread(buf, addr, n);
+			aligned_vread_iter(addr, n, iter);
 		else /* IOREMAP area is treated as memory hole */
-			memset(buf, 0, n);
-		buf += n;
+			iov_iter_zero(n, iter);
+
 		addr += n;
 		count -= n;
 	}
 finished:
 	up_read(&vmap_area_lock);
 
-	if (buf == buf_start)
+	if (count == buflen)
 		return 0;
 
 	/* zero-fill memory holes */
-	if (buf != buf_start + buflen)
-		memset(buf, 0, buflen - (buf - buf_start));
+	if (count > 0)
+		iov_iter_zero(count, iter);
 
 	return buflen;
 }