From patchwork Tue Sep 17 07:31:11 2024
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 13805941
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, David Hildenbrand, Ryan Roberts, "Mike Rapoport (IBM)", Arnd Bergmann, x86@kernel.org, linux-m68k@lists.linux-m68k.org, linux-fsdevel@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Geert Uytterhoeven, Guo Ren
Subject: [PATCH V2 1/7] m68k/mm: Change pmd_val()
Date: Tue, 17 Sep 2024 13:01:11 +0530
Message-Id: <20240917073117.1531207-2-anshuman.khandual@arm.com>
In-Reply-To: <20240917073117.1531207-1-anshuman.khandual@arm.com>
References: <20240917073117.1531207-1-anshuman.khandual@arm.com>

This changes the platform's pmd_val() to access the pmd_t element directly, as other architectures do, rather than dereferencing it through its pointer address, which prevents the transition to pmdp_get().
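
A minimal sketch of why the old definition gets in the way of pmdp_get() (these are simplified stand-ins, not the real kernel definitions; the _old/_new suffixes are only for illustration):

	/* Simplified stand-ins, not the actual kernel definitions. */
	typedef struct { unsigned long pmd; } pmd_t;

	#define pmdp_get(pmdp)   READ_ONCE(*(pmdp))   /* yields an rvalue pmd_t */

	#define pmd_val_old(x)   ((&x)->pmd)           /* requires x to be an lvalue */
	#define pmd_val_new(x)   ((x).pmd)             /* works on any pmd_t value */

	/*
	 * pmd_val_old(pmdp_get(pmdp)) expands to (&READ_ONCE(*pmdp))->pmd and
	 * fails with "error: lvalue required as unary '&' operand", whereas
	 * pmd_val_new(pmdp_get(pmdp)) compiles fine.
	 */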
Cc: Geert Uytterhoeven
Cc: Guo Ren
Cc: Arnd Bergmann
Cc: linux-m68k@lists.linux-m68k.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
Reviewed-by: Ryan Roberts
Acked-by: David Hildenbrand
---
 arch/m68k/include/asm/page.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/m68k/include/asm/page.h b/arch/m68k/include/asm/page.h
index 8cfb84b49975..be3f2c2a656c 100644
--- a/arch/m68k/include/asm/page.h
+++ b/arch/m68k/include/asm/page.h
@@ -19,7 +19,7 @@
  */
 #if !defined(CONFIG_MMU) || CONFIG_PGTABLE_LEVELS == 3
 typedef struct { unsigned long pmd; } pmd_t;
-#define pmd_val(x)	((&x)->pmd)
+#define pmd_val(x)	((x).pmd)
 #define __pmd(x)	((pmd_t) { (x) } )
 #endif

From patchwork Tue Sep 17 07:31:12 2024
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 13805942
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, David Hildenbrand, Ryan Roberts, "Mike Rapoport (IBM)", Arnd Bergmann, x86@kernel.org, linux-m68k@lists.linux-m68k.org, linux-fsdevel@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
Subject: [PATCH V2 2/7] x86/mm: Drop page table entry address output from pxd_ERROR()
Date: Tue, 17 Sep 2024 13:01:12 +0530
Message-Id: <20240917073117.1531207-3-anshuman.khandual@arm.com>
In-Reply-To: <20240917073117.1531207-1-anshuman.khandual@arm.com>
References: <20240917073117.1531207-1-anshuman.khandual@arm.com>

This drops the page table entry address output from all pxd_ERROR() definitions, which now match the other architectures. It also prevents build issues while transitioning to pxdp_get()-based page table entry accesses: with the changed macros, pxd_ERROR() would end up doing &pxdp_get(pxd), which does not make sense and generates an "error: lvalue required as unary '&' operand" failure.
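
A rough before/after illustration of the changed report format (the file name, line number and entry value below are made up; the old format additionally printed the address of the entry via %p):

	old: mm/foo.c:123: bad pmd <entry address>(8000000000000067)
	new: mm/foo.c:123: bad pmd (8000000000000067)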
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
Acked-by: David Hildenbrand
---
 arch/x86/include/asm/pgtable-3level.h | 12 ++++++------
 arch/x86/include/asm/pgtable_64.h     | 20 ++++++++++----------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
index dabafba957ea..e1fa4dd87753 100644
--- a/arch/x86/include/asm/pgtable-3level.h
+++ b/arch/x86/include/asm/pgtable-3level.h
@@ -10,14 +10,14 @@
  */
 #define pte_ERROR(e) \
-	pr_err("%s:%d: bad pte %p(%08lx%08lx)\n", \
-	       __FILE__, __LINE__, &(e), (e).pte_high, (e).pte_low)
+	pr_err("%s:%d: bad pte (%08lx%08lx)\n", \
+	       __FILE__, __LINE__, (e).pte_high, (e).pte_low)
 #define pmd_ERROR(e) \
-	pr_err("%s:%d: bad pmd %p(%016Lx)\n", \
-	       __FILE__, __LINE__, &(e), pmd_val(e))
+	pr_err("%s:%d: bad pmd (%016Lx)\n", \
+	       __FILE__, __LINE__, pmd_val(e))
 #define pgd_ERROR(e) \
-	pr_err("%s:%d: bad pgd %p(%016Lx)\n", \
-	       __FILE__, __LINE__, &(e), pgd_val(e))
+	pr_err("%s:%d: bad pgd (%016Lx)\n", \
+	       __FILE__, __LINE__, pgd_val(e))

 #define pxx_xchg64(_pxx, _ptr, _val) ({ \
 	_pxx##val_t *_p = (_pxx##val_t *)_ptr; \
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 3c4407271d08..4e462c825cab 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -32,24 +32,24 @@ extern void paging_init(void);
 static inline void sync_initial_page_table(void) { }

 #define pte_ERROR(e) \
-	pr_err("%s:%d: bad pte %p(%016lx)\n", \
-	       __FILE__, __LINE__, &(e), pte_val(e))
+	pr_err("%s:%d: bad pte (%016lx)\n", \
+	       __FILE__, __LINE__, pte_val(e))
 #define pmd_ERROR(e) \
-	pr_err("%s:%d: bad pmd %p(%016lx)\n", \
-	       __FILE__, __LINE__, &(e), pmd_val(e))
+	pr_err("%s:%d: bad pmd (%016lx)\n", \
+	       __FILE__, __LINE__, pmd_val(e))
 #define pud_ERROR(e) \
-	pr_err("%s:%d: bad pud %p(%016lx)\n", \
-	       __FILE__, __LINE__, &(e), pud_val(e))
+	pr_err("%s:%d: bad pud (%016lx)\n", \
+	       __FILE__, __LINE__, pud_val(e))
 #if CONFIG_PGTABLE_LEVELS >= 5
 #define p4d_ERROR(e) \
-	pr_err("%s:%d: bad p4d %p(%016lx)\n", \
-	       __FILE__, __LINE__, &(e), p4d_val(e))
+	pr_err("%s:%d: bad p4d (%016lx)\n", \
+	       __FILE__, __LINE__, p4d_val(e))
 #endif
 #define pgd_ERROR(e) \
-	pr_err("%s:%d: bad pgd %p(%016lx)\n", \
-	       __FILE__, __LINE__, &(e), pgd_val(e))
+	pr_err("%s:%d: bad pgd (%016lx)\n", \
+	       __FILE__, __LINE__, pgd_val(e))

 struct mm_struct;

From patchwork Tue Sep 17 07:31:13 2024
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 13805943
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, David Hildenbrand, Ryan Roberts, "Mike Rapoport (IBM)", Arnd Bergmann, x86@kernel.org, linux-m68k@lists.linux-m68k.org, linux-fsdevel@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: [PATCH V2 3/7] mm: Use ptep_get() for accessing PTE entries
Date: Tue, 17 Sep 2024 13:01:13 +0530
Message-Id: <20240917073117.1531207-4-anshuman.khandual@arm.com>
In-Reply-To: <20240917073117.1531207-1-anshuman.khandual@arm.com>
References: <20240917073117.1531207-1-anshuman.khandual@arm.com>
Convert PTE accesses to go through the ptep_get() helper, which defaults to READ_ONCE() but also gives the platform an opportunity to override it when required. The page table entry value is read once into a local variable that can then be reused, avoiding repeated memory loads as well as possible race conditions.
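
A minimal sketch of the access pattern this converts to (the surrounding code and do_something() are made up for illustration; only ptep_get() is the helper named above):

	/*
	 * Before: each *ptep dereference is a separate load and may observe
	 * a different value if the entry changes concurrently.
	 */
	if (pte_present(*ptep) && pte_young(*ptep))
		do_something(*ptep);

	/*
	 * After: read the entry once via ptep_get() (READ_ONCE() by default)
	 * and reuse the local copy.
	 */
	pte_t pte = ptep_get(ptep);

	if (pte_present(pte) && pte_young(pte))
		do_something(pte);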
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Ryan Roberts
Cc: "Mike Rapoport (IBM)"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
Reviewed-by: Ryan Roberts
---
 include/linux/pgtable.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 2a6a3cccfc36..547eeae8c43f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1060,7 +1060,8 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b)
  */
 #define set_pte_safe(ptep, pte) \
 ({ \
-	WARN_ON_ONCE(pte_present(*ptep) && !pte_same(*ptep, pte)); \
+	pte_t __old = ptep_get(ptep); \
+	WARN_ON_ONCE(pte_present(__old) && !pte_same(__old, pte)); \
 	set_pte(ptep, pte); \
 })

From patchwork Tue Sep 17 07:31:14 2024
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 13805944
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, David Hildenbrand, Ryan Roberts, "Mike Rapoport (IBM)", Arnd Bergmann, x86@kernel.org, linux-m68k@lists.linux-m68k.org, linux-fsdevel@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Dimitri Sivanich, Muchun Song, Andrey Ryabinin, Miaohe Lin, Naoya Horiguchi, Pasha Tatashin, Dennis Zhou, Tejun Heo, Christoph Lameter, Uladzislau Rezki, Christoph Hellwig
Subject: [PATCH V2 4/7] mm: Use pmdp_get() for accessing PMD entries
Date: Tue, 17 Sep 2024 13:01:14 +0530
Message-Id: <20240917073117.1531207-5-anshuman.khandual@arm.com>
In-Reply-To: <20240917073117.1531207-1-anshuman.khandual@arm.com>
References: <20240917073117.1531207-1-anshuman.khandual@arm.com>

Convert PMD accesses to go through the pmdp_get() helper, which defaults to READ_ONCE() but also provides the platform an opportunity to override it when required.
This stores read page table entry value in a local variable which can be used in multiple instances there after. This helps in avoiding multiple memory load operations as well possible race conditions. Cc: Dimitri Sivanich Cc: Muchun Song Cc: Andrey Ryabinin Cc: Miaohe Lin Cc: Naoya Horiguchi Cc: Pasha Tatashin Cc: Dennis Zhou Cc: Tejun Heo Cc: Christoph Lameter Cc: Uladzislau Rezki Cc: Christoph Hellwig Cc: Andrew Morton Cc: David Hildenbrand Cc: Ryan Roberts Cc: "Mike Rapoport (IBM)" Cc: linux-kernel@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Cc: linux-mm@kvack.org Cc: kasan-dev@googlegroups.com Signed-off-by: Anshuman Khandual --- drivers/misc/sgi-gru/grufault.c | 7 ++-- fs/proc/task_mmu.c | 28 +++++++------- include/linux/huge_mm.h | 4 +- include/linux/mm.h | 2 +- include/linux/pgtable.h | 15 ++++---- mm/gup.c | 14 +++---- mm/huge_memory.c | 66 +++++++++++++++++---------------- mm/hugetlb_vmemmap.c | 4 +- mm/kasan/init.c | 10 ++--- mm/kasan/shadow.c | 4 +- mm/khugepaged.c | 4 +- mm/madvise.c | 6 +-- mm/memory-failure.c | 6 +-- mm/memory.c | 25 +++++++------ mm/mempolicy.c | 4 +- mm/migrate.c | 4 +- mm/migrate_device.c | 10 ++--- mm/mlock.c | 6 +-- mm/mprotect.c | 2 +- mm/mremap.c | 4 +- mm/page_table_check.c | 2 +- mm/pagewalk.c | 4 +- mm/percpu.c | 2 +- mm/pgtable-generic.c | 20 +++++----- mm/ptdump.c | 2 +- mm/rmap.c | 4 +- mm/sparse-vmemmap.c | 4 +- mm/vmalloc.c | 15 ++++---- 28 files changed, 145 insertions(+), 133 deletions(-) diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c index 3557d78ee47a..804f275ece99 100644 --- a/drivers/misc/sgi-gru/grufault.c +++ b/drivers/misc/sgi-gru/grufault.c @@ -208,7 +208,7 @@ static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr, pgd_t *pgdp; p4d_t *p4dp; pud_t *pudp; - pmd_t *pmdp; + pmd_t *pmdp, old_pmd; pte_t pte; pgdp = pgd_offset(vma->vm_mm, vaddr); @@ -224,10 +224,11 @@ static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr, goto err; pmdp = pmd_offset(pudp, vaddr); - if (unlikely(pmd_none(*pmdp))) + old_pmd = pmdp_get(pmdp); + if (unlikely(pmd_none(old_pmd))) goto err; #ifdef CONFIG_X86_64 - if (unlikely(pmd_leaf(*pmdp))) + if (unlikely(pmd_leaf(old_pmd))) pte = ptep_get((pte_t *)pmdp); else #endif diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 5f171ad7b436..f0c63884d008 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -861,12 +861,13 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, struct page *page = NULL; bool present = false; struct folio *folio; + pmd_t old_pmd = pmdp_get(pmd); - if (pmd_present(*pmd)) { - page = vm_normal_page_pmd(vma, addr, *pmd); + if (pmd_present(old_pmd)) { + page = vm_normal_page_pmd(vma, addr, old_pmd); present = true; - } else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) { - swp_entry_t entry = pmd_to_swp_entry(*pmd); + } else if (unlikely(thp_migration_supported() && is_swap_pmd(old_pmd))) { + swp_entry_t entry = pmd_to_swp_entry(old_pmd); if (is_pfn_swap_entry(entry)) page = pfn_swap_entry_to_page(entry); @@ -883,7 +884,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, else mss->file_thp += HPAGE_PMD_SIZE; - smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), + smaps_account(mss, page, true, pmd_young(old_pmd), pmd_dirty(old_pmd), locked, present); } #else @@ -1426,7 +1427,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma, static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmdp) { - pmd_t 
old, pmd = *pmdp; + pmd_t old, pmd = pmdp_get(pmdp); if (pmd_present(pmd)) { /* See comment in change_huge_pmd() */ @@ -1468,10 +1469,10 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr, goto out; } - if (!pmd_present(*pmd)) + if (!pmd_present(pmdp_get(pmd))) goto out; - folio = pmd_folio(*pmd); + folio = pmd_folio(pmdp_get(pmd)); /* Clear accessed and referenced bits. */ pmdp_test_and_clear_young(vma, addr, pmd); @@ -1769,7 +1770,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, if (ptl) { unsigned int idx = (addr & ~PMD_MASK) >> PAGE_SHIFT; u64 flags = 0, frame = 0; - pmd_t pmd = *pmdp; + pmd_t pmd = pmdp_get(pmdp); struct page *page = NULL; struct folio *folio = NULL; @@ -2189,7 +2190,7 @@ static unsigned long pagemap_thp_category(struct pagemap_scan_private *p, static void make_uffd_wp_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmdp) { - pmd_t old, pmd = *pmdp; + pmd_t old, pmd = pmdp_get(pmdp); if (pmd_present(pmd)) { old = pmdp_invalidate_ad(vma, addr, pmdp); @@ -2416,7 +2417,7 @@ static int pagemap_scan_thp_entry(pmd_t *pmd, unsigned long start, return -ENOENT; categories = p->cur_vma_category | - pagemap_thp_category(p, vma, start, *pmd); + pagemap_thp_category(p, vma, start, pmdp_get(pmd)); if (!pagemap_scan_is_interesting_page(categories, p)) goto out_unlock; @@ -2946,10 +2947,11 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr, ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { struct page *page; + pmd_t old_pmd = pmdp_get(pmd); - page = can_gather_numa_stats_pmd(*pmd, vma, addr); + page = can_gather_numa_stats_pmd(old_pmd, vma, addr); if (page) - gather_stats(page, md, pmd_dirty(*pmd), + gather_stats(page, md, pmd_dirty(old_pmd), HPAGE_PMD_SIZE/PAGE_SIZE); spin_unlock(ptl); return 0; diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index e25d9ebfdf89..38b5de040d02 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -369,7 +369,9 @@ static inline int is_swap_pmd(pmd_t pmd) static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma) { - if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) + pmd_t old_pmd = pmdp_get(pmd); + + if (is_swap_pmd(old_pmd) || pmd_trans_huge(old_pmd) || pmd_devmap(old_pmd)) return __pmd_trans_huge_lock(pmd, vma); else return NULL; diff --git a/include/linux/mm.h b/include/linux/mm.h index 147073601716..258e49323306 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2921,7 +2921,7 @@ static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc) static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) { - return ptlock_ptr(page_ptdesc(pmd_page(*pmd))); + return ptlock_ptr(page_ptdesc(pmd_page(pmdp_get(pmd)))); } static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 547eeae8c43f..ea283ce958a7 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -367,7 +367,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { - pmd_t pmd = *pmdp; + pmd_t pmd = pmdp_get(pmdp); int r = 1; if (!pmd_young(pmd)) r = 0; @@ -598,7 +598,7 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long address, pmd_t *pmdp) { - pmd_t pmd = *pmdp; + pmd_t pmd = pmdp_get(pmdp); pmd_clear(pmdp); page_table_check_pmd_clear(mm, pmd); @@ -876,7 +876,7 @@ static inline pte_t pte_sw_mkyoung(pte_t pte) static inline void 
pmdp_set_wrprotect(struct mm_struct *mm, unsigned long address, pmd_t *pmdp) { - pmd_t old_pmd = *pmdp; + pmd_t old_pmd = pmdp_get(pmdp); set_pmd_at(mm, address, pmdp, pmd_wrprotect(old_pmd)); } #else @@ -945,7 +945,7 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp); static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp, pmd_t pmd) { - pmd_t old_pmd = *pmdp; + pmd_t old_pmd = pmdp_get(pmdp); set_pmd_at(vma->vm_mm, address, pmdp, pmd); return old_pmd; } @@ -1067,7 +1067,8 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b) #define set_pmd_safe(pmdp, pmd) \ ({ \ - WARN_ON_ONCE(pmd_present(*pmdp) && !pmd_same(*pmdp, pmd)); \ + pmd_t __old = pmdp_get(pmdp); \ + WARN_ON_ONCE(pmd_present(__old) && !pmd_same(__old, pmd)); \ set_pmd(pmdp, pmd); \ }) @@ -1271,9 +1272,9 @@ static inline int pud_none_or_clear_bad(pud_t *pud) static inline int pmd_none_or_clear_bad(pmd_t *pmd) { - if (pmd_none(*pmd)) + if (pmd_none(pmdp_get(pmd))) return 1; - if (unlikely(pmd_bad(*pmd))) { + if (unlikely(pmd_bad(pmdp_get(pmd)))) { pmd_clear_bad(pmd); return 1; } diff --git a/mm/gup.c b/mm/gup.c index 54d0dc3831fb..aeeac0a54944 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -699,7 +699,7 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma, struct follow_page_context *ctx) { struct mm_struct *mm = vma->vm_mm; - pmd_t pmdval = *pmd; + pmd_t pmdval = pmdp_get(pmd); struct page *page; int ret; @@ -714,7 +714,7 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma, if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval)) return ERR_PTR(-EFAULT); - if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags)) + if (pmd_protnone(pmdp_get(pmd)) && !gup_can_follow_protnone(vma, flags)) return NULL; if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page)) @@ -957,7 +957,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, return no_page_table(vma, flags, address); ptl = pmd_lock(mm, pmd); - pmdval = *pmd; + pmdval = pmdp_get(pmd); if (unlikely(!pmd_present(pmdval))) { spin_unlock(ptl); return no_page_table(vma, flags, address); @@ -1120,7 +1120,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address, if (pud_none(*pud)) return -EFAULT; pmd = pmd_offset(pud, address); - if (!pmd_present(*pmd)) + if (!pmd_present(pmdp_get(pmd))) return -EFAULT; pte = pte_offset_map(pmd, address); if (!pte) @@ -2898,7 +2898,7 @@ static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr, if (!folio) goto pte_unmap; - if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) || + if (unlikely(pmd_val(pmd) != pmd_val(pmdp_get(pmdp))) || unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) { gup_put_folio(folio, 1, flags); goto pte_unmap; @@ -3007,7 +3007,7 @@ static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr, if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr)) return 0; - if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) { + if (unlikely(pmd_val(orig) != pmd_val(pmdp_get(pmdp)))) { gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages); return 0; } @@ -3074,7 +3074,7 @@ static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr, if (!folio) return 0; - if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) { + if (unlikely(pmd_val(orig) != pmd_val(pmdp_get(pmdp)))) { gup_put_folio(folio, refs, flags); return 0; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 67c86a5d64a6..bb63de935937 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1065,7 
+1065,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm, struct folio *zero_folio) { pmd_t entry; - if (!pmd_none(*pmd)) + if (!pmd_none(pmdp_get(pmd))) return; entry = mk_pmd(&zero_folio->page, vma->vm_page_prot); entry = pmd_mkhuge(entry); @@ -1144,17 +1144,17 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, pgtable_t pgtable) { struct mm_struct *mm = vma->vm_mm; - pmd_t entry; + pmd_t entry, old_pmd = pmdp_get(pmd); spinlock_t *ptl; ptl = pmd_lock(mm, pmd); - if (!pmd_none(*pmd)) { + if (!pmd_none(old_pmd)) { if (write) { - if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) { - WARN_ON_ONCE(!is_huge_zero_pmd(*pmd)); + if (pmd_pfn(old_pmd) != pfn_t_to_pfn(pfn)) { + WARN_ON_ONCE(!is_huge_zero_pmd(old_pmd)); goto out_unlock; } - entry = pmd_mkyoung(*pmd); + entry = pmd_mkyoung(old_pmd); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); if (pmdp_set_access_flags(vma, addr, pmd, entry, 1)) update_mmu_cache_pmd(vma, addr, pmd); @@ -1318,7 +1318,7 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr, { pmd_t _pmd; - _pmd = pmd_mkyoung(*pmd); + _pmd = pmd_mkyoung(pmdp_get(pmd)); if (write) _pmd = pmd_mkdirty(_pmd); if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK, @@ -1329,17 +1329,18 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr, struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd, int flags, struct dev_pagemap **pgmap) { - unsigned long pfn = pmd_pfn(*pmd); + pmd_t old_pmd = pmdp_get(pmd); + unsigned long pfn = pmd_pfn(old_pmd); struct mm_struct *mm = vma->vm_mm; struct page *page; int ret; assert_spin_locked(pmd_lockptr(mm, pmd)); - if (flags & FOLL_WRITE && !pmd_write(*pmd)) + if (flags & FOLL_WRITE && !pmd_write(old_pmd)) return NULL; - if (pmd_present(*pmd) && pmd_devmap(*pmd)) + if (pmd_present(old_pmd) && pmd_devmap(old_pmd)) /* pass */; else return NULL; @@ -1772,7 +1773,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, if (!ptl) goto out_unlocked; - orig_pmd = *pmd; + orig_pmd = pmdp_get(pmd); if (is_huge_zero_pmd(orig_pmd)) goto out; @@ -1990,7 +1991,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, { struct mm_struct *mm = vma->vm_mm; spinlock_t *ptl; - pmd_t oldpmd, entry; + pmd_t oldpmd, entry, old_pmd; bool prot_numa = cp_flags & MM_CP_PROT_NUMA; bool uffd_wp = cp_flags & MM_CP_UFFD_WP; bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE; @@ -2005,13 +2006,14 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, if (!ptl) return 0; + old_pmd = pmdp_get(pmd); #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION - if (is_swap_pmd(*pmd)) { - swp_entry_t entry = pmd_to_swp_entry(*pmd); + if (is_swap_pmd(old_pmd)) { + swp_entry_t entry = pmd_to_swp_entry(old_pmd); struct folio *folio = pfn_swap_entry_folio(entry); pmd_t newpmd; - VM_BUG_ON(!is_pmd_migration_entry(*pmd)); + VM_BUG_ON(!is_pmd_migration_entry(old_pmd)); if (is_writable_migration_entry(entry)) { /* * A protection check is difficult so @@ -2022,17 +2024,17 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, else entry = make_readable_migration_entry(swp_offset(entry)); newpmd = swp_entry_to_pmd(entry); - if (pmd_swp_soft_dirty(*pmd)) + if (pmd_swp_soft_dirty(old_pmd)) newpmd = pmd_swp_mksoft_dirty(newpmd); } else { - newpmd = *pmd; + newpmd = old_pmd; } if (uffd_wp) newpmd = pmd_swp_mkuffd_wp(newpmd); else if (uffd_wp_resolve) newpmd = pmd_swp_clear_uffd_wp(newpmd); - if (!pmd_same(*pmd, newpmd)) + if 
(!pmd_same(old_pmd, newpmd)) set_pmd_at(mm, addr, pmd, newpmd); goto unlock; } @@ -2046,13 +2048,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, * data is likely to be read-cached on the local CPU and * local/remote hits to the zero page are not interesting. */ - if (is_huge_zero_pmd(*pmd)) + if (is_huge_zero_pmd(old_pmd)) goto unlock; - if (pmd_protnone(*pmd)) + if (pmd_protnone(old_pmd)) goto unlock; - folio = pmd_folio(*pmd); + folio = pmd_folio(old_pmd); toptier = node_is_toptier(folio_nid(folio)); /* * Skip scanning top tier node if normal numa @@ -2266,8 +2268,8 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma) { spinlock_t *ptl; ptl = pmd_lock(vma->vm_mm, pmd); - if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || - pmd_devmap(*pmd))) + if (likely(is_swap_pmd(pmdp_get(pmd)) || pmd_trans_huge(pmdp_get(pmd)) || + pmd_devmap(pmdp_get(pmd)))) return ptl; spin_unlock(ptl); return NULL; @@ -2404,8 +2406,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, VM_BUG_ON(haddr & ~HPAGE_PMD_MASK); VM_BUG_ON_VMA(vma->vm_start > haddr, vma); VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma); - VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd) - && !pmd_devmap(*pmd)); + VM_BUG_ON(!is_pmd_migration_entry(pmdp_get(pmd)) && !pmd_trans_huge(pmdp_get(pmd)) + && !pmd_devmap(pmdp_get(pmd))); count_vm_event(THP_SPLIT_PMD); @@ -2438,7 +2440,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, return; } - if (is_huge_zero_pmd(*pmd)) { + if (is_huge_zero_pmd(pmdp_get(pmd))) { /* * FIXME: Do we want to invalidate secondary mmu by calling * mmu_notifier_arch_invalidate_secondary_tlbs() see comments below @@ -2451,11 +2453,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, return __split_huge_zero_page_pmd(vma, haddr, pmd); } - pmd_migration = is_pmd_migration_entry(*pmd); + pmd_migration = is_pmd_migration_entry(pmdp_get(pmd)); if (unlikely(pmd_migration)) { swp_entry_t entry; - old_pmd = *pmd; + old_pmd = pmdp_get(pmd); entry = pmd_to_swp_entry(old_pmd); page = pfn_swap_entry_to_page(entry); write = is_writable_migration_entry(entry); @@ -2620,9 +2622,9 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address, * require a folio to check the PMD against. Otherwise, there * is a risk of replacing the wrong folio. 
*/ - if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || - is_pmd_migration_entry(*pmd)) { - if (folio && folio != pmd_folio(*pmd)) + if (pmd_trans_huge(pmdp_get(pmd)) || pmd_devmap(pmdp_get(pmd)) || + is_pmd_migration_entry(pmdp_get(pmd))) { + if (folio && folio != pmd_folio(pmdp_get(pmd))) return; __split_huge_pmd_locked(vma, pmd, address, freeze); } @@ -2719,7 +2721,7 @@ static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma, { struct mm_struct *mm = vma->vm_mm; int ref_count, map_count; - pmd_t orig_pmd = *pmdp; + pmd_t orig_pmd = pmdp_get(pmdp); if (folio_test_dirty(folio) || pmd_dirty(orig_pmd)) return false; diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 0c3f56b3578e..9deb82654d5b 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -70,7 +70,7 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start, } spin_lock(&init_mm.page_table_lock); - if (likely(pmd_leaf(*pmd))) { + if (likely(pmd_leaf(pmdp_get(pmd)))) { /* * Higher order allocations from buddy allocator must be able to * be treated as indepdenent small pages (as they can be freed @@ -104,7 +104,7 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr, walk->action = ACTION_CONTINUE; spin_lock(&init_mm.page_table_lock); - head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL; + head = pmd_leaf(pmdp_get(pmd)) ? pmd_page(pmdp_get(pmd)) : NULL; /* * Due to HugeTLB alignment requirements and the vmemmap * pages being at the start of the hotplugged memory diff --git a/mm/kasan/init.c b/mm/kasan/init.c index 89895f38f722..4418bcdcb2aa 100644 --- a/mm/kasan/init.c +++ b/mm/kasan/init.c @@ -121,7 +121,7 @@ static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr, continue; } - if (pmd_none(*pmd)) { + if (pmd_none(pmdp_get(pmd))) { pte_t *p; if (slab_is_available()) @@ -300,7 +300,7 @@ static void kasan_free_pte(pte_t *pte_start, pmd_t *pmd) return; } - pte_free_kernel(&init_mm, (pte_t *)page_to_virt(pmd_page(*pmd))); + pte_free_kernel(&init_mm, (pte_t *)page_to_virt(pmd_page(pmdp_get(pmd)))); pmd_clear(pmd); } @@ -311,7 +311,7 @@ static void kasan_free_pmd(pmd_t *pmd_start, pud_t *pud) for (i = 0; i < PTRS_PER_PMD; i++) { pmd = pmd_start + i; - if (!pmd_none(*pmd)) + if (!pmd_none(pmdp_get(pmd))) return; } @@ -381,10 +381,10 @@ static void kasan_remove_pmd_table(pmd_t *pmd, unsigned long addr, next = pmd_addr_end(addr, end); - if (!pmd_present(*pmd)) + if (!pmd_present(pmdp_get(pmd))) continue; - if (kasan_pte_table(*pmd)) { + if (kasan_pte_table(pmdp_get(pmd))) { if (IS_ALIGNED(addr, PMD_SIZE) && IS_ALIGNED(next, PMD_SIZE)) { pmd_clear(pmd); diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c index d6210ca48dda..aec16a7236f7 100644 --- a/mm/kasan/shadow.c +++ b/mm/kasan/shadow.c @@ -202,9 +202,9 @@ static bool shadow_mapped(unsigned long addr) if (pud_leaf(*pud)) return true; pmd = pmd_offset(pud, addr); - if (pmd_none(*pmd)) + if (pmd_none(pmdp_get(pmd))) return false; - if (pmd_leaf(*pmd)) + if (pmd_leaf(pmdp_get(pmd))) return true; pte = pte_offset_kernel(pmd, addr); return !pte_none(ptep_get(pte)); diff --git a/mm/khugepaged.c b/mm/khugepaged.c index cdd1d8655a76..793da996313f 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1192,7 +1192,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address, if (pte) pte_unmap(pte); spin_lock(pmd_ptl); - BUG_ON(!pmd_none(*pmd)); + BUG_ON(!pmd_none(pmdp_get(pmd))); /* * We can only use set_pmd_at when establishing * hugepmds and never for establishing regular pmds that @@ -1229,7 +1229,7 @@ 
static int collapse_huge_page(struct mm_struct *mm, unsigned long address, _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma); spin_lock(pmd_ptl); - BUG_ON(!pmd_none(*pmd)); + BUG_ON(!pmd_none(pmdp_get(pmd))); folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE); folio_add_lru_vma(folio, vma); pgtable_trans_huge_deposit(mm, pmd, pgtable); diff --git a/mm/madvise.c b/mm/madvise.c index 89089d84f8df..382c55d2ec94 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -357,7 +357,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, !can_do_file_pageout(vma); #ifdef CONFIG_TRANSPARENT_HUGEPAGE - if (pmd_trans_huge(*pmd)) { + if (pmd_trans_huge(pmdp_get(pmd))) { pmd_t orig_pmd; unsigned long next = pmd_addr_end(addr, end); @@ -366,7 +366,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, if (!ptl) return 0; - orig_pmd = *pmd; + orig_pmd = pmdp_get(pmd); if (is_huge_zero_pmd(orig_pmd)) goto huge_unlock; @@ -655,7 +655,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr, int nr, max_nr; next = pmd_addr_end(addr, end); - if (pmd_trans_huge(*pmd)) + if (pmd_trans_huge(pmdp_get(pmd))) if (madvise_free_huge_pmd(tlb, vma, pmd, addr, next)) return 0; diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 7066fc84f351..305dbef3cc4d 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -422,9 +422,9 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma, if (pud_devmap(*pud)) return PUD_SHIFT; pmd = pmd_offset(pud, address); - if (!pmd_present(*pmd)) + if (!pmd_present(pmdp_get(pmd))) return 0; - if (pmd_devmap(*pmd)) + if (pmd_devmap(pmdp_get(pmd))) return PMD_SHIFT; pte = pte_offset_map(pmd, address); if (!pte) @@ -775,7 +775,7 @@ static int check_hwpoisoned_entry(pte_t pte, unsigned long addr, short shift, static int check_hwpoisoned_pmd_entry(pmd_t *pmdp, unsigned long addr, struct hwpoison_walk *hwp) { - pmd_t pmd = *pmdp; + pmd_t pmd = pmdp_get(pmdp); unsigned long pfn; unsigned long hwpoison_vaddr; diff --git a/mm/memory.c b/mm/memory.c index ebfc9768f801..5520e1f6a1b9 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -189,7 +189,7 @@ void mm_trace_rss_stat(struct mm_struct *mm, int member) static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr) { - pgtable_t token = pmd_pgtable(*pmd); + pgtable_t token = pmd_pgtable(pmdp_get(pmd)); pmd_clear(pmd); pte_free_tlb(tlb, token, addr); mm_dec_nr_ptes(tlb->mm); @@ -421,7 +421,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte) { spinlock_t *ptl = pmd_lock(mm, pmd); - if (likely(pmd_none(*pmd))) { /* Has another populated it ? */ + if (likely(pmd_none(pmdp_get(pmd)))) { /* Has another populated it ? */ mm_inc_nr_ptes(mm); /* * Ensure all pte setup (eg. pte page lock and page clearing) are @@ -462,7 +462,7 @@ int __pte_alloc_kernel(pmd_t *pmd) return -ENOMEM; spin_lock(&init_mm.page_table_lock); - if (likely(pmd_none(*pmd))) { /* Has another populated it ? */ + if (likely(pmd_none(pmdp_get(pmd)))) { /* Has another populated it ? 
*/ smp_wmb(); /* See comment in pmd_install() */ pmd_populate_kernel(&init_mm, pmd, new); new = NULL; @@ -1710,7 +1710,8 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb, pmd = pmd_offset(pud, addr); do { next = pmd_addr_end(addr, end); - if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) { + if (is_swap_pmd(pmdp_get(pmd)) || pmd_trans_huge(pmdp_get(pmd)) || + pmd_devmap(pmdp_get(pmd))) { if (next - addr != HPAGE_PMD_SIZE) __split_huge_pmd(vma, pmd, addr, false, NULL); else if (zap_huge_pmd(tlb, vma, pmd, addr)) { @@ -1720,7 +1721,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb, /* fall through */ } else if (details && details->single_folio && folio_test_pmd_mappable(details->single_folio) && - next - addr == HPAGE_PMD_SIZE && pmd_none(*pmd)) { + next - addr == HPAGE_PMD_SIZE && pmd_none(pmdp_get(pmd))) { spinlock_t *ptl = pmd_lock(tlb->mm, pmd); /* * Take and drop THP pmd lock so that we cannot return @@ -1729,7 +1730,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb, */ spin_unlock(ptl); } - if (pmd_none(*pmd)) { + if (pmd_none(pmdp_get(pmd))) { addr = next; continue; } @@ -1975,7 +1976,7 @@ static pmd_t *walk_to_pmd(struct mm_struct *mm, unsigned long addr) if (!pmd) return NULL; - VM_BUG_ON(pmd_trans_huge(*pmd)); + VM_BUG_ON(pmd_trans_huge(pmdp_get(pmd))); return pmd; } @@ -2577,7 +2578,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud, pmd = pmd_alloc(mm, pud, addr); if (!pmd) return -ENOMEM; - VM_BUG_ON(pmd_trans_huge(*pmd)); + VM_BUG_ON(pmd_trans_huge(pmdp_get(pmd))); do { next = pmd_addr_end(addr, end); err = remap_pte_range(mm, pmd, addr, next, @@ -2846,11 +2847,11 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud, } do { next = pmd_addr_end(addr, end); - if (pmd_none(*pmd) && !create) + if (pmd_none(pmdp_get(pmd)) && !create) continue; - if (WARN_ON_ONCE(pmd_leaf(*pmd))) + if (WARN_ON_ONCE(pmd_leaf(pmdp_get(pmd)))) return -EINVAL; - if (!pmd_none(*pmd) && WARN_ON_ONCE(pmd_bad(*pmd))) { + if (!pmd_none(pmdp_get(pmd)) && WARN_ON_ONCE(pmd_bad(pmdp_get(pmd)))) { if (!create) continue; pmd_clear_bad(pmd); @@ -6167,7 +6168,7 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address, goto out; pmd = pmd_offset(pud, address); - VM_BUG_ON(pmd_trans_huge(*pmd)); + VM_BUG_ON(pmd_trans_huge(pmdp_get(pmd))); ptep = pte_offset_map_lock(mm, pmd, address, ptlp); if (!ptep) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index b858e22b259d..03f2df44b07f 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -505,11 +505,11 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk) struct folio *folio; struct queue_pages *qp = walk->private; - if (unlikely(is_pmd_migration_entry(*pmd))) { + if (unlikely(is_pmd_migration_entry(pmdp_get(pmd)))) { qp->nr_failed++; return; } - folio = pmd_folio(*pmd); + folio = pmd_folio(pmdp_get(pmd)); if (is_huge_zero_folio(folio)) { walk->action = ACTION_CONTINUE; return; diff --git a/mm/migrate.c b/mm/migrate.c index 923ea80ba744..a1dd5c8f88dd 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -369,9 +369,9 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd) spinlock_t *ptl; ptl = pmd_lock(mm, pmd); - if (!is_pmd_migration_entry(*pmd)) + if (!is_pmd_migration_entry(pmdp_get(pmd))) goto unlock; - migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), ptl); + migration_entry_wait_on_locked(pmd_to_swp_entry(pmdp_get(pmd)), ptl); return; unlock: spin_unlock(ptl); diff --git a/mm/migrate_device.c b/mm/migrate_device.c index 
6d66dc1c6ffa..3a08cef6cd39 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -67,19 +67,19 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, pte_t *ptep; again: - if (pmd_none(*pmdp)) + if (pmd_none(pmdp_get(pmdp))) return migrate_vma_collect_hole(start, end, -1, walk); - if (pmd_trans_huge(*pmdp)) { + if (pmd_trans_huge(pmdp_get(pmdp))) { struct folio *folio; ptl = pmd_lock(mm, pmdp); - if (unlikely(!pmd_trans_huge(*pmdp))) { + if (unlikely(!pmd_trans_huge(pmdp_get(pmdp)))) { spin_unlock(ptl); goto again; } - folio = pmd_folio(*pmdp); + folio = pmd_folio(pmdp_get(pmdp)); if (is_huge_zero_folio(folio)) { spin_unlock(ptl); split_huge_pmd(vma, pmdp, addr); @@ -596,7 +596,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, pmdp = pmd_alloc(mm, pudp, addr); if (!pmdp) goto abort; - if (pmd_trans_huge(*pmdp) || pmd_devmap(*pmdp)) + if (pmd_trans_huge(pmdp_get(pmdp)) || pmd_devmap(pmdp_get(pmdp))) goto abort; if (pte_alloc(mm, pmdp)) goto abort; diff --git a/mm/mlock.c b/mm/mlock.c index e3e3dc2b2956..c3c479e9d0f8 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -363,11 +363,11 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr, ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { - if (!pmd_present(*pmd)) + if (!pmd_present(pmdp_get(pmd))) goto out; - if (is_huge_zero_pmd(*pmd)) + if (is_huge_zero_pmd(pmdp_get(pmd))) goto out; - folio = pmd_folio(*pmd); + folio = pmd_folio(pmdp_get(pmd)); if (vma->vm_flags & VM_LOCKED) mlock_folio(folio); else diff --git a/mm/mprotect.c b/mm/mprotect.c index 222ab434da54..121fb448b0db 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -381,7 +381,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb, break; } - if (pmd_none(*pmd)) + if (pmd_none(pmdp_get(pmd))) goto next; /* invoke the mmu notifier if the pmd is populated */ diff --git a/mm/mremap.c b/mm/mremap.c index e7ae140fc640..d42ac62bd34e 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -63,7 +63,7 @@ static pmd_t *get_old_pmd(struct mm_struct *mm, unsigned long addr) return NULL; pmd = pmd_offset(pud, addr); - if (pmd_none(*pmd)) + if (pmd_none(pmdp_get(pmd))) return NULL; return pmd; @@ -97,7 +97,7 @@ static pmd_t *alloc_new_pmd(struct mm_struct *mm, struct vm_area_struct *vma, if (!pmd) return NULL; - VM_BUG_ON(pmd_trans_huge(*pmd)); + VM_BUG_ON(pmd_trans_huge(pmdp_get(pmd))); return pmd; } diff --git a/mm/page_table_check.c b/mm/page_table_check.c index 509c6ef8de40..48a2cf56c80e 100644 --- a/mm/page_table_check.c +++ b/mm/page_table_check.c @@ -241,7 +241,7 @@ void __page_table_check_pmd_set(struct mm_struct *mm, pmd_t *pmdp, pmd_t pmd) page_table_check_pmd_flags(pmd); - __page_table_check_pmd_clear(mm, *pmdp); + __page_table_check_pmd_clear(mm, pmdp_get(pmdp)); if (pmd_user_accessible_page(pmd)) { page_table_check_set(pmd_pfn(pmd), PMD_SIZE >> PAGE_SHIFT, pmd_write(pmd)); diff --git a/mm/pagewalk.c b/mm/pagewalk.c index ae2f08ce991b..c3019a160e77 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -86,7 +86,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, do { again: next = pmd_addr_end(addr, end); - if (pmd_none(*pmd)) { + if (pmd_none(pmdp_get(pmd))) { if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) @@ -112,7 +112,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, * Check this here so we only break down trans_huge * pages when we _need_ to */ - if ((!walk->vma && (pmd_leaf(*pmd) || !pmd_present(*pmd))) || + if ((!walk->vma && (pmd_leaf(pmdp_get(pmd)) || 
!pmd_present(pmdp_get(pmd)))) || walk->action == ACTION_CONTINUE || !(ops->pte_entry)) continue; diff --git a/mm/percpu.c b/mm/percpu.c index 20d91af8c033..7ee77c0fd5e3 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -3208,7 +3208,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr) } pmd = pmd_offset(pud, addr); - if (!pmd_present(*pmd)) { + if (!pmd_present(pmdp_get(pmd))) { pte_t *new; new = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE); diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index a78a4adf711a..920947bb76cd 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -51,7 +51,7 @@ void pud_clear_bad(pud_t *pud) */ void pmd_clear_bad(pmd_t *pmd) { - pmd_ERROR(*pmd); + pmd_ERROR(pmdp_get(pmd)); pmd_clear(pmd); } @@ -110,7 +110,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp, pmd_t entry, int dirty) { - int changed = !pmd_same(*pmdp, entry); + int changed = !pmd_same(pmdp_get(pmdp), entry); VM_BUG_ON(address & ~HPAGE_PMD_MASK); if (changed) { set_pmd_at(vma->vm_mm, address, pmdp, entry); @@ -137,10 +137,10 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma, pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { - pmd_t pmd; + pmd_t pmd, old_pmd = pmdp_get(pmdp); VM_BUG_ON(address & ~HPAGE_PMD_MASK); - VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) && - !pmd_devmap(*pmdp)); + VM_BUG_ON(pmd_present(old_pmd) && !pmd_trans_huge(old_pmd) && + !pmd_devmap(old_pmd)); pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp); flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE); return pmd; @@ -198,8 +198,10 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { - VM_WARN_ON_ONCE(!pmd_present(*pmdp)); - pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp)); + pmd_t old_pmd = pmdp_get(pmdp); + + VM_WARN_ON_ONCE(!pmd_present(old_pmd)); + pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(old_pmd)); flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE); return old; } @@ -209,7 +211,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { - VM_WARN_ON_ONCE(!pmd_present(*pmdp)); + VM_WARN_ON_ONCE(!pmd_present(pmdp_get(pmdp))); return pmdp_invalidate(vma, address, pmdp); } #endif @@ -225,7 +227,7 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address, pmd_t pmd; VM_BUG_ON(address & ~HPAGE_PMD_MASK); - VM_BUG_ON(pmd_trans_huge(*pmdp)); + VM_BUG_ON(pmd_trans_huge(pmdp_get(pmdp))); pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp); /* collapse entails shooting down ptes not pmd */ diff --git a/mm/ptdump.c b/mm/ptdump.c index 106e1d66e9f9..e17588a32012 100644 --- a/mm/ptdump.c +++ b/mm/ptdump.c @@ -99,7 +99,7 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next, struct mm_walk *walk) { struct ptdump_state *st = walk->private; - pmd_t val = READ_ONCE(*pmd); + pmd_t val = pmdp_get(pmd); #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) if (pmd_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pte))) diff --git a/mm/rmap.c b/mm/rmap.c index 2490e727e2dc..32e4920e419d 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1034,9 +1034,9 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw) } else { #ifdef CONFIG_TRANSPARENT_HUGEPAGE pmd_t *pmd = 
pvmw->pmd; - pmd_t entry; + pmd_t entry, old_pmd = pmdp_get(pmd); - if (!pmd_dirty(*pmd) && !pmd_write(*pmd)) + if (!pmd_dirty(old_pmd) && !pmd_write(old_pmd)) continue; flush_cache_range(vma, address, diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index edcc7a6b0f6f..c89706e107ce 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -187,7 +187,7 @@ static void * __meminit vmemmap_alloc_block_zero(unsigned long size, int node) pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node) { pmd_t *pmd = pmd_offset(pud, addr); - if (pmd_none(*pmd)) { + if (pmd_none(pmdp_get(pmd))) { void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node); if (!p) return NULL; @@ -332,7 +332,7 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end, return -ENOMEM; pmd = pmd_offset(pud, addr); - if (pmd_none(READ_ONCE(*pmd))) { + if (pmd_none(pmdp_get(pmd))) { void *p; p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap); diff --git a/mm/vmalloc.c b/mm/vmalloc.c index a0df1e2e155a..1da56cbe5feb 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -150,7 +150,7 @@ static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end, if (!IS_ALIGNED(phys_addr, PMD_SIZE)) return 0; - if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr)) + if (pmd_present(pmdp_get(pmd)) && !pmd_free_pte_page(pmd, addr)) return 0; return pmd_set_huge(pmd, phys_addr, prot); @@ -371,7 +371,7 @@ static void vunmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, next = pmd_addr_end(addr, end); cleared = pmd_clear_huge(pmd); - if (cleared || pmd_bad(*pmd)) + if (cleared || pmd_bad(pmdp_get(pmd))) *mask |= PGTBL_PMD_MODIFIED; if (cleared) @@ -743,7 +743,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr) pgd_t *pgd = pgd_offset_k(addr); p4d_t *p4d; pud_t *pud; - pmd_t *pmd; + pmd_t *pmd, old_pmd; pte_t *ptep, pte; /* @@ -776,11 +776,12 @@ struct page *vmalloc_to_page(const void *vmalloc_addr) return NULL; pmd = pmd_offset(pud, addr); - if (pmd_none(*pmd)) + old_pmd = pmdp_get(pmd); + if (pmd_none(old_pmd)) return NULL; - if (pmd_leaf(*pmd)) - return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - if (WARN_ON_ONCE(pmd_bad(*pmd))) + if (pmd_leaf(old_pmd)) + return pmd_page(old_pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); + if (WARN_ON_ONCE(pmd_bad(old_pmd))) return NULL; ptep = pte_offset_kernel(pmd, addr); From patchwork Tue Sep 17 07:31:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 13805945 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2C74C35FEC for ; Tue, 17 Sep 2024 07:32:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 79F266B0088; Tue, 17 Sep 2024 03:32:05 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 727C36B0095; Tue, 17 Sep 2024 03:32:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 57BEC6B0096; Tue, 17 Sep 2024 03:32:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 3226C6B0088 for ; Tue, 17 Sep 2024 03:32:05 -0400 (EDT) Received: from smtpin25.hostedemail.com (a10.router.float.18 [10.200.18.1]) by 
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, David Hildenbrand, Ryan Roberts, "Mike Rapoport (IBM)", Arnd Bergmann, x86@kernel.org, linux-m68k@lists.linux-m68k.org, linux-fsdevel@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Dimitri Sivanich, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, "Jérôme Glisse", Muchun Song, Andrey Ryabinin, Miaohe Lin, Naoya Horiguchi, Pasha Tatashin
Subject: [PATCH V2 5/7] mm: Use pudp_get() for accessing PUD entries
Date: Tue, 17 Sep 2024 13:01:15 +0530
Message-Id: <20240917073117.1531207-6-anshuman.khandual@arm.com>
In-Reply-To: <20240917073117.1531207-1-anshuman.khandual@arm.com>
References: <20240917073117.1531207-1-anshuman.khandual@arm.com>

Convert PUD accesses to use the pudp_get() helper, which defaults to READ_ONCE() but also gives the platform an opportunity to override it when required. Where an entry is inspected more than once, the value read is stored in a local variable and reused thereafter, which avoids repeated memory loads as well as possible race conditions.
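For context, the generic pudp_get() fallback in include/linux/pgtable.h is essentially a READ_ONCE() wrapper along these lines (a minimal sketch; an architecture that needs more than a plain load supplies its own definition, and the #ifndef guard then drops the fallback):

#ifndef pudp_get
static inline pud_t pudp_get(pud_t *pudp)
{
	/* single, tearing-free read of the entry */
	return READ_ONCE(*pudp);
}
#endif

The conversion pattern used throughout the patch is therefore to read the entry once into a local variable and test that stable value, rather than dereferencing the pointer at each check, for example:

	pud_t old_pud = pudp_get(pud);

	if (pud_none(old_pud) || pud_bad(old_pud))
		return NULL;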
Cc: Dimitri Sivanich Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: "Jérôme Glisse" Cc: Muchun Song Cc: Andrey Ryabinin Cc: Miaohe Lin Cc: Naoya Horiguchi Cc: Pasha Tatashin Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org Cc: linux-perf-users@vger.kernel.org Cc: kasan-dev@googlegroups.com Signed-off-by: Anshuman Khandual --- drivers/misc/sgi-gru/grufault.c | 2 +- fs/userfaultfd.c | 2 +- include/linux/huge_mm.h | 2 +- include/linux/mm.h | 2 +- include/linux/pgtable.h | 13 ++++++++----- kernel/events/core.c | 2 +- mm/gup.c | 12 ++++++------ mm/hmm.c | 2 +- mm/huge_memory.c | 24 +++++++++++++++--------- mm/hugetlb.c | 6 +++--- mm/kasan/init.c | 10 +++++----- mm/kasan/shadow.c | 4 ++-- mm/mapping_dirty_helpers.c | 2 +- mm/memory-failure.c | 4 ++-- mm/memory.c | 14 +++++++------- mm/page_table_check.c | 2 +- mm/page_vma_mapped.c | 2 +- mm/pagewalk.c | 6 +++--- mm/percpu.c | 2 +- mm/pgalloc-track.h | 2 +- mm/pgtable-generic.c | 6 +++--- mm/ptdump.c | 4 ++-- mm/rmap.c | 2 +- mm/sparse-vmemmap.c | 2 +- mm/vmalloc.c | 15 ++++++++------- mm/vmscan.c | 4 ++-- 26 files changed, 79 insertions(+), 69 deletions(-) diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c index 804f275ece99..95d479d5e40f 100644 --- a/drivers/misc/sgi-gru/grufault.c +++ b/drivers/misc/sgi-gru/grufault.c @@ -220,7 +220,7 @@ static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr, goto err; pudp = pud_offset(p4dp, vaddr); - if (unlikely(pud_none(*pudp))) + if (unlikely(pud_none(pudp_get(pudp)))) goto err; pmdp = pmd_offset(pudp, vaddr); diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 27a3e9285fbf..00719a0f688c 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -310,7 +310,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx, if (!p4d_present(*p4d)) goto out; pud = pud_offset(p4d, address); - if (!pud_present(*pud)) + if (!pud_present(pudp_get(pud))) goto out; pmd = pmd_offset(pud, address); again: diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 38b5de040d02..66a19622d95b 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -379,7 +379,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd, static inline spinlock_t *pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma) { - if (pud_trans_huge(*pud) || pud_devmap(*pud)) + if (pud_trans_huge(pudp_get(pud)) || pud_devmap(pudp_get(pud))) return __pud_trans_huge_lock(pud, vma); else return NULL; diff --git a/include/linux/mm.h b/include/linux/mm.h index 258e49323306..1bb1599b5779 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2832,7 +2832,7 @@ static inline pud_t *pud_alloc(struct mm_struct *mm, p4d_t *p4d, static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address) { - return (unlikely(pud_none(*pud)) && __pmd_alloc(mm, pud, address))? + return (unlikely(pud_none(pudp_get(pud))) && __pmd_alloc(mm, pud, address)) ? 
NULL: pmd_offset(pud, address); } #endif /* CONFIG_MMU */ diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index ea283ce958a7..eb993ef0946f 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -611,7 +611,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm, unsigned long address, pud_t *pudp) { - pud_t pud = *pudp; + pud_t pud = pudp_get(pudp); pud_clear(pudp); page_table_check_pud_clear(mm, pud); @@ -893,7 +893,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm, static inline void pudp_set_wrprotect(struct mm_struct *mm, unsigned long address, pud_t *pudp) { - pud_t old_pud = *pudp; + pud_t old_pud = pudp_get(pudp); set_pud_at(mm, address, pudp, pud_wrprotect(old_pud)); } @@ -1074,7 +1074,8 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b) #define set_pud_safe(pudp, pud) \ ({ \ - WARN_ON_ONCE(pud_present(*pudp) && !pud_same(*pudp, pud)); \ + pud_t __old = pudp_get(pudp); \ + WARN_ON_ONCE(pud_present(__old) && !pud_same(__old, pud)); \ set_pud(pudp, pud); \ }) @@ -1261,9 +1262,11 @@ static inline int p4d_none_or_clear_bad(p4d_t *p4d) static inline int pud_none_or_clear_bad(pud_t *pud) { - if (pud_none(*pud)) + pud_t old_pud = pudp_get(pud); + + if (pud_none(old_pud)) return 1; - if (unlikely(pud_bad(*pud))) { + if (unlikely(pud_bad(old_pud))) { pud_clear_bad(pud); return 1; } diff --git a/kernel/events/core.c b/kernel/events/core.c index 8a6c6bbcd658..35e2f2789246 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7619,7 +7619,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr) return p4d_leaf_size(p4d); pudp = pud_offset_lockless(p4dp, p4d, addr); - pud = READ_ONCE(*pudp); + pud = pudp_get(pudp); if (!pud_present(pud)) return 0; diff --git a/mm/gup.c b/mm/gup.c index aeeac0a54944..300fc7eb306c 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -606,7 +606,7 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma, { struct mm_struct *mm = vma->vm_mm; struct page *page; - pud_t pud = *pudp; + pud_t pud = pudp_get(pudp); unsigned long pfn = pud_pfn(pud); int ret; @@ -989,7 +989,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma, struct mm_struct *mm = vma->vm_mm; pudp = pud_offset(p4dp, address); - pud = READ_ONCE(*pudp); + pud = pudp_get(pudp); if (!pud_present(pud)) return no_page_table(vma, flags, address); if (pud_leaf(pud)) { @@ -1117,7 +1117,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address, if (p4d_none(*p4d)) return -EFAULT; pud = pud_offset(p4d, address); - if (pud_none(*pud)) + if (pud_none(pudp_get(pud))) return -EFAULT; pmd = pmd_offset(pud, address); if (!pmd_present(pmdp_get(pmd))) @@ -3025,7 +3025,7 @@ static int gup_fast_devmap_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr, if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr)) return 0; - if (unlikely(pud_val(orig) != pud_val(*pudp))) { + if (unlikely(pud_val(orig) != pud_val(pudp_get(pudp)))) { gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages); return 0; } @@ -3118,7 +3118,7 @@ static int gup_fast_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr, if (!folio) return 0; - if (unlikely(pud_val(orig) != pud_val(*pudp))) { + if (unlikely(pud_val(orig) != pud_val(pudp_get(pudp)))) { gup_put_folio(folio, refs, flags); return 0; } @@ -3219,7 +3219,7 @@ static int gup_fast_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, pudp = pud_offset_lockless(p4dp, p4d, addr); do { - pud_t pud = READ_ONCE(*pudp); + pud_t pud = pudp_get(pudp); next = 
pud_addr_end(addr, end); if (unlikely(!pud_present(pud))) diff --git a/mm/hmm.c b/mm/hmm.c index 7e0229ae4a5a..c1b093d670b8 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -423,7 +423,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end, /* Normally we don't want to split the huge page */ walk->action = ACTION_CONTINUE; - pud = READ_ONCE(*pudp); + pud = pudp_get(pudp); if (!pud_present(pud)) { spin_unlock(ptl); return hmm_vma_walk_hole(start, end, -1, walk); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index bb63de935937..69e1400a51ec 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1243,17 +1243,18 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr, { struct mm_struct *mm = vma->vm_mm; pgprot_t prot = vma->vm_page_prot; - pud_t entry; + pud_t entry, old_pud; spinlock_t *ptl; ptl = pud_lock(mm, pud); - if (!pud_none(*pud)) { + old_pud = pudp_get(pud); + if (!pud_none(old_pud)) { if (write) { - if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) { - WARN_ON_ONCE(!is_huge_zero_pud(*pud)); + if (pud_pfn(old_pud) != pfn_t_to_pfn(pfn)) { + WARN_ON_ONCE(!is_huge_zero_pud(old_pud)); goto out_unlock; } - entry = pud_mkyoung(*pud); + entry = pud_mkyoung(old_pud); entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma); if (pudp_set_access_flags(vma, addr, pud, entry, 1)) update_mmu_cache_pud(vma, addr, pud); @@ -1476,7 +1477,7 @@ void touch_pud(struct vm_area_struct *vma, unsigned long addr, { pud_t _pud; - _pud = pud_mkyoung(*pud); + _pud = pud_mkyoung(pudp_get(pud)); if (write) _pud = pud_mkdirty(_pud); if (pudp_set_access_flags(vma, addr & HPAGE_PUD_MASK, @@ -2284,9 +2285,10 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma) spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma) { spinlock_t *ptl; + pud_t old_pud = pudp_get(pud); ptl = pud_lock(vma->vm_mm, pud); - if (likely(pud_trans_huge(*pud) || pud_devmap(*pud))) + if (likely(pud_trans_huge(old_pud) || pud_devmap(old_pud))) return ptl; spin_unlock(ptl); return NULL; @@ -2317,10 +2319,12 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, unsigned long haddr) { + pud_t old_pud = pudp_get(pud); + VM_BUG_ON(haddr & ~HPAGE_PUD_MASK); VM_BUG_ON_VMA(vma->vm_start > haddr, vma); VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma); - VM_BUG_ON(!pud_trans_huge(*pud) && !pud_devmap(*pud)); + VM_BUG_ON(!pud_trans_huge(old_pud) && !pud_devmap(old_pud)); count_vm_event(THP_SPLIT_PUD); @@ -2332,13 +2336,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, { spinlock_t *ptl; struct mmu_notifier_range range; + pud_t old_pud; mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm, address & HPAGE_PUD_MASK, (address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE); mmu_notifier_invalidate_range_start(&range); ptl = pud_lock(vma->vm_mm, pud); - if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud))) + old_pud = pudp_get(pud); + if (unlikely(!pud_trans_huge(old_pud) && !pud_devmap(old_pud))) goto out; __split_huge_pud_locked(vma, pud, range.start); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index aaf508be0a2b..a3820242b01e 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -7328,7 +7328,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma, goto out; spin_lock(&mm->page_table_lock); - if (pud_none(*pud)) { + if (pud_none(pudp_get(pud))) { pud_populate(mm, pud, (pmd_t *)((unsigned long)spte & PAGE_MASK)); mm_inc_nr_pmds(mm); @@ -7417,7 +7417,7 
@@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma, pte = (pte_t *)pud; } else { BUG_ON(sz != PMD_SIZE); - if (want_pmd_share(vma, addr) && pud_none(*pud)) + if (want_pmd_share(vma, addr) && pud_none(pudp_get(pud))) pte = huge_pmd_share(mm, vma, addr, pud); else pte = (pte_t *)pmd_alloc(mm, pud, addr); @@ -7461,7 +7461,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, if (sz == PUD_SIZE) /* must be pud huge, non-present or none */ return (pte_t *)pud; - if (!pud_present(*pud)) + if (!pud_present(pudp_get(pud))) return NULL; /* must have a valid entry and size to go further */ diff --git a/mm/kasan/init.c b/mm/kasan/init.c index 4418bcdcb2aa..f4cf519443e1 100644 --- a/mm/kasan/init.c +++ b/mm/kasan/init.c @@ -162,7 +162,7 @@ static int __ref zero_pud_populate(p4d_t *p4d, unsigned long addr, continue; } - if (pud_none(*pud)) { + if (pud_none(pudp_get(pud))) { pmd_t *p; if (slab_is_available()) { @@ -315,7 +315,7 @@ static void kasan_free_pmd(pmd_t *pmd_start, pud_t *pud) return; } - pmd_free(&init_mm, (pmd_t *)page_to_virt(pud_page(*pud))); + pmd_free(&init_mm, (pmd_t *)page_to_virt(pud_page(pudp_get(pud)))); pud_clear(pud); } @@ -326,7 +326,7 @@ static void kasan_free_pud(pud_t *pud_start, p4d_t *p4d) for (i = 0; i < PTRS_PER_PUD; i++) { pud = pud_start + i; - if (!pud_none(*pud)) + if (!pud_none(pudp_get(pud))) return; } @@ -407,10 +407,10 @@ static void kasan_remove_pud_table(pud_t *pud, unsigned long addr, next = pud_addr_end(addr, end); - if (!pud_present(*pud)) + if (!pud_present(pudp_get(pud))) continue; - if (kasan_pmd_table(*pud)) { + if (kasan_pmd_table(pudp_get(pud))) { if (IS_ALIGNED(addr, PUD_SIZE) && IS_ALIGNED(next, PUD_SIZE)) { pud_clear(pud); diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c index aec16a7236f7..dbd8164c75f1 100644 --- a/mm/kasan/shadow.c +++ b/mm/kasan/shadow.c @@ -197,9 +197,9 @@ static bool shadow_mapped(unsigned long addr) if (p4d_none(*p4d)) return false; pud = pud_offset(p4d, addr); - if (pud_none(*pud)) + if (pud_none(pudp_get(pud))) return false; - if (pud_leaf(*pud)) + if (pud_leaf(pudp_get(pud))) return true; pmd = pmd_offset(pud, addr); if (pmd_none(pmdp_get(pmd))) diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c index 2f8829b3541a..c556cc4e3480 100644 --- a/mm/mapping_dirty_helpers.c +++ b/mm/mapping_dirty_helpers.c @@ -149,7 +149,7 @@ static int wp_clean_pud_entry(pud_t *pud, unsigned long addr, unsigned long end, struct mm_walk *walk) { #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD - pud_t pudval = READ_ONCE(*pud); + pud_t pudval = pudp_get(pud); /* Do not split a huge pud */ if (pud_trans_huge(pudval) || pud_devmap(pudval)) { diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 305dbef3cc4d..fbb63401fb51 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -417,9 +417,9 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma, if (!p4d_present(*p4d)) return 0; pud = pud_offset(p4d, address); - if (!pud_present(*pud)) + if (!pud_present(pudp_get(pud))) return 0; - if (pud_devmap(*pud)) + if (pud_devmap(pudp_get(pud))) return PUD_SHIFT; pmd = pmd_offset(pud, address); if (!pmd_present(pmdp_get(pmd))) diff --git a/mm/memory.c b/mm/memory.c index 5520e1f6a1b9..801750e4337c 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1753,7 +1753,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb, pud = pud_offset(p4d, addr); do { next = pud_addr_end(addr, end); - if (pud_trans_huge(*pud) || pud_devmap(*pud)) { + if (pud_trans_huge(pudp_get(pud)) || 
pud_devmap(pudp_get(pud))) { if (next - addr != HPAGE_PUD_SIZE) { mmap_assert_locked(tlb->mm); split_huge_pud(vma, pud, addr); @@ -2836,7 +2836,7 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud, unsigned long next; int err = 0; - BUG_ON(pud_leaf(*pud)); + BUG_ON(pud_leaf(pudp_get(pud))); if (create) { pmd = pmd_alloc_track(mm, pud, addr, mask); @@ -2883,11 +2883,11 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d, } do { next = pud_addr_end(addr, end); - if (pud_none(*pud) && !create) + if (pud_none(pudp_get(pud)) && !create) continue; - if (WARN_ON_ONCE(pud_leaf(*pud))) + if (WARN_ON_ONCE(pud_leaf(pudp_get(pud)))) return -EINVAL; - if (!pud_none(*pud) && WARN_ON_ONCE(pud_bad(*pud))) { + if (!pud_none(pudp_get(pud)) && WARN_ON_ONCE(pud_bad(pudp_get(pud)))) { if (!create) continue; pud_clear_bad(pud); @@ -6099,7 +6099,7 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address) return -ENOMEM; ptl = pud_lock(mm, pud); - if (!pud_present(*pud)) { + if (!pud_present(pudp_get(pud))) { mm_inc_nr_pmds(mm); smp_wmb(); /* See comment in pmd_install() */ pud_populate(mm, pud, new); @@ -6164,7 +6164,7 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address, goto out; pud = pud_offset(p4d, address); - if (pud_none(*pud) || unlikely(pud_bad(*pud))) + if (pud_none(pudp_get(pud)) || unlikely(pud_bad(pudp_get(pud)))) goto out; pmd = pmd_offset(pud, address); diff --git a/mm/page_table_check.c b/mm/page_table_check.c index 48a2cf56c80e..2a22d098b0b1 100644 --- a/mm/page_table_check.c +++ b/mm/page_table_check.c @@ -254,7 +254,7 @@ void __page_table_check_pud_set(struct mm_struct *mm, pud_t *pudp, pud_t pud) if (&init_mm == mm) return; - __page_table_check_pud_clear(mm, *pudp); + __page_table_check_pud_clear(mm, pudp_get(pudp)); if (pud_user_accessible_page(pud)) { page_table_check_set(pud_pfn(pud), PUD_SIZE >> PAGE_SHIFT, pud_write(pud)); diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index ae5cc42aa208..511266307771 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -222,7 +222,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) continue; } pud = pud_offset(p4d, pvmw->address); - if (!pud_present(*pud)) { + if (!pud_present(pudp_get(pud))) { step_forward(pvmw, PUD_SIZE); continue; } diff --git a/mm/pagewalk.c b/mm/pagewalk.c index c3019a160e77..1d32c6da1a0d 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -145,7 +145,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, do { again: next = pud_addr_end(addr, end); - if (pud_none(*pud)) { + if (pud_none(pudp_get(pud))) { if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) @@ -163,14 +163,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, if (walk->action == ACTION_AGAIN) goto again; - if ((!walk->vma && (pud_leaf(*pud) || !pud_present(*pud))) || + if ((!walk->vma && (pud_leaf(pudp_get(pud)) || !pud_present(pudp_get(pud)))) || walk->action == ACTION_CONTINUE || !(ops->pmd_entry || ops->pte_entry)) continue; if (walk->vma) split_huge_pud(walk->vma, pud, addr); - if (pud_none(*pud)) + if (pud_none(pudp_get(pud))) goto again; err = walk_pmd_range(pud, addr, next, walk); diff --git a/mm/percpu.c b/mm/percpu.c index 7ee77c0fd5e3..5f32164b04a2 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -3200,7 +3200,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr) } pud = pud_offset(p4d, addr); - if (pud_none(*pud)) { + if (pud_none(pudp_get(pud))) { pmd = 
memblock_alloc(PMD_TABLE_SIZE, PMD_TABLE_SIZE); if (!pmd) goto err_alloc; diff --git a/mm/pgalloc-track.h b/mm/pgalloc-track.h index e9e879de8649..0f6b809431a3 100644 --- a/mm/pgalloc-track.h +++ b/mm/pgalloc-track.h @@ -33,7 +33,7 @@ static inline pmd_t *pmd_alloc_track(struct mm_struct *mm, pud_t *pud, unsigned long address, pgtbl_mod_mask *mod_mask) { - if (unlikely(pud_none(*pud))) { + if (unlikely(pud_none(pudp_get(pud)))) { if (__pmd_alloc(mm, pud, address)) return NULL; *mod_mask |= PGTBL_PUD_MODIFIED; diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index 920947bb76cd..e09e3f920f7a 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -39,7 +39,7 @@ void p4d_clear_bad(p4d_t *p4d) #ifndef __PAGETABLE_PUD_FOLDED void pud_clear_bad(pud_t *pud) { - pud_ERROR(*pud); + pud_ERROR(pudp_get(pud)); pud_clear(pud); } #endif @@ -150,10 +150,10 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address, pud_t *pudp) { - pud_t pud; + pud_t pud, old_pud = pudp_get(pudp); VM_BUG_ON(address & ~HPAGE_PUD_MASK); - VM_BUG_ON(!pud_trans_huge(*pudp) && !pud_devmap(*pudp)); + VM_BUG_ON(!pud_trans_huge(old_pud) && !pud_devmap(old_pud)); pud = pudp_huge_get_and_clear(vma->vm_mm, address, pudp); flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE); return pud; diff --git a/mm/ptdump.c b/mm/ptdump.c index e17588a32012..32ae8e829329 100644 --- a/mm/ptdump.c +++ b/mm/ptdump.c @@ -30,7 +30,7 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr, unsigned long next, struct mm_walk *walk) { struct ptdump_state *st = walk->private; - pgd_t val = READ_ONCE(*pgd); + pgd_t val = pgdp_get(pgd); #if CONFIG_PGTABLE_LEVELS > 4 && \ (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) @@ -76,7 +76,7 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr, unsigned long next, struct mm_walk *walk) { struct ptdump_state *st = walk->private; - pud_t val = READ_ONCE(*pud); + pud_t val = pudp_get(pud); #if CONFIG_PGTABLE_LEVELS > 2 && \ (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) diff --git a/mm/rmap.c b/mm/rmap.c index 32e4920e419d..81f1946653e0 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -817,7 +817,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address) goto out; pud = pud_offset(p4d, address); - if (!pud_present(*pud)) + if (!pud_present(pudp_get(pud))) goto out; pmd = pmd_offset(pud, address); diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index c89706e107ce..d8ea64ec665f 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -203,7 +203,7 @@ void __weak __meminit pmd_init(void *addr) pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node) { pud_t *pud = pud_offset(p4d, addr); - if (pud_none(*pud)) { + if (pud_none(pudp_get(pud))) { void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node); if (!p) return NULL; diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 1da56cbe5feb..05292d998122 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -200,7 +200,7 @@ static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end, if (!IS_ALIGNED(phys_addr, PUD_SIZE)) return 0; - if (pud_present(*pud) && !pud_free_pmd_page(pud, addr)) + if (pud_present(pudp_get(pud)) && !pud_free_pmd_page(pud, addr)) return 0; return pud_set_huge(pud, phys_addr, prot); @@ -396,7 +396,7 @@ static void vunmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, next = pud_addr_end(addr, end); cleared = pud_clear_huge(pud); 
- if (cleared || pud_bad(*pud)) + if (cleared || pud_bad(pudp_get(pud))) *mask |= PGTBL_PUD_MODIFIED; if (cleared) @@ -742,7 +742,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr) struct page *page = NULL; pgd_t *pgd = pgd_offset_k(addr); p4d_t *p4d; - pud_t *pud; + pud_t *pud, old_pud; pmd_t *pmd, old_pmd; pte_t *ptep, pte; @@ -768,11 +768,12 @@ struct page *vmalloc_to_page(const void *vmalloc_addr) return NULL; pud = pud_offset(p4d, addr); - if (pud_none(*pud)) + old_pud = pudp_get(pud); + if (pud_none(old_pud)) return NULL; - if (pud_leaf(*pud)) - return pud_page(*pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); - if (WARN_ON_ONCE(pud_bad(*pud))) + if (pud_leaf(old_pud)) + return pud_page(old_pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT); + if (WARN_ON_ONCE(pud_bad(old_pud))) return NULL; pmd = pmd_offset(pud, addr); diff --git a/mm/vmscan.c b/mm/vmscan.c index bd489c1af228..04b03e6c3095 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3421,7 +3421,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area DEFINE_MAX_SEQ(walk->lruvec); int old_gen, new_gen = lru_gen_from_seq(max_seq); - VM_WARN_ON_ONCE(pud_leaf(*pud)); + VM_WARN_ON_ONCE(pud_leaf(pudp_get(pud))); /* try to batch at most 1+MIN_LRU_BATCH+1 entries */ if (*first == -1) { @@ -3501,7 +3501,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end, struct lru_gen_mm_walk *walk = args->private; struct lru_gen_mm_state *mm_state = get_mm_state(walk->lruvec); - VM_WARN_ON_ONCE(pud_leaf(*pud)); + VM_WARN_ON_ONCE(pud_leaf(pudp_get(pud))); /* * Finish an entire PMD in two passes: the first only reaches to PTE From patchwork Tue Sep 17 07:31:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 13805946 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 997CBC35FEC for ; Tue, 17 Sep 2024 07:32:13 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 358D36B0096; Tue, 17 Sep 2024 03:32:13 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 308706B0098; Tue, 17 Sep 2024 03:32:13 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1D14C6B0099; Tue, 17 Sep 2024 03:32:13 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id EACF76B0096 for ; Tue, 17 Sep 2024 03:32:12 -0400 (EDT) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id A439CC01AA for ; Tue, 17 Sep 2024 07:32:12 +0000 (UTC) X-FDA: 82573411704.27.1A98238 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf29.hostedemail.com (Postfix) with ESMTP id 136C512000A for ; Tue, 17 Sep 2024 07:32:10 +0000 (UTC) Authentication-Results: imf29.hostedemail.com; dkim=none; spf=pass (imf29.hostedemail.com: domain of anshuman.khandual@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=anshuman.khandual@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1726558220; h=from:from:sender:reply-to:subject:subject:date:date: 
From: Anshuman Khandual
To: linux-mm@kvack.org
Cc: Anshuman Khandual, Andrew Morton, David Hildenbrand, Ryan Roberts, "Mike Rapoport (IBM)", Arnd Bergmann, x86@kernel.org, linux-m68k@lists.linux-m68k.org, linux-fsdevel@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Dimitri Sivanich, Alexander Viro, Muchun Song, Andrey Ryabinin, Miaohe Lin, Dennis Zhou, Tejun Heo, Christoph Lameter, Uladzislau Rezki, Christoph Hellwig
Subject: [PATCH V2 6/7] mm: Use p4dp_get() for accessing P4D entries
Date: Tue, 17 Sep 2024 13:01:16 +0530
Message-Id: <20240917073117.1531207-7-anshuman.khandual@arm.com>
In-Reply-To: <20240917073117.1531207-1-anshuman.khandual@arm.com>
References: <20240917073117.1531207-1-anshuman.khandual@arm.com>

Convert P4D accesses to use the p4dp_get() helper, which defaults to READ_ONCE() but also gives the platform an opportunity to override it when required. Where an entry is inspected more than once, the value read is stored in a local variable and reused thereafter, which avoids repeated memory loads as well as possible race conditions.

Cc: Dimitri Sivanich Cc: Alexander Viro Cc: Muchun Song Cc: Andrey Ryabinin Cc: Miaohe Lin Cc: Dennis Zhou Cc: Tejun Heo Cc: Christoph Lameter Cc: Uladzislau Rezki Cc: Christoph Hellwig Cc: linux-kernel@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Cc: linux-perf-users@vger.kernel.org Cc: linux-mm@kvack.org Cc: kasan-dev@googlegroups.com Signed-off-by: Anshuman Khandual
--- drivers/misc/sgi-gru/grufault.c | 2 +- fs/userfaultfd.c | 2 +- include/linux/pgtable.h | 9 ++++++--- kernel/events/core.c | 2 +- mm/gup.c | 6 +++--- mm/hugetlb.c | 2 +- mm/kasan/init.c | 10 +++++----- mm/kasan/shadow.c | 2 +- mm/memory-failure.c | 2 +- mm/memory.c | 16 +++++++++------- mm/page_vma_mapped.c | 2 +- mm/percpu.c | 2 +- mm/pgalloc-track.h | 2 +- mm/pgtable-generic.c | 2 +- mm/ptdump.c | 2 +- mm/rmap.c | 2 +- mm/sparse-vmemmap.c | 2 +- mm/vmalloc.c | 15 ++++++++------- mm/vmscan.c | 2 +- 19 files changed, 45 insertions(+), 39 deletions(-)
diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c index 95d479d5e40f..fcaceac60659 100644 --- a/drivers/misc/sgi-gru/grufault.c +++ b/drivers/misc/sgi-gru/grufault.c @@ -216,7 +216,7 @@ static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr, goto err; p4dp = p4d_offset(pgdp, vaddr); - if (unlikely(p4d_none(*p4dp))) + if (unlikely(p4d_none(p4dp_get(p4dp)))) goto err; pudp = pud_offset(p4dp, vaddr);
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 00719a0f688c..4044e15cdfd9 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -307,7 +307,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx, if (!pgd_present(*pgd)) goto out; p4d = p4d_offset(pgd, address); - if (!p4d_present(*p4d)) + if (!p4d_present(p4dp_get(p4d))) goto out; pud = pud_offset(p4d, address); if (!pud_present(pudp_get(pud)))
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index eb993ef0946f..689cd5a32157 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1081,7 +1081,8 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b) #define set_p4d_safe(p4dp, p4d) \ ({ \ - WARN_ON_ONCE(p4d_present(*p4dp) && !p4d_same(*p4dp, p4d)); \ + p4d_t __old = p4dp_get(p4dp); \ + WARN_ON_ONCE(p4d_present(__old) && !p4d_same(__old, p4d)); \ set_p4d(p4dp, p4d); \ }) @@ -1251,9 +1252,11 @@ static inline int
pgd_none_or_clear_bad(pgd_t *pgd) static inline int p4d_none_or_clear_bad(p4d_t *p4d) { - if (p4d_none(*p4d)) + p4d_t old_p4d = p4dp_get(p4d); + + if (p4d_none(old_p4d)) return 1; - if (unlikely(p4d_bad(*p4d))) { + if (unlikely(p4d_bad(old_p4d))) { p4d_clear_bad(p4d); return 1; } diff --git a/kernel/events/core.c b/kernel/events/core.c index 35e2f2789246..4e56a276ed25 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7611,7 +7611,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr) return pgd_leaf_size(pgd); p4dp = p4d_offset_lockless(pgdp, pgd, addr); - p4d = READ_ONCE(*p4dp); + p4d = p4dp_get(p4dp); if (!p4d_present(p4d)) return 0; diff --git a/mm/gup.c b/mm/gup.c index 300fc7eb306c..3a97d0263052 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -1014,7 +1014,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma, p4d_t *p4dp, p4d; p4dp = p4d_offset(pgdp, address); - p4d = READ_ONCE(*p4dp); + p4d = p4dp_get(p4dp); BUILD_BUG_ON(p4d_leaf(p4d)); if (!p4d_present(p4d) || p4d_bad(p4d)) @@ -1114,7 +1114,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address, if (pgd_none(*pgd)) return -EFAULT; p4d = p4d_offset(pgd, address); - if (p4d_none(*p4d)) + if (p4d_none(p4dp_get(p4d))) return -EFAULT; pud = pud_offset(p4d, address); if (pud_none(pudp_get(pud))) @@ -3245,7 +3245,7 @@ static int gup_fast_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, p4dp = p4d_offset_lockless(pgdp, pgd, addr); do { - p4d_t p4d = READ_ONCE(*p4dp); + p4d_t p4d = p4dp_get(p4dp); next = p4d_addr_end(addr, end); if (!p4d_present(p4d)) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index a3820242b01e..4fdb91c8cc2b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -7454,7 +7454,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, if (!pgd_present(*pgd)) return NULL; p4d = p4d_offset(pgd, addr); - if (!p4d_present(*p4d)) + if (!p4d_present(p4dp_get(p4d))) return NULL; pud = pud_offset(p4d, addr); diff --git a/mm/kasan/init.c b/mm/kasan/init.c index f4cf519443e1..02af738fee5e 100644 --- a/mm/kasan/init.c +++ b/mm/kasan/init.c @@ -208,7 +208,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr, continue; } - if (p4d_none(*p4d)) { + if (p4d_none(p4dp_get(p4d))) { pud_t *p; if (slab_is_available()) { @@ -330,7 +330,7 @@ static void kasan_free_pud(pud_t *pud_start, p4d_t *p4d) return; } - pud_free(&init_mm, (pud_t *)page_to_virt(p4d_page(*p4d))); + pud_free(&init_mm, (pud_t *)page_to_virt(p4d_page(p4dp_get(p4d)))); p4d_clear(p4d); } @@ -341,7 +341,7 @@ static void kasan_free_p4d(p4d_t *p4d_start, pgd_t *pgd) for (i = 0; i < PTRS_PER_P4D; i++) { p4d = p4d_start + i; - if (!p4d_none(*p4d)) + if (!p4d_none(p4dp_get(p4d))) return; } @@ -434,10 +434,10 @@ static void kasan_remove_p4d_table(p4d_t *p4d, unsigned long addr, next = p4d_addr_end(addr, end); - if (!p4d_present(*p4d)) + if (!p4d_present(p4dp_get(p4d))) continue; - if (kasan_pud_table(*p4d)) { + if (kasan_pud_table(p4dp_get(p4d))) { if (IS_ALIGNED(addr, P4D_SIZE) && IS_ALIGNED(next, P4D_SIZE)) { p4d_clear(p4d); diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c index dbd8164c75f1..52150cc5ae5f 100644 --- a/mm/kasan/shadow.c +++ b/mm/kasan/shadow.c @@ -194,7 +194,7 @@ static bool shadow_mapped(unsigned long addr) if (pgd_none(*pgd)) return false; p4d = p4d_offset(pgd, addr); - if (p4d_none(*p4d)) + if (p4d_none(p4dp_get(p4d))) return false; pud = pud_offset(p4d, addr); if (pud_none(pudp_get(pud))) diff --git a/mm/memory-failure.c b/mm/memory-failure.c index fbb63401fb51..3d900cc039b3 100644 --- 
a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -414,7 +414,7 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma, if (!pgd_present(*pgd)) return 0; p4d = p4d_offset(pgd, address); - if (!p4d_present(*p4d)) + if (!p4d_present(p4dp_get(p4d))) return 0; pud = pud_offset(p4d, address); if (!pud_present(pudp_get(pud))) diff --git a/mm/memory.c b/mm/memory.c index 801750e4337c..5056f39f2c3b 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2906,7 +2906,7 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd, pte_fn_t fn, void *data, bool create, pgtbl_mod_mask *mask) { - p4d_t *p4d; + p4d_t *p4d, old_p4d; unsigned long next; int err = 0; @@ -2919,11 +2919,12 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd, } do { next = p4d_addr_end(addr, end); - if (p4d_none(*p4d) && !create) + old_p4d = p4dp_get(p4d); + if (p4d_none(old_p4d) && !create) continue; - if (WARN_ON_ONCE(p4d_leaf(*p4d))) + if (WARN_ON_ONCE(p4d_leaf(old_p4d))) return -EINVAL; - if (!p4d_none(*p4d) && WARN_ON_ONCE(p4d_bad(*p4d))) { + if (!p4d_none(old_p4d) && WARN_ON_ONCE(p4d_bad(old_p4d))) { if (!create) continue; p4d_clear_bad(p4d); @@ -6075,7 +6076,7 @@ int __pud_alloc(struct mm_struct *mm, p4d_t *p4d, unsigned long address) return -ENOMEM; spin_lock(&mm->page_table_lock); - if (!p4d_present(*p4d)) { + if (!p4d_present(p4dp_get(p4d))) { mm_inc_nr_puds(mm); smp_wmb(); /* See comment in pmd_install() */ p4d_populate(mm, p4d, new); @@ -6143,7 +6144,7 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address, { struct mm_struct *mm = vma->vm_mm; pgd_t *pgd; - p4d_t *p4d; + p4d_t *p4d, old_p4d; pud_t *pud; pmd_t *pmd; pte_t *ptep; @@ -6160,7 +6161,8 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address, goto out; p4d = p4d_offset(pgd, address); - if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d))) + old_p4d = p4dp_get(p4d); + if (p4d_none(old_p4d) || unlikely(p4d_bad(old_p4d))) goto out; pud = pud_offset(p4d, address); diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c index 511266307771..a33f92db2666 100644 --- a/mm/page_vma_mapped.c +++ b/mm/page_vma_mapped.c @@ -217,7 +217,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw) continue; } p4d = p4d_offset(pgd, pvmw->address); - if (!p4d_present(*p4d)) { + if (!p4d_present(p4dp_get(p4d))) { step_forward(pvmw, P4D_SIZE); continue; } diff --git a/mm/percpu.c b/mm/percpu.c index 5f32164b04a2..58660e8eb892 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -3192,7 +3192,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr) } p4d = p4d_offset(pgd, addr); - if (p4d_none(*p4d)) { + if (p4d_none(p4dp_get(p4d))) { pud = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE); if (!pud) goto err_alloc; diff --git a/mm/pgalloc-track.h b/mm/pgalloc-track.h index 0f6b809431a3..3db8ccbcb141 100644 --- a/mm/pgalloc-track.h +++ b/mm/pgalloc-track.h @@ -20,7 +20,7 @@ static inline pud_t *pud_alloc_track(struct mm_struct *mm, p4d_t *p4d, unsigned long address, pgtbl_mod_mask *mod_mask) { - if (unlikely(p4d_none(*p4d))) { + if (unlikely(p4d_none(p4dp_get(p4d)))) { if (__pud_alloc(mm, p4d, address)) return NULL; *mod_mask |= PGTBL_P4D_MODIFIED; diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c index e09e3f920f7a..f5ab52beb536 100644 --- a/mm/pgtable-generic.c +++ b/mm/pgtable-generic.c @@ -31,7 +31,7 @@ void pgd_clear_bad(pgd_t *pgd) #ifndef __PAGETABLE_P4D_FOLDED void p4d_clear_bad(p4d_t *p4d) { - p4d_ERROR(*p4d); + p4d_ERROR(p4dp_get(p4d)); p4d_clear(p4d); } #endif diff --git a/mm/ptdump.c b/mm/ptdump.c index 
32ae8e829329..2c40224b8ad0 100644 --- a/mm/ptdump.c +++ b/mm/ptdump.c @@ -53,7 +53,7 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr, unsigned long next, struct mm_walk *walk) { struct ptdump_state *st = walk->private; - p4d_t val = READ_ONCE(*p4d); + p4d_t val = p4dp_get(p4d); #if CONFIG_PGTABLE_LEVELS > 3 && \ (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) diff --git a/mm/rmap.c b/mm/rmap.c index 81f1946653e0..a0ff325467eb 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -813,7 +813,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address) goto out; p4d = p4d_offset(pgd, address); - if (!p4d_present(*p4d)) + if (!p4d_present(p4dp_get(p4d))) goto out; pud = pud_offset(p4d, address); diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index d8ea64ec665f..2bd1c95f107a 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -220,7 +220,7 @@ void __weak __meminit pud_init(void *addr) p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node) { p4d_t *p4d = p4d_offset(pgd, addr); - if (p4d_none(*p4d)) { + if (p4d_none(p4dp_get(p4d))) { void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node); if (!p) return NULL; diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 05292d998122..f27ecac7bd6e 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -251,7 +251,7 @@ static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end, if (!IS_ALIGNED(phys_addr, P4D_SIZE)) return 0; - if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr)) + if (p4d_present(p4dp_get(p4d)) && !p4d_free_pud_page(p4d, addr)) return 0; return p4d_set_huge(p4d, phys_addr, prot); @@ -418,7 +418,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, next = p4d_addr_end(addr, end); p4d_clear_huge(p4d); - if (p4d_bad(*p4d)) + if (p4d_bad(p4dp_get(p4d))) *mask |= PGTBL_P4D_MODIFIED; if (p4d_none_or_clear_bad(p4d)) @@ -741,7 +741,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr) unsigned long addr = (unsigned long) vmalloc_addr; struct page *page = NULL; pgd_t *pgd = pgd_offset_k(addr); - p4d_t *p4d; + p4d_t *p4d, old_p4d; pud_t *pud, old_pud; pmd_t *pmd, old_pmd; pte_t *ptep, pte; @@ -760,11 +760,12 @@ struct page *vmalloc_to_page(const void *vmalloc_addr) return NULL; p4d = p4d_offset(pgd, addr); - if (p4d_none(*p4d)) + old_p4d = p4dp_get(p4d); + if (p4d_none(old_p4d)) return NULL; - if (p4d_leaf(*p4d)) - return p4d_page(*p4d) + ((addr & ~P4D_MASK) >> PAGE_SHIFT); - if (WARN_ON_ONCE(p4d_bad(*p4d))) + if (p4d_leaf(old_p4d)) + return p4d_page(old_p4d) + ((addr & ~P4D_MASK) >> PAGE_SHIFT); + if (WARN_ON_ONCE(p4d_bad(old_p4d))) return NULL; pud = pud_offset(p4d, addr); diff --git a/mm/vmscan.c b/mm/vmscan.c index 04b03e6c3095..b16925b5f072 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3579,7 +3579,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end, unsigned long next; struct lru_gen_mm_walk *walk = args->private; - VM_WARN_ON_ONCE(p4d_leaf(*p4d)); + VM_WARN_ON_ONCE(p4d_leaf(p4dp_get(p4d))); pud = pud_offset(p4d, start & P4D_MASK); restart: From patchwork Tue Sep 17 07:31:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 13805947 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1337C35FEB for ; Tue, 17 Sep 2024 
07:32:21 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 52E6B6B009A; Tue, 17 Sep 2024 03:32:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4B6D36B009B; Tue, 17 Sep 2024 03:32:21 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3304F6B009C; Tue, 17 Sep 2024 03:32:21 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 0AE836B009A for ; Tue, 17 Sep 2024 03:32:21 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id B3F58A79F9 for ; Tue, 17 Sep 2024 07:32:20 +0000 (UTC) X-FDA: 82573412040.02.6EC09AA Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by imf16.hostedemail.com (Postfix) with ESMTP id 14130180009 for ; Tue, 17 Sep 2024 07:32:18 +0000 (UTC) Authentication-Results: imf16.hostedemail.com; dkim=none; spf=pass (imf16.hostedemail.com: domain of anshuman.khandual@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=anshuman.khandual@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1726558192; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=42BX7NZPRjMZYqs+dFskLn0l9B42cpflvCwiEFssJh4=; b=fVL/Ha7nF8I5Ilt2uLK2EDD8QiN0VcrnMnsWdGxttcbWUWqWCLXn9F9DnAfk5zoeATzSbr +CJd+p7qJiL5iX9GQhyG8K3CgmJJNQCDJj8iQUlzT5Oi2eT98NhmBGEuUEdBPc7XQSR7X4 mxvvioe64zz/pyx1Pr4gkdQ+IGVUlRM= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=none; spf=pass (imf16.hostedemail.com: domain of anshuman.khandual@arm.com designates 217.140.110.172 as permitted sender) smtp.mailfrom=anshuman.khandual@arm.com; dmarc=pass (policy=none) header.from=arm.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1726558192; a=rsa-sha256; cv=none; b=y2KPXtlGV39LN8THKoLHUu2txDins5F6nni66x+GRs8L5PMB5lhFyXIxaKn1bxaN3UFPbt JWTA+oo6aKKf3OSwUSg2GE4lElpzNnPtATSisg0H82KzsYbYFH4efSC1efZgrqA1abNzs0 WHlGvtQ5XaWNOPH5xswqRUlMPuVdd40= Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D9BCB1063; Tue, 17 Sep 2024 00:32:47 -0700 (PDT) Received: from a077893.arm.com (unknown [10.163.61.158]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 19E0A3F64C; Tue, 17 Sep 2024 00:32:10 -0700 (PDT) From: Anshuman Khandual To: linux-mm@kvack.org Cc: Anshuman Khandual , Andrew Morton , David Hildenbrand , Ryan Roberts , "Mike Rapoport (IBM)" , Arnd Bergmann , x86@kernel.org, linux-m68k@lists.linux-m68k.org, linux-fsdevel@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Dimitri Sivanich , Alexander Viro , Muchun Song , Andrey Ryabinin , Miaohe Lin , Dennis Zhou , Tejun Heo , Christoph Lameter , Uladzislau Rezki , Christoph Hellwig Subject: [PATCH V2 7/7] mm: Use pgdp_get() for accessing PGD entries Date: Tue, 17 Sep 2024 13:01:17 +0530 Message-Id: <20240917073117.1531207-8-anshuman.khandual@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240917073117.1531207-1-anshuman.khandual@arm.com> References: <20240917073117.1531207-1-anshuman.khandual@arm.com> MIME-Version: 1.0 X-Rspamd-Server: 

Convert PGD accesses to go via the pgdp_get() helper, which defaults to
READ_ONCE() but also gives platforms an opportunity to override it when
required. The page table entry value that is read gets stored in a local
variable and reused thereafter, which avoids repeated memory loads and the
possible race conditions that re-reading the entry could otherwise introduce.
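[Editor's note] For readers unfamiliar with the helper, the generic fallback described above amounts to a single READ_ONCE() of the entry, with a per-architecture override hook. The fragment below is only an illustrative sketch of that contract and of the read-once-and-reuse pattern this patch applies throughout; example_walk() is a hypothetical function, not code from this series.

/*
 * Minimal sketch of the generic helper: one READ_ONCE() of the PGD
 * entry. Architectures that need something different (for example,
 * entries wider than a machine word) can supply their own pgdp_get().
 */
#ifndef pgdp_get
static inline pgd_t pgdp_get(pgd_t *pgdp)
{
	return READ_ONCE(*pgdp);
}
#endif

/* Hypothetical walker fragment showing the read-once-and-reuse pattern. */
static int example_walk(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd = pgd_offset(mm, addr);
	pgd_t old_pgd = pgdp_get(pgd);	/* single load, reused below */

	if (pgd_none(old_pgd) || unlikely(pgd_bad(old_pgd)))
		return -EFAULT;

	/* ... continue the walk with p4d_offset(pgd, addr) and so on ... */
	return 0;
}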
Cc: Dimitri Sivanich
Cc: Alexander Viro
Cc: Muchun Song
Cc: Andrey Ryabinin
Cc: Miaohe Lin
Cc: Dennis Zhou
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: Uladzislau Rezki
Cc: Christoph Hellwig
Cc: linux-kernel@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-perf-users@vger.kernel.org
Cc: kasan-dev@googlegroups.com
Signed-off-by: Anshuman Khandual
---
 drivers/misc/sgi-gru/grufault.c |  2 +-
 fs/userfaultfd.c                |  2 +-
 include/linux/mm.h              |  2 +-
 include/linux/pgtable.h         |  9 ++++++---
 kernel/events/core.c            |  2 +-
 mm/gup.c                        | 11 ++++++-----
 mm/hugetlb.c                    |  2 +-
 mm/kasan/init.c                 |  8 ++++----
 mm/kasan/shadow.c               |  2 +-
 mm/memory-failure.c             |  2 +-
 mm/memory.c                     | 16 +++++++++-------
 mm/page_vma_mapped.c            |  2 +-
 mm/percpu.c                     |  2 +-
 mm/pgalloc-track.h              |  2 +-
 mm/pgtable-generic.c            |  2 +-
 mm/rmap.c                       |  2 +-
 mm/sparse-vmemmap.c             |  2 +-
 mm/vmalloc.c                    | 13 +++++++------
 18 files changed, 45 insertions(+), 38 deletions(-)

diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index fcaceac60659..6aeccbd440e7 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -212,7 +212,7 @@ static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr,
 	pte_t pte;
 
 	pgdp = pgd_offset(vma->vm_mm, vaddr);
-	if (unlikely(pgd_none(*pgdp)))
+	if (unlikely(pgd_none(pgdp_get(pgdp))))
 		goto err;
 
 	p4dp = p4d_offset(pgdp, vaddr);
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 4044e15cdfd9..6d33c7a9eb01 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -304,7 +304,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	assert_fault_locked(vmf);
 
 	pgd = pgd_offset(mm, address);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		goto out;
 	p4d = p4d_offset(pgd, address);
 	if (!p4d_present(p4dp_get(p4d)))
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1bb1599b5779..1978a4b1fcf5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2819,7 +2819,7 @@ int __pte_alloc_kernel(pmd_t *pmd);
 static inline p4d_t *p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
 		unsigned long address)
 {
-	return (unlikely(pgd_none(*pgd)) && __p4d_alloc(mm, pgd, address)) ?
+	return (unlikely(pgd_none(pgdp_get(pgd))) && __p4d_alloc(mm, pgd, address)) ?
 		NULL : p4d_offset(pgd, address);
 }
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 689cd5a32157..6d12ae7e3982 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1088,7 +1088,8 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b)
 
 #define set_pgd_safe(pgdp, pgd) \
 ({ \
-	WARN_ON_ONCE(pgd_present(*pgdp) && !pgd_same(*pgdp, pgd)); \
+	pgd_t __old = pgdp_get(pgdp); \
+	WARN_ON_ONCE(pgd_present(__old) && !pgd_same(__old, pgd)); \
 	set_pgd(pgdp, pgd); \
 })
 
@@ -1241,9 +1242,11 @@ void pmd_clear_bad(pmd_t *);
 
 static inline int pgd_none_or_clear_bad(pgd_t *pgd)
 {
-	if (pgd_none(*pgd))
+	pgd_t old_pgd = pgdp_get(pgd);
+
+	if (pgd_none(old_pgd))
 		return 1;
-	if (unlikely(pgd_bad(*pgd))) {
+	if (unlikely(pgd_bad(old_pgd))) {
 		pgd_clear_bad(pgd);
 		return 1;
 	}
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4e56a276ed25..1e3142211cce 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7603,7 +7603,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr)
 	pte_t *ptep, pte;
 
 	pgdp = pgd_offset(mm, addr);
-	pgd = READ_ONCE(*pgdp);
+	pgd = pgdp_get(pgdp);
 	if (pgd_none(pgd))
 		return 0;
 
diff --git a/mm/gup.c b/mm/gup.c
index 3a97d0263052..3aff3555ba19 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1051,7 +1051,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 			      unsigned long address, unsigned int flags,
 			      struct follow_page_context *ctx)
 {
-	pgd_t *pgd;
+	pgd_t *pgd, old_pgd;
 	struct mm_struct *mm = vma->vm_mm;
 	struct page *page;
 
@@ -1060,7 +1060,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	ctx->page_mask = 0;
 	pgd = pgd_offset(mm, address);
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+	old_pgd = pgdp_get(pgd);
+	if (pgd_none(old_pgd) || unlikely(pgd_bad(old_pgd)))
 		page = no_page_table(vma, flags, address);
 	else
 		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
@@ -1111,7 +1112,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 		pgd = pgd_offset_k(address);
 	else
 		pgd = pgd_offset_gate(mm, address);
-	if (pgd_none(*pgd))
+	if (pgd_none(pgdp_get(pgd)))
 		return -EFAULT;
 	p4d = p4d_offset(pgd, address);
 	if (p4d_none(p4dp_get(p4d)))
@@ -3158,7 +3159,7 @@ static int gup_fast_pgd_leaf(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	if (!folio)
 		return 0;
 
-	if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) {
+	if (unlikely(pgd_val(orig) != pgd_val(pgdp_get(pgdp)))) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
 	}
@@ -3267,7 +3268,7 @@ static void gup_fast_pgd_range(unsigned long addr, unsigned long end,
 
 	pgdp = pgd_offset(current->mm, addr);
 	do {
-		pgd_t pgd = READ_ONCE(*pgdp);
+		pgd_t pgd = pgdp_get(pgdp);
 
 		next = pgd_addr_end(addr, end);
 		if (pgd_none(pgd))
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4fdb91c8cc2b..294d74b03d83 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7451,7 +7451,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	pmd_t *pmd;
 
 	pgd = pgd_offset(mm, addr);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		return NULL;
 	p4d = p4d_offset(pgd, addr);
 	if (!p4d_present(p4dp_get(p4d)))
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index 02af738fee5e..c2b307716551 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -271,7 +271,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 			continue;
 		}
 
-		if (pgd_none(*pgd)) {
+		if (pgd_none(pgdp_get(pgd))) {
 			p4d_t *p;
 
 			if (slab_is_available()) {
@@ -345,7 +345,7 @@ static void kasan_free_p4d(p4d_t *p4d_start, pgd_t *pgd)
 			return;
 	}
 
-	p4d_free(&init_mm, (p4d_t *)page_to_virt(pgd_page(*pgd)));
+	p4d_free(&init_mm, (p4d_t *)page_to_virt(pgd_page(pgdp_get(pgd))));
 	pgd_clear(pgd);
 }
 
@@ -468,10 +468,10 @@ void kasan_remove_zero_shadow(void *start, unsigned long size)
 		next = pgd_addr_end(addr, end);
 
 		pgd = pgd_offset_k(addr);
-		if (!pgd_present(*pgd))
+		if (!pgd_present(pgdp_get(pgd)))
 			continue;
 
-		if (kasan_p4d_table(*pgd)) {
+		if (kasan_p4d_table(pgdp_get(pgd))) {
 			if (IS_ALIGNED(addr, PGDIR_SIZE) &&
 			    IS_ALIGNED(next, PGDIR_SIZE)) {
 				pgd_clear(pgd);
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 52150cc5ae5f..7f3c46237816 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -191,7 +191,7 @@ static bool shadow_mapped(unsigned long addr)
 	pmd_t *pmd;
 	pte_t *pte;
 
-	if (pgd_none(*pgd))
+	if (pgd_none(pgdp_get(pgd)))
 		return false;
 	p4d = p4d_offset(pgd, addr);
 	if (p4d_none(p4dp_get(p4d)))
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3d900cc039b3..c9397eab52bd 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -411,7 +411,7 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
 
 	VM_BUG_ON_VMA(address == -EFAULT, vma);
 	pgd = pgd_offset(vma->vm_mm, address);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		return 0;
 	p4d = p4d_offset(pgd, address);
 	if (!p4d_present(p4dp_get(p4d)))
diff --git a/mm/memory.c b/mm/memory.c
index 5056f39f2c3b..b4845a84ceb5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2942,7 +2942,7 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 				 unsigned long size, pte_fn_t fn,
 				 void *data, bool create)
 {
-	pgd_t *pgd;
+	pgd_t *pgd, old_pgd;
 	unsigned long start = addr, next;
 	unsigned long end = addr + size;
 	pgtbl_mod_mask mask = 0;
@@ -2954,11 +2954,12 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	pgd = pgd_offset(mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (pgd_none(*pgd) && !create)
+		old_pgd = pgdp_get(pgd);
+		if (pgd_none(old_pgd) && !create)
 			continue;
-		if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+		if (WARN_ON_ONCE(pgd_leaf(old_pgd)))
 			return -EINVAL;
-		if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
+		if (!pgd_none(old_pgd) && WARN_ON_ONCE(pgd_bad(old_pgd))) {
 			if (!create)
 				continue;
 			pgd_clear_bad(pgd);
@@ -6053,7 +6054,7 @@ int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
 		return -ENOMEM;
 
 	spin_lock(&mm->page_table_lock);
-	if (pgd_present(*pgd)) {	/* Another has populated it */
+	if (pgd_present(pgdp_get(pgd))) {	/* Another has populated it */
 		p4d_free(mm, new);
 	} else {
 		smp_wmb(); /* See comment in pmd_install() */
@@ -6143,7 +6144,7 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	pgd_t *pgd;
+	pgd_t *pgd, old_pgd;
 	p4d_t *p4d, old_p4d;
 	pud_t *pud;
 	pmd_t *pmd;
@@ -6157,7 +6158,8 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address,
 		goto out;
 
 	pgd = pgd_offset(mm, address);
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+	old_pgd = pgdp_get(pgd);
+	if (pgd_none(old_pgd) || unlikely(pgd_bad(old_pgd)))
 		goto out;
 
 	p4d = p4d_offset(pgd, address);
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index a33f92db2666..fb8b610f7378 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -212,7 +212,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 restart:
 	do {
 		pgd = pgd_offset(mm, pvmw->address);
-		if (!pgd_present(*pgd)) {
+		if (!pgd_present(pgdp_get(pgd))) {
 			step_forward(pvmw, PGDIR_SIZE);
 			continue;
 		}
diff --git a/mm/percpu.c b/mm/percpu.c
index 58660e8eb892..70e68ab002e9 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3184,7 +3184,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
 	pud_t *pud;
 	pmd_t *pmd;
 
-	if (pgd_none(*pgd)) {
+	if (pgd_none(pgdp_get(pgd))) {
 		p4d = memblock_alloc(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
 		if (!p4d)
 			goto err_alloc;
diff --git a/mm/pgalloc-track.h b/mm/pgalloc-track.h
index 3db8ccbcb141..644f632c7cba 100644
--- a/mm/pgalloc-track.h
+++ b/mm/pgalloc-track.h
@@ -7,7 +7,7 @@ static inline p4d_t *p4d_alloc_track(struct mm_struct *mm, pgd_t *pgd,
 				     unsigned long address,
 				     pgtbl_mod_mask *mod_mask)
 {
-	if (unlikely(pgd_none(*pgd))) {
+	if (unlikely(pgd_none(pgdp_get(pgd)))) {
 		if (__p4d_alloc(mm, pgd, address))
 			return NULL;
 		*mod_mask |= PGTBL_PGD_MODIFIED;
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index f5ab52beb536..16c1ed5b3d0b 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -24,7 +24,7 @@
 
 void pgd_clear_bad(pgd_t *pgd)
 {
-	pgd_ERROR(*pgd);
+	pgd_ERROR(pgdp_get(pgd));
 	pgd_clear(pgd);
 }
 
diff --git a/mm/rmap.c b/mm/rmap.c
index a0ff325467eb..5f4c52f34192 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -809,7 +809,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
 	pmd_t *pmd = NULL;
 
 	pgd = pgd_offset(mm, address);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		goto out;
 
 	p4d = p4d_offset(pgd, address);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 2bd1c95f107a..ffc78329a130 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -233,7 +233,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
 pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 {
 	pgd_t *pgd = pgd_offset_k(addr);
-	if (pgd_none(*pgd)) {
+	if (pgd_none(pgdp_get(pgd))) {
 		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f27ecac7bd6e..a40323a8c6ab 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -450,7 +450,7 @@ void __vunmap_range_noflush(unsigned long start, unsigned long end)
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (pgd_bad(*pgd))
+		if (pgd_bad(pgdp_get(pgd)))
 			mask |= PGTBL_PGD_MODIFIED;
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
@@ -582,7 +582,7 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (pgd_bad(*pgd))
+		if (pgd_bad(pgdp_get(pgd)))
 			mask |= PGTBL_PGD_MODIFIED;
 		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
 		if (err)
@@ -740,7 +740,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 {
 	unsigned long addr = (unsigned long) vmalloc_addr;
 	struct page *page = NULL;
-	pgd_t *pgd = pgd_offset_k(addr);
+	pgd_t *pgd = pgd_offset_k(addr), old_pgd;
 	p4d_t *p4d, old_p4d;
 	pud_t *pud, old_pud;
 	pmd_t *pmd, old_pmd;
@@ -752,11 +752,12 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 	 */
 	VIRTUAL_BUG_ON(!is_vmalloc_or_module_addr(vmalloc_addr));
 
-	if (pgd_none(*pgd))
+	old_pgd = pgdp_get(pgd);
+	if (pgd_none(old_pgd))
 		return NULL;
-	if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+	if (WARN_ON_ONCE(pgd_leaf(old_pgd)))
 		return NULL; /* XXX: no allowance for huge pgd */
-	if (WARN_ON_ONCE(pgd_bad(*pgd)))
+	if (WARN_ON_ONCE(pgd_bad(old_pgd)))
 		return NULL;
 
 	p4d = p4d_offset(pgd, addr);