From patchwork Tue Jan 21 09:57:30 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13946010
From: Mike Rapoport
To: x86@kernel.org
Cc: Andrew Morton, Andy Lutomirski, Anton Ivanov, Borislav Petkov,
	Brendan Higgins, Daniel Gomez, Daniel Thompson, Dave Hansen,
	David Gow, Douglas Anderson, Ingo Molnar, Jason Wessel,
	Jiri Kosina, Joe Lawrence, Johannes Berg, Josh Poimboeuf,
	"Kirill A. Shutemov", Lorenzo Stoakes, Luis Chamberlain,
	Mark Rutland, Masami Hiramatsu, Mike Rapoport, Miroslav Benes,
	"H. Peter Anvin", Peter Zijlstra, Petr Mladek, Petr Pavlu,
	Rae Moar, Richard Weinberger, Sami Tolvanen, Shuah Khan,
	Song Liu, Steven Rostedt, Thomas Gleixner,
	kgdb-bugreport@lists.sourceforge.net, kunit-dev@googlegroups.com,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-mm@kvack.org, linux-modules@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, linux-um@lists.infradead.org,
	live-patching@vger.kernel.org
Subject: [PATCH v2 01/10] x86/mm/pat: cpa-test: fix length for CPA_ARRAY test
Date: Tue, 21 Jan 2025 11:57:30 +0200
Message-ID: <20250121095739.986006-2-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

The CPA_ARRAY test always uses len[1] as the numpages argument to
change_page_attr_set() although the addresses array is different in
each iteration of the test loop.

Replace len[1] with len[i] so that numpages matches the addresses
array built for the current iteration.
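To make the one-line fix concrete, here is a hedged toy model (plain C with invented names, not the kernel code): each iteration of the test loop fills an addresses array with len[i] entries, so passing the fixed len[1] as numpages only succeeds when the two lengths happen to coincide.

```c
#include <assert.h>

/* Toy lengths for four iterations of a CPA_ARRAY-style test loop. */
static const unsigned long toy_len[4] = { 3, 8, 5, 2 };

/* Stand-in for change_page_attr_set(): succeeds only when numpages
 * matches the number of entries the caller actually placed in the
 * addresses array. */
static int toy_change_page_attr_set(unsigned long filled, unsigned long numpages)
{
	return filled == numpages ? 0 : -22; /* -EINVAL */
}

/* One loop iteration; use_len_i selects the corrected len[i] variant,
 * otherwise the buggy fixed len[1] is passed. */
int toy_cpa_array_iter(int i, int use_len_i)
{
	unsigned long numpages = use_len_i ? toy_len[i] : toy_len[1];

	return toy_change_page_attr_set(toy_len[i], numpages);
}
```

With len[i] every iteration passes; with len[1] the call receives a numpages that is too large or too small whenever len[i] differs from len[1].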
Fixes: ecc729f1f471 ("x86/mm/cpa: Add ARRAY and PAGES_ARRAY selftests")
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/mm/pat/cpa-test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/cpa-test.c b/arch/x86/mm/pat/cpa-test.c
index 3d2f7f0a6ed1..ad3c1feec990 100644
--- a/arch/x86/mm/pat/cpa-test.c
+++ b/arch/x86/mm/pat/cpa-test.c
@@ -183,7 +183,7 @@ static int pageattr_test(void)
 			break;

 		case 1:
-			err = change_page_attr_set(addrs, len[1], PAGE_CPA_TEST, 1);
+			err = change_page_attr_set(addrs, len[i], PAGE_CPA_TEST, 1);
 			break;

 		case 2:

From patchwork Tue Jan 21 09:57:31 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13946011
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 02/10] x86/mm/pat: drop duplicate variable in cpa_flush()
Date: Tue, 21 Jan 2025 11:57:31 +0200
Message-ID: <20250121095739.986006-3-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

cpa_flush() has a 'struct cpa_data *data' parameter whose only use is
to initialize a local 'struct cpa_data *cpa' variable.

Rename the parameter from 'data' to 'cpa' and drop the declaration of
the local 'cpa' variable.
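The cleanup pattern in a minimal standalone sketch (invented toy names, not the kernel code): a parameter whose only purpose is to seed a local alias can simply take the alias's name, making the extra variable unnecessary.

```c
/* Toy structure standing in for struct cpa_data. */
struct toy_cpa_data { int numpages; };

/* Before: the 'data' parameter is immediately copied into 'cpa'. */
static int toy_flush_before(struct toy_cpa_data *data)
{
	struct toy_cpa_data *cpa = data;	/* redundant local alias */

	return cpa->numpages;
}

/* After: renaming the parameter removes the alias with no behavior
 * change. */
static int toy_flush_after(struct toy_cpa_data *cpa)
{
	return cpa->numpages;
}
```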
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/mm/pat/set_memory.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 95bc50a8541c..d43b919b11ae 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -396,9 +396,8 @@ static void __cpa_flush_tlb(void *data)
 		flush_tlb_one_kernel(fix_addr(__cpa_addr(cpa, i)));
 }

-static void cpa_flush(struct cpa_data *data, int cache)
+static void cpa_flush(struct cpa_data *cpa, int cache)
 {
-	struct cpa_data *cpa = data;
 	unsigned int i;

 	BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);

From patchwork Tue Jan 21 09:57:32 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13946012
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 03/10] x86/mm/pat: restore large ROX pages after fragmentation
Date: Tue, 21 Jan 2025 11:57:32 +0200
Message-ID: <20250121095739.986006-4-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>
From: "Kirill A. Shutemov"

Changing page attributes may fragment the direct mapping over time and
degrade performance, particularly when the affected pages contain
executable code.

With the current code this is a one-way street: the kernel tries to
avoid splitting large pages, but it never restores them, even when the
page attributes become compatible again. Yet any change to the mapping
is a potential opportunity to restore a large page.
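The heart of such a restore is a compatibility scan over the 512 PTEs covering a 2M range. A toy model (invented names and types, not the kernel implementation) of that check: every entry must be present, carry identical attribute flags, and map physically contiguous, suitably aligned frames.

```c
#define TOY_PTRS_PER_PTE 512	/* 4k PTEs per 2M range, as on x86 */

/* Minimal PTE stand-in: physical frame number plus attribute flags. */
struct toy_pte {
	unsigned long pfn;
	unsigned long flags;
	int present;
};

/* Returns 1 when all 512 entries could be replaced by one 2M mapping:
 * 2M-aligned first frame, every entry present, identical attribute
 * flags, and physically contiguous frames. */
int toy_can_collapse_pmd(const struct toy_pte *pte)
{
	unsigned long pfn = pte[0].pfn;
	int i;

	if (pfn % TOY_PTRS_PER_PTE)	/* alignment must be suitable */
		return 0;

	for (i = 1; i < TOY_PTRS_PER_PTE; i++) {
		if (!pte[i].present)
			return 0;
		if (pte[i].flags != pte[0].flags)
			return 0;
		if (pte[i].pfn != pfn + i)	/* frames must be contiguous */
			return 0;
	}
	return 1;
}
```

A single incompatible entry anywhere in the range is enough to keep the mapping split, which is why attribute changes that later become uniform again are worth revisiting.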
Add a hook to the cpa_flush() path that checks whether the pages in the
range that was just touched can be mapped at the PMD level. If the
collapse at the PMD level succeeds, also attempt to collapse the PUD
level.

The collapse logic runs only when a set_memory_*() method explicitly
sets the CPA_COLLAPSE flag; for now this is only enabled in
set_memory_rox().

CPUs don't like[1] having TLB entries of different sizes for the same
memory, but it appears to be okay as long as these entries have
matching attributes[2]. Therefore it's critical to flush the TLB before
any subsequent changes to the mapping.

Note that we already allow multiple TLB entries of different sizes for
the same memory in the split_large_page() path; this is not a new
situation.

set_memory_4k() provides a way to use 4k pages on purpose. The kernel
must not remap such pages as large, so re-use one of the software PTE
bits to mark them.

[1] See Erratum 383 of AMD Family 10h Processors
[2] https://lore.kernel.org/linux-mm/1da1b025-cabc-6f04-bde5-e50830d1ecf0@amd.com/

[rppt@kernel.org:
 * s/restore/collapse/
 * update formatting per peterz
 * use 'struct ptdesc' instead of 'struct page' for the list of page
   tables to be freed
 * try to collapse PMD first and, if it succeeds, move on to PUD as
   peterz suggested
 * flush TLB twice: for changes done in the original CPA call and after
   collapsing of large pages
 * update commit message
]

Link: https://lore.kernel.org/all/20200416213229.19174-1-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/include/asm/pgtable_types.h |   2 +
 arch/x86/mm/pat/set_memory.c         | 217 ++++++++++++++++++++++++++-
 include/linux/vm_event_item.h        |   2 +
 mm/vmstat.c                          |   2 +
 4 files changed, 219 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 4b804531b03c..c90e9c51edb7 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -33,6 +33,7 @@
 #define _PAGE_BIT_CPA_TEST	_PAGE_BIT_SOFTW1
 #define _PAGE_BIT_UFFD_WP	_PAGE_BIT_SOFTW2 /* userfaultfd wrprotected */
 #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
+#define _PAGE_BIT_KERNEL_4K	_PAGE_BIT_SOFTW3 /* page must not be converted to large */
 #define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4

 #ifdef CONFIG_X86_64
@@ -64,6 +65,7 @@
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
 #define _PAGE_CPA_TEST	(_AT(pteval_t, 1) << _PAGE_BIT_CPA_TEST)
+#define _PAGE_KERNEL_4K	(_AT(pteval_t, 1) << _PAGE_BIT_KERNEL_4K)
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
 #define _PAGE_PKEY_BIT0	(_AT(pteval_t, 1) << _PAGE_BIT_PKEY_BIT0)
 #define _PAGE_PKEY_BIT1	(_AT(pteval_t, 1) << _PAGE_BIT_PKEY_BIT1)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index d43b919b11ae..18c233048706 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -75,6 +75,7 @@ static DEFINE_SPINLOCK(cpa_lock);
 #define CPA_ARRAY		2
 #define CPA_PAGES_ARRAY		4
 #define CPA_NO_CHECK_ALIAS	8 /* Do not search for aliases */
+#define CPA_COLLAPSE		16 /* try to collapse large pages */

 static inline pgprot_t cachemode2pgprot(enum page_cache_mode pcm)
 {
@@ -107,6 +108,18 @@ static void split_page_count(int level)
 	direct_pages_count[level - 1] += PTRS_PER_PTE;
 }

+static void collapse_page_count(int level)
+{
+	direct_pages_count[level]++;
if (system_state == SYSTEM_RUNNING) { + if (level == PG_LEVEL_2M) + count_vm_event(DIRECT_MAP_LEVEL2_COLLAPSE); + else if (level == PG_LEVEL_1G) + count_vm_event(DIRECT_MAP_LEVEL3_COLLAPSE); + } + direct_pages_count[level - 1] -= PTRS_PER_PTE; +} + void arch_report_meminfo(struct seq_file *m) { seq_printf(m, "DirectMap4k: %8lu kB\n", @@ -124,6 +137,7 @@ void arch_report_meminfo(struct seq_file *m) } #else static inline void split_page_count(int level) { } +static inline void collapse_page_count(int level) { } #endif #ifdef CONFIG_X86_CPA_STATISTICS @@ -396,6 +410,40 @@ static void __cpa_flush_tlb(void *data) flush_tlb_one_kernel(fix_addr(__cpa_addr(cpa, i))); } +static int collapse_large_pages(unsigned long addr, struct list_head *pgtables); + +static void cpa_collapse_large_pages(struct cpa_data *cpa) +{ + unsigned long start, addr, end; + struct ptdesc *ptdesc, *tmp; + LIST_HEAD(pgtables); + int collapsed = 0; + int i; + + if (cpa->flags & (CPA_PAGES_ARRAY | CPA_ARRAY)) { + for (i = 0; i < cpa->numpages; i++) + collapsed += collapse_large_pages(__cpa_addr(cpa, i), + &pgtables); + } else { + addr = __cpa_addr(cpa, 0); + start = addr & PMD_MASK; + end = addr + PAGE_SIZE * cpa->numpages; + + for (addr = start; within(addr, start, end); addr += PMD_SIZE) + collapsed += collapse_large_pages(addr, &pgtables); + } + + if (!collapsed) + return; + + flush_tlb_all(); + + list_for_each_entry_safe(ptdesc, tmp, &pgtables, pt_list) { + list_del(&ptdesc->pt_list); + __free_page(ptdesc_page(ptdesc)); + } +} + static void cpa_flush(struct cpa_data *cpa, int cache) { unsigned int i; @@ -404,7 +452,7 @@ static void cpa_flush(struct cpa_data *cpa, int cache) if (cache && !static_cpu_has(X86_FEATURE_CLFLUSH)) { cpa_flush_all(cache); - return; + goto collapse_large_pages; } if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling) @@ -413,7 +461,7 @@ static void cpa_flush(struct cpa_data *cpa, int cache) on_each_cpu(__cpa_flush_tlb, cpa, 1); if (!cache) - return; + 
goto collapse_large_pages; mb(); for (i = 0; i < cpa->numpages; i++) { @@ -429,6 +477,10 @@ static void cpa_flush(struct cpa_data *cpa, int cache) clflush_cache_range_opt((void *)fix_addr(addr), PAGE_SIZE); } mb(); + +collapse_large_pages: + if (cpa->flags & CPA_COLLAPSE) + cpa_collapse_large_pages(cpa); } static bool overlaps(unsigned long r1_start, unsigned long r1_end, @@ -1198,6 +1250,161 @@ static int split_large_page(struct cpa_data *cpa, pte_t *kpte, return 0; } +static int collapse_pmd_page(pmd_t *pmd, unsigned long addr, + struct list_head *pgtables) +{ + pmd_t _pmd, old_pmd; + pte_t *pte, first; + unsigned long pfn; + pgprot_t pgprot; + int i = 0; + + addr &= PMD_MASK; + pte = pte_offset_kernel(pmd, addr); + first = *pte; + pfn = pte_pfn(first); + + /* Make sure alignment is suitable */ + if (PFN_PHYS(pfn) & ~PMD_MASK) + return 0; + + /* The page is 4k intentionally */ + if (pte_flags(first) & _PAGE_KERNEL_4K) + return 0; + + /* Check that the rest of PTEs are compatible with the first one */ + for (i = 1, pte++; i < PTRS_PER_PTE; i++, pte++) { + pte_t entry = *pte; + + if (!pte_present(entry)) + return 0; + if (pte_flags(entry) != pte_flags(first)) + return 0; + if (pte_pfn(entry) != pte_pfn(first) + i) + return 0; + } + + old_pmd = *pmd; + + /* Success: set up a large page */ + pgprot = pgprot_4k_2_large(pte_pgprot(first)); + pgprot_val(pgprot) |= _PAGE_PSE; + _pmd = pfn_pmd(pfn, pgprot); + set_pmd(pmd, _pmd); + + /* Queue the page table to be freed after TLB flush */ + list_add(&page_ptdesc(pmd_page(old_pmd))->pt_list, pgtables); + + if (IS_ENABLED(CONFIG_X86_32) && !SHARED_KERNEL_PMD) { + struct page *page; + + /* Update all PGD tables to use the same large page */ + list_for_each_entry(page, &pgd_list, lru) { + pgd_t *pgd = (pgd_t *)page_address(page) + pgd_index(addr); + p4d_t *p4d = p4d_offset(pgd, addr); + pud_t *pud = pud_offset(p4d, addr); + pmd_t *pmd = pmd_offset(pud, addr); + /* Something is wrong if entries doesn't match */ + if 
(WARN_ON(pmd_val(old_pmd) != pmd_val(*pmd))) + continue; + set_pmd(pmd, _pmd); + } + } + + if (virt_addr_valid(addr) && pfn_range_is_mapped(pfn, pfn + 1)) + collapse_page_count(PG_LEVEL_2M); + + return 1; +} + +static int collapse_pud_page(pud_t *pud, unsigned long addr, + struct list_head *pgtables) +{ + unsigned long pfn; + pmd_t *pmd, first; + int i; + + if (!direct_gbpages) + return 0; + + addr &= PUD_MASK; + pmd = pmd_offset(pud, addr); + first = *pmd; + + /* + * To restore PUD page all PMD entries must be large and + * have suitable alignment + */ + pfn = pmd_pfn(first); + if (!pmd_leaf(first) || (PFN_PHYS(pfn) & ~PUD_MASK)) + return 0; + + /* + * To restore PUD page, all following PMDs must be compatible with the + * first one. + */ + for (i = 1, pmd++; i < PTRS_PER_PMD; i++, pmd++) { + pmd_t entry = *pmd; + + if (!pmd_present(entry) || !pmd_leaf(entry)) + return 0; + if (pmd_flags(entry) != pmd_flags(first)) + return 0; + if (pmd_pfn(entry) != pmd_pfn(first) + i * PTRS_PER_PTE) + return 0; + } + + /* Restore PUD page and queue page table to be freed after TLB flush */ + list_add(&page_ptdesc(pud_page(*pud))->pt_list, pgtables); + set_pud(pud, pfn_pud(pfn, pmd_pgprot(first))); + + if (virt_addr_valid(addr) && pfn_range_is_mapped(pfn, pfn + 1)) + collapse_page_count(PG_LEVEL_1G); + + return 1; +} + +/* + * Collapse PMD and PUD pages in the kernel mapping around the address where + * possible. + * + * Caller must flush TLB and free page tables queued on the list before + * touching the new entries. CPU must not see TLB entries of different size + * with different attributes. 
+ */ +static int collapse_large_pages(unsigned long addr, struct list_head *pgtables) +{ + int collapsed = 0; + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + + addr &= PMD_MASK; + + spin_lock(&pgd_lock); + pgd = pgd_offset_k(addr); + if (pgd_none(*pgd)) + goto out; + p4d = p4d_offset(pgd, addr); + if (p4d_none(*p4d)) + goto out; + pud = pud_offset(p4d, addr); + if (!pud_present(*pud) || pud_leaf(*pud)) + goto out; + pmd = pmd_offset(pud, addr); + if (!pmd_present(*pmd) || pmd_leaf(*pmd)) + goto out; + + collapsed = collapse_pmd_page(pmd, addr, pgtables); + if (collapsed) + collapsed += collapse_pud_page(pud, addr, pgtables); + +out: + spin_unlock(&pgd_lock); + return collapsed; +} + static bool try_to_free_pte_page(pte_t *pte) { int i; @@ -2121,7 +2328,8 @@ int set_memory_rox(unsigned long addr, int numpages) if (__supported_pte_mask & _PAGE_NX) clr.pgprot |= _PAGE_NX; - return change_page_attr_clear(&addr, numpages, clr, 0); + return change_page_attr_set_clr(&addr, numpages, __pgprot(0), clr, 0, + CPA_COLLAPSE, NULL); } int set_memory_rw(unsigned long addr, int numpages) @@ -2148,7 +2356,8 @@ int set_memory_p(unsigned long addr, int numpages) int set_memory_4k(unsigned long addr, int numpages) { - return change_page_attr_set_clr(&addr, numpages, __pgprot(0), + return change_page_attr_set_clr(&addr, numpages, + __pgprot(_PAGE_KERNEL_4K), __pgprot(0), 1, 0, NULL); } diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index f70d0958095c..5a37cb2b6f93 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -151,6 +151,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, #ifdef CONFIG_X86 DIRECT_MAP_LEVEL2_SPLIT, DIRECT_MAP_LEVEL3_SPLIT, + DIRECT_MAP_LEVEL2_COLLAPSE, + DIRECT_MAP_LEVEL3_COLLAPSE, #endif #ifdef CONFIG_PER_VMA_LOCK_STATS VMA_LOCK_SUCCESS, diff --git a/mm/vmstat.c b/mm/vmstat.c index 16bfe1c694dd..88998725f1c5 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1435,6 +1435,8 @@ const char * 
const vmstat_text[] = {
 #ifdef CONFIG_X86
 	"direct_map_level2_splits",
 	"direct_map_level3_splits",
+	"direct_map_level2_collapses",
+	"direct_map_level3_collapses",
 #endif
 #ifdef CONFIG_PER_VMA_LOCK_STATS
 	"vma_lock_success",

From patchwork Tue Jan 21 09:57:33 2025
X-Patchwork-Id: 13946013
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 04/10] execmem: don't remove ROX cache from the direct map
Date: Tue, 21 Jan 2025 11:57:33 +0200
Message-ID: <20250121095739.986006-5-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

The memory allocated for the ROX cache was removed from the direct map to
reduce the number of direct map updates. However, this cannot be tolerated
by /proc/kcore, which accesses module memory using vread_iter(); the latter
calls vmalloc_to_page() and copy_page_to_iter_nofault().

Instead of removing the ROX cache memory from the direct map and mapping it
as ROX in vmalloc space, simply call set_memory_rox(), which takes care of
setting the proper permissions both in vmalloc space and in the direct map.
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/execmem.c | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/mm/execmem.c b/mm/execmem.c
index 317b6a8d35be..04b0bf1b5025 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -257,7 +257,6 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
 static int execmem_cache_populate(struct execmem_range *range, size_t size)
 {
 	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
-	unsigned long start, end;
 	struct vm_struct *vm;
 	size_t alloc_size;
 	int err = -ENOMEM;
@@ -275,26 +274,18 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	/* fill memory with instructions that will trap */
 	execmem_fill_trapping_insns(p, alloc_size, /* writable = */ true);
 
-	start = (unsigned long)p;
-	end = start + alloc_size;
-
-	vunmap_range(start, end);
-
-	err = execmem_set_direct_map_valid(vm, false);
-	if (err)
-		goto err_free_mem;
-
-	err = vmap_pages_range_noflush(start, end, range->pgprot, vm->pages,
-				       PMD_SHIFT);
+	err = set_memory_rox((unsigned long)p, vm->nr_pages);
 	if (err)
 		goto err_free_mem;
 
 	err = execmem_cache_add(p, alloc_size);
 	if (err)
-		goto err_free_mem;
+		goto err_reset_direct_map;
 
 	return 0;
 
+err_reset_direct_map:
+	execmem_set_direct_map_valid(vm, true);
 err_free_mem:
 	vfree(p);
 	return err;

From patchwork Tue Jan 21 09:57:34 2025
X-Patchwork-Id: 13946014
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 05/10] execmem: add API for temporal remapping as RW and
 restoring ROX afterwards
Date: Tue, 21 Jan 2025 11:57:34 +0200
Message-ID: <20250121095739.986006-6-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

Using a writable copy for ROX memory is cumbersome and error prone. Add an
API that allows temporarily remapping ranges in the ROX cache as writable
and then restoring their read-only-execute permissions.

This API will later be used in the modules code and will allow removing the
nasty games with a writable copy in alternatives patching on x86.

Restoring the ROX permissions relies on the architecture's ability to
reconstruct large pages in its set_memory_rox() method.

Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/execmem.h | 31 +++++++++++++++++++++++++++++++
 mm/execmem.c            | 22 ++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/include/linux/execmem.h b/include/linux/execmem.h
index 64130ae19690..65655a5d1be2 100644
--- a/include/linux/execmem.h
+++ b/include/linux/execmem.h
@@ -65,6 +65,37 @@ enum execmem_range_flags {
  * Architectures that use EXECMEM_ROX_CACHE must implement this.
  */
 void execmem_fill_trapping_insns(void *ptr, size_t size, bool writable);
+
+/**
+ * execmem_make_temp_rw - temporarily remap region with read-write
+ *			  permissions
+ * @ptr:  address of the region to remap
+ * @size: size of the region to remap
+ *
+ * Remaps a part of the cached large page in the ROX cache in the range
+ * [@ptr, @ptr + @size) as writable and not executable. The caller must
+ * have exclusive ownership of this range and ensure nothing will try to
+ * execute code in this range.
+ *
+ * Return: 0 on success or negative error code on failure.
+ */
+int execmem_make_temp_rw(void *ptr, size_t size);
+
+/**
+ * execmem_restore_rox - restore read-only-execute permissions
+ * @ptr:  address of the region to remap
+ * @size: size of the region to remap
+ *
+ * Restores read-only-execute permissions on a range [@ptr, @ptr + @size)
+ * after it was temporarily remapped as writable. Relies on architecture
+ * implementation of set_memory_rox() to restore mapping using large pages.
+ *
+ * Return: 0 on success or negative error code on failure.
+ */
+int execmem_restore_rox(void *ptr, size_t size);
+#else
+static inline int execmem_make_temp_rw(void *ptr, size_t size) { return 0; }
+static inline int execmem_restore_rox(void *ptr, size_t size) { return 0; }
 #endif
 
 /**
diff --git a/mm/execmem.c b/mm/execmem.c
index 04b0bf1b5025..e6c4f5076ca8 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -335,6 +335,28 @@ static bool execmem_cache_free(void *ptr)
 	return true;
 }
 
+int execmem_make_temp_rw(void *ptr, size_t size)
+{
+	unsigned int nr = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long addr = (unsigned long)ptr;
+	int ret;
+
+	ret = set_memory_nx(addr, nr);
+	if (ret)
+		return ret;
+
+	return set_memory_rw(addr, nr);
+}
+
+int execmem_restore_rox(void *ptr, size_t size)
+{
+	unsigned int nr = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long addr = (unsigned long)ptr;
+
+	return set_memory_rox(addr, nr);
+}
+
 #else /* CONFIG_ARCH_HAS_EXECMEM_ROX */
 static void *execmem_cache_alloc(struct execmem_range *range, size_t size)
 {

From patchwork Tue Jan 21 09:57:35 2025
X-Patchwork-Id: 13946015
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 06/10] module: introduce MODULE_STATE_GONE
Date: Tue, 21 Jan 2025 11:57:35 +0200
Message-ID: <20250121095739.986006-7-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

In order to use execmem's API for temporarily remapping memory allocated
from the ROX cache as writable, there is a need to distinguish between the
state when the module is being formed and the state when it is deconstructed
and freed, so that module_memory_free(), when called from error paths during
module loading, can restore the ROX mappings.

Replace the open-coded checks for MODULE_STATE_UNFORMED with a helper
function module_is_formed() and add a new MODULE_STATE_GONE that is set when
the module is deconstructed and freed.

Signed-off-by: Mike Rapoport (Microsoft)
Acked-by: Daniel Thompson (RISCstar)
---
 include/linux/module.h                        |  6 ++++++
 kernel/module/kallsyms.c                      |  8 ++++----
 kernel/module/kdb.c                           |  2 +-
 kernel/module/main.c                          | 19 +++++++++----------
 kernel/module/procfs.c                        |  2 +-
 kernel/tracepoint.c                           |  2 ++
 lib/kunit/test.c                              |  2 ++
 samples/livepatch/livepatch-callbacks-demo.c  |  1 +
 .../test_modules/test_klp_callbacks_demo.c    |  1 +
 .../test_modules/test_klp_callbacks_demo2.c   |  1 +
 .../livepatch/test_modules/test_klp_state.c   |  1 +
 .../livepatch/test_modules/test_klp_state2.c  |  1 +
 12 files changed, 30 insertions(+), 16 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index b3a643435357..624a5317d5a5 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -320,6 +320,7 @@ enum module_state {
 	MODULE_STATE_COMING,	/* Full formed, running module_init. */
 	MODULE_STATE_GOING,	/* Going away. */
 	MODULE_STATE_UNFORMED,	/* Still setting it up. */
+	MODULE_STATE_GONE,	/* Deconstructing and freeing. */
 };
 
 struct mod_tree_node {
@@ -620,6 +621,11 @@ static inline bool module_is_coming(struct module *mod)
 	return mod->state == MODULE_STATE_COMING;
 }
 
+static inline bool module_is_formed(struct module *mod)
+{
+	return mod->state < MODULE_STATE_UNFORMED;
+}
+
 struct module *__module_text_address(unsigned long addr);
 struct module *__module_address(unsigned long addr);
 bool is_module_address(unsigned long addr);
diff --git a/kernel/module/kallsyms.c b/kernel/module/kallsyms.c
index bf65e0c3c86f..daf9a9b3740f 100644
--- a/kernel/module/kallsyms.c
+++ b/kernel/module/kallsyms.c
@@ -361,7 +361,7 @@ int lookup_module_symbol_name(unsigned long addr, char *symname)
 	preempt_disable();
 	list_for_each_entry_rcu(mod, &modules, list) {
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 		if (within_module(addr, mod)) {
 			const char *sym;
@@ -389,7 +389,7 @@ int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
 	list_for_each_entry_rcu(mod, &modules, list) {
 		struct mod_kallsyms *kallsyms;
 
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 		kallsyms = rcu_dereference_sched(mod->kallsyms);
 		if (symnum < kallsyms->num_symtab) {
@@ -441,7 +441,7 @@ static unsigned long __module_kallsyms_lookup_name(const char *name)
 	list_for_each_entry_rcu(mod, &modules, list) {
 		unsigned long ret;
 
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 		ret = __find_kallsyms_symbol_value(mod, name);
 		if (ret)
@@ -484,7 +484,7 @@ int module_kallsyms_on_each_symbol(const char *modname,
 	list_for_each_entry(mod, &modules, list) {
 		struct mod_kallsyms *kallsyms;
 
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 
 		if (modname && strcmp(modname, mod->name))
diff --git a/kernel/module/kdb.c b/kernel/module/kdb.c
index 995c32d3698f..14f14700ffc2 100644
--- a/kernel/module/kdb.c
+++ b/kernel/module/kdb.c
@@ -23,7 +23,7 @@ int kdb_lsmod(int argc, const char **argv)
 	kdb_printf("Module Size modstruct Used by\n");
 	list_for_each_entry(mod, &modules, list) {
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 
 		kdb_printf("%-20s%8u", mod->name, mod->mem[MOD_TEXT].size);
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 5399c182b3cb..ad8ef20c120f 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -153,7 +153,7 @@ EXPORT_SYMBOL(unregister_module_notifier);
  */
 static inline int strong_try_module_get(struct module *mod)
 {
-	BUG_ON(mod && mod->state == MODULE_STATE_UNFORMED);
+	BUG_ON(mod && !module_is_formed(mod));
 	if (mod && mod->state == MODULE_STATE_COMING)
 		return -EBUSY;
 	if (try_module_get(mod))
@@ -361,7 +361,7 @@ bool find_symbol(struct find_symbol_arg *fsa)
 		  GPL_ONLY },
 	};
 
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 
 		for (i = 0; i < ARRAY_SIZE(arr); i++)
@@ -386,7 +386,7 @@ struct module *find_module_all(const char *name, size_t len,
 	list_for_each_entry_rcu(mod, &modules, list,
 				lockdep_is_held(&module_mutex)) {
-		if (!even_unformed && mod->state == MODULE_STATE_UNFORMED)
+		if (!even_unformed && !module_is_formed(mod))
 			continue;
 		if (strlen(mod->name) == len && !memcmp(mod->name, name, len))
 			return mod;
@@ -457,7 +457,7 @@ bool __is_module_percpu_address(unsigned long addr, unsigned long *can_addr)
 	preempt_disable();
 	list_for_each_entry_rcu(mod, &modules, list) {
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 		if (!mod->percpu_size)
 			continue;
@@ -1326,7 +1326,7 @@ static void free_module(struct module *mod)
 	 * that noone uses it while it's being deconstructed.
 	 */
 	mutex_lock(&module_mutex);
-	mod->state = MODULE_STATE_UNFORMED;
+	mod->state = MODULE_STATE_GONE;
 	mutex_unlock(&module_mutex);
 
 	/* Arch-specific cleanup. */
@@ -3048,8 +3048,7 @@ static int module_patient_check_exists(const char *name,
 	if (old == NULL)
 		return 0;
 
-	if (old->state == MODULE_STATE_COMING ||
-	    old->state == MODULE_STATE_UNFORMED) {
+	if (old->state == MODULE_STATE_COMING || !module_is_formed(old)) {
 		/* Wait in case it fails to load. */
 		mutex_unlock(&module_mutex);
 		err = wait_event_interruptible(module_wq,
@@ -3608,7 +3607,7 @@ char *module_flags(struct module *mod, char *buf, bool show_state)
 {
 	int bx = 0;
 
-	BUG_ON(mod->state == MODULE_STATE_UNFORMED);
+	BUG_ON(!module_is_formed(mod));
 	if (!mod->taints && !show_state)
 		goto out;
 	if (mod->taints ||
@@ -3702,7 +3701,7 @@ struct module *__module_address(unsigned long addr)
 	mod = mod_find(addr, &mod_tree);
 	if (mod) {
 		BUG_ON(!within_module(addr, mod));
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			mod = NULL;
 	}
 	return mod;
@@ -3756,7 +3755,7 @@ void print_modules(void)
 	/* Most callers should already have preempt disabled, but make sure */
 	preempt_disable();
 	list_for_each_entry_rcu(mod, &modules, list) {
-		if (mod->state == MODULE_STATE_UNFORMED)
+		if (!module_is_formed(mod))
 			continue;
 		pr_cont(" %s%s", mod->name, module_flags(mod, buf, true));
 	}
diff --git a/kernel/module/procfs.c b/kernel/module/procfs.c
index 0a4841e88adb..2c617e6f8bc0 100644
--- a/kernel/module/procfs.c
+++ b/kernel/module/procfs.c
@@ -79,7 +79,7 @@ static int m_show(struct seq_file *m, void *p)
 	unsigned int size;
 
 	/* We always ignore unformed modules. */
-	if (mod->state == MODULE_STATE_UNFORMED)
+	if (!module_is_formed(mod))
 		return 0;
 
 	size = module_total_size(mod);
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 1848ce7e2976..e94247afb2c6 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -668,6 +668,8 @@ static int tracepoint_module_notify(struct notifier_block *self,
 		break;
 	case MODULE_STATE_UNFORMED:
 		break;
+	case MODULE_STATE_GONE:
+		break;
 	}
 	return notifier_from_errno(ret);
 }
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 089c832e3cdb..54eaed92a2d3 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -836,6 +836,8 @@ static int kunit_module_notify(struct notifier_block *nb, unsigned long val,
 		break;
 	case MODULE_STATE_UNFORMED:
 		break;
+	case MODULE_STATE_GONE:
+		break;
 	}
 
 	return 0;
diff --git a/samples/livepatch/livepatch-callbacks-demo.c b/samples/livepatch/livepatch-callbacks-demo.c
index 11c3f4357812..324bddaef9a6 100644
--- a/samples/livepatch/livepatch-callbacks-demo.c
+++ b/samples/livepatch/livepatch-callbacks-demo.c
@@ -93,6 +93,7 @@ static const char *const module_state[] = {
 	[MODULE_STATE_COMING] = "[MODULE_STATE_COMING] Full formed, running module_init",
 	[MODULE_STATE_GOING] = "[MODULE_STATE_GOING] Going away",
 	[MODULE_STATE_UNFORMED] = "[MODULE_STATE_UNFORMED] Still setting it up",
+	[MODULE_STATE_GONE] = "[MODULE_STATE_GONE] Deconstructing and freeing",
 };
 
 static void callback_info(const char *callback, struct klp_object *obj)
diff --git a/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo.c b/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo.c
index 3fd8fe1cd1cc..8435e3254f85 100644
--- a/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo.c
+++ b/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo.c
@@ -16,6 +16,7 @@ static const char *const module_state[] = {
 	[MODULE_STATE_COMING] = "[MODULE_STATE_COMING] Full formed, running module_init",
 	[MODULE_STATE_GOING] = "[MODULE_STATE_GOING] Going away",
 	[MODULE_STATE_UNFORMED] = "[MODULE_STATE_UNFORMED] Still setting it up",
+	[MODULE_STATE_GONE] = "[MODULE_STATE_GONE] Deconstructing and freeing",
 };
 
 static void callback_info(const char *callback, struct klp_object *obj)
diff --git a/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo2.c b/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo2.c
index 5417573e80af..78c1fff5d977 100644
--- a/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo2.c
+++ b/tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo2.c
@@ -16,6 +16,7 @@ static const char *const module_state[] = {
 	[MODULE_STATE_COMING] = "[MODULE_STATE_COMING] Full formed, running module_init",
 	[MODULE_STATE_GOING] = "[MODULE_STATE_GOING] Going away",
 	[MODULE_STATE_UNFORMED] = "[MODULE_STATE_UNFORMED] Still setting it up",
+	[MODULE_STATE_GONE] = "[MODULE_STATE_GONE] Deconstructing and freeing",
 };
 
 static void callback_info(const char *callback, struct klp_object *obj)
diff --git a/tools/testing/selftests/livepatch/test_modules/test_klp_state.c b/tools/testing/selftests/livepatch/test_modules/test_klp_state.c
index 57a4253acb01..bdebf1d24c98 100644
--- a/tools/testing/selftests/livepatch/test_modules/test_klp_state.c
+++ b/tools/testing/selftests/livepatch/test_modules/test_klp_state.c
@@ -18,6 +18,7 @@ static const char *const module_state[] = {
 	[MODULE_STATE_COMING] = "[MODULE_STATE_COMING] Full formed, running module_init",
 	[MODULE_STATE_GOING] = "[MODULE_STATE_GOING] Going away",
 	[MODULE_STATE_UNFORMED] = "[MODULE_STATE_UNFORMED] Still setting it up",
+	[MODULE_STATE_GONE] = "[MODULE_STATE_GONE] Deconstructing and freeing",
 };
 
 static void callback_info(const char *callback, struct klp_object *obj)
diff --git a/tools/testing/selftests/livepatch/test_modules/test_klp_state2.c b/tools/testing/selftests/livepatch/test_modules/test_klp_state2.c
index c978ea4d5e67..1a55f84a8eb3 100644
--- a/tools/testing/selftests/livepatch/test_modules/test_klp_state2.c
+++ b/tools/testing/selftests/livepatch/test_modules/test_klp_state2.c
@@ -18,6 +18,7 @@ static const char *const module_state[] = {
 	[MODULE_STATE_COMING] = "[MODULE_STATE_COMING] Full formed, running module_init",
 	[MODULE_STATE_GOING] = "[MODULE_STATE_GOING] Going away",
 	[MODULE_STATE_UNFORMED] = "[MODULE_STATE_UNFORMED] Still setting it up",
+	[MODULE_STATE_GONE] = "[MODULE_STATE_GONE] Deconstructing and freeing",
 };
 
 static void callback_info(const char *callback, struct klp_object *obj)

From patchwork Tue Jan 21 09:57:36 2025
X-Patchwork-Id: 13946016
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 07/10] module: switch to execmem API for remapping as RW and restoring ROX
Date: Tue, 21 Jan 2025 11:57:36 +0200
Message-ID: <20250121095739.986006-8-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

Instead of using a writable copy for module text sections, temporarily
remap the memory allocated from execmem's ROX cache as writable and
restore its ROX permissions after the module is formed.

This will allow removing the nasty games with the writable copy in
alternatives patching on x86.
Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/module.h       |  8 +----
 include/linux/moduleloader.h |  4 ---
 kernel/module/main.c         | 67 ++++++------------------------------
 kernel/module/strict_rwx.c   |  9 ++---
 4 files changed, 17 insertions(+), 71 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index 624a5317d5a5..e9fc9d1fa476 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -368,7 +368,6 @@ enum mod_mem_type {

 struct module_memory {
	void *base;
-	void *rw_copy;
	bool is_rox;
	unsigned int size;

@@ -775,14 +774,9 @@ static inline bool is_livepatch_module(struct module *mod)

 void set_module_sig_enforced(void);

-void *__module_writable_address(struct module *mod, void *loc);
-
 static inline void *module_writable_address(struct module *mod, void *loc)
 {
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) || !mod ||
-	    mod->state != MODULE_STATE_UNFORMED)
-		return loc;
-	return __module_writable_address(mod, loc);
+	return loc;
 }

 #else /* !CONFIG_MODULES... */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 1f5507ba5a12..e395461d59e5 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -108,10 +108,6 @@ int module_finalize(const Elf_Ehdr *hdr,
		    const Elf_Shdr *sechdrs,
		    struct module *mod);

-int module_post_finalize(const Elf_Ehdr *hdr,
-			 const Elf_Shdr *sechdrs,
-			 struct module *mod);
-
 #ifdef CONFIG_MODULES
 void flush_module_init_free_work(void);
 #else
diff --git a/kernel/module/main.c b/kernel/module/main.c
index ad8ef20c120f..ee6b46e753a0 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -1221,18 +1221,6 @@ void __weak module_arch_freeing_init(struct module *mod)
 {
 }

-void *__module_writable_address(struct module *mod, void *loc)
-{
-	for_class_mod_mem_type(type, text) {
-		struct module_memory *mem = &mod->mem[type];
-
-		if (loc >= mem->base && loc < mem->base + mem->size)
-			return loc + (mem->rw_copy - mem->base);
-	}
-
-	return loc;
-}
-
 static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 {
	unsigned int size = PAGE_ALIGN(mod->mem[type].size);
@@ -1250,21 +1238,15 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
	if (!ptr)
		return -ENOMEM;

-	mod->mem[type].base = ptr;
-
	if (execmem_is_rox(execmem_type)) {
-		ptr = vzalloc(size);
+		int err = execmem_make_temp_rw(ptr, size);

-		if (!ptr) {
-			execmem_free(mod->mem[type].base);
+		if (err) {
+			execmem_free(ptr);
			return -ENOMEM;
		}

-		mod->mem[type].rw_copy = ptr;
		mod->mem[type].is_rox = true;
-	} else {
-		mod->mem[type].rw_copy = mod->mem[type].base;
-		memset(mod->mem[type].base, 0, size);
	}

	/*
@@ -1280,6 +1262,9 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
	 */
	kmemleak_not_leak(ptr);

+	memset(ptr, 0, size);
+	mod->mem[type].base = ptr;
+
	return 0;
 }

@@ -1287,8 +1272,8 @@ static void module_memory_free(struct module *mod, enum mod_mem_type type)
 {
	struct module_memory *mem = &mod->mem[type];

-	if (mem->is_rox)
-		vfree(mem->rw_copy);
+	if (mod->state == MODULE_STATE_UNFORMED && mem->is_rox)
+		execmem_restore_rox(mem->base, mem->size);

	execmem_free(mem->base);
 }
@@ -2561,7 +2546,6 @@ static int move_module(struct module *mod, struct load_info *info)
	for_each_mod_mem_type(type) {
		if (!mod->mem[type].size) {
			mod->mem[type].base = NULL;
-			mod->mem[type].rw_copy = NULL;
			continue;
		}

@@ -2578,7 +2562,6 @@ static int move_module(struct module *mod, struct load_info *info)
		void *dest;
		Elf_Shdr *shdr = &info->sechdrs[i];
		const char *sname;
-		unsigned long addr;

		if (!(shdr->sh_flags & SHF_ALLOC))
			continue;
@@ -2599,14 +2582,12 @@ static int move_module(struct module *mod, struct load_info *info)
				ret = PTR_ERR(dest);
				goto out_err;
			}
-			addr = (unsigned long)dest;
			codetag_section_found = true;
		} else {
			enum mod_mem_type type = shdr->sh_entsize >> SH_ENTSIZE_TYPE_SHIFT;
			unsigned long offset = shdr->sh_entsize & SH_ENTSIZE_OFFSET_MASK;

-			addr = (unsigned long)mod->mem[type].base + offset;
-			dest = mod->mem[type].rw_copy + offset;
+			dest = mod->mem[type].base + offset;
		}

		if (shdr->sh_type != SHT_NOBITS) {
@@ -2629,7 +2610,7 @@ static int move_module(struct module *mod, struct load_info *info)
		 * users of info can keep taking advantage and using the newly
		 * minted official memory area.
		 */
-		shdr->sh_addr = addr;
+		shdr->sh_addr = (unsigned long)dest;
		pr_debug("\t0x%lx 0x%.8lx %s\n", (long)shdr->sh_addr,
			 (long)shdr->sh_size, info->secstrings + shdr->sh_name);
	}
@@ -2782,17 +2763,8 @@ int __weak module_finalize(const Elf_Ehdr *hdr,
	return 0;
 }

-int __weak module_post_finalize(const Elf_Ehdr *hdr,
-				const Elf_Shdr *sechdrs,
-				struct module *me)
-{
-	return 0;
-}
-
 static int post_relocation(struct module *mod, const struct load_info *info)
 {
-	int ret;
-
	/* Sort exception table now relocations are done. */
	sort_extable(mod->extable, mod->extable + mod->num_exentries);

@@ -2804,24 +2776,7 @@ static int post_relocation(struct module *mod, const struct load_info *info)
	add_kallsyms(mod, info);

	/* Arch-specific module finalizing. */
-	ret = module_finalize(info->hdr, info->sechdrs, mod);
-	if (ret)
-		return ret;
-
-	for_each_mod_mem_type(type) {
-		struct module_memory *mem = &mod->mem[type];
-
-		if (mem->is_rox) {
-			if (!execmem_update_copy(mem->base, mem->rw_copy,
-						 mem->size))
-				return -ENOMEM;
-
-			vfree(mem->rw_copy);
-			mem->rw_copy = NULL;
-		}
-	}
-
-	return module_post_finalize(info->hdr, info->sechdrs, mod);
+	return module_finalize(info->hdr, info->sechdrs, mod);
 }

 /* Call module constructors. */
diff --git a/kernel/module/strict_rwx.c b/kernel/module/strict_rwx.c
index 239e5013359d..ce47b6346f27 100644
--- a/kernel/module/strict_rwx.c
+++ b/kernel/module/strict_rwx.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"

 static int module_set_memory(const struct module *mod, enum mod_mem_type type,
@@ -32,12 +33,12 @@ static int module_set_memory(const struct module *mod, enum mod_mem_type type,
 int module_enable_text_rox(const struct module *mod)
 {
	for_class_mod_mem_type(type, text) {
+		const struct module_memory *mem = &mod->mem[type];
		int ret;

-		if (mod->mem[type].is_rox)
-			continue;
-
-		if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
+		if (mem->is_rox)
+			ret = execmem_restore_rox(mem->base, mem->size);
+		else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX))
			ret = module_set_memory(mod, type, set_memory_rox);
		else
			ret = module_set_memory(mod, type, set_memory_x);

From patchwork Tue Jan 21 09:57:37 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13946042
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 08/10] Revert "x86/module: prepare module loading for ROX allocations of text"
Date: Tue, 21 Jan 2025 11:57:37 +0200
Message-ID: <20250121095739.986006-9-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

The module code no longer creates a writable copy of the executable
memory, so there is no need to handle it in module relocation and
alternatives patching.

This reverts commit 9bfc4824fd4836c16bb44f922bfaffba5da3e4f3.

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/um/kernel/um_arch.c           |  11 +-
 arch/x86/entry/vdso/vma.c          |   3 +-
 arch/x86/include/asm/alternative.h |  14 +--
 arch/x86/kernel/alternative.c      | 181 ++++++++++++-----------------
 arch/x86/kernel/ftrace.c           |  30 +++--
 arch/x86/kernel/module.c           |  45 +++----
 6 files changed, 117 insertions(+), 167 deletions(-)

diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 8037a967225d..d2cc2c69a8c4 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -440,25 +440,24 @@ void __init arch_cpu_finalize_init(void)
	os_check_bugs();
 }

-void apply_seal_endbr(s32 *start, s32 *end, struct module *mod)
+void apply_seal_endbr(s32 *start, s32 *end)
 {
 }

-void apply_retpolines(s32 *start, s32 *end, struct module *mod)
+void apply_retpolines(s32 *start, s32 *end)
 {
 }

-void apply_returns(s32 *start, s32 *end, struct module *mod)
+void apply_returns(s32 *start, s32 *end)
 {
 }

 void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-		   s32 *start_cfi, s32 *end_cfi, struct module *mod)
+		   s32 *start_cfi, s32 *end_cfi)
 {
 }

-void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
-			struct module *mod)
+void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
 {
 }

diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 39e6efc1a9ca..bfc7cabf4017 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -48,8 +48,7 @@ int __init init_vdso_image(const struct vdso_image *image)

	apply_alternatives((struct alt_instr *)(image->data + image->alt),
			   (struct alt_instr *)(image->data + image->alt +
-						image->alt_len),
-			   NULL);
+						image->alt_len));

	return 0;
 }
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index dc03a647776d..ca9ae606aab9 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -96,16 +96,16 @@ extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
  * instructions were patched in already:
  */
 extern int alternatives_patched;
-struct module;

 extern void alternative_instructions(void);
-extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
-			       struct module *mod);
-extern void apply_retpolines(s32 *start, s32 *end, struct module *mod);
-extern void apply_returns(s32 *start, s32 *end, struct module *mod);
-extern void apply_seal_endbr(s32 *start, s32 *end, struct module *mod);
+extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
+extern void apply_retpolines(s32 *start, s32 *end);
+extern void apply_returns(s32 *start, s32 *end);
+extern void apply_seal_endbr(s32 *start, s32 *end);
 extern void apply_fineibt(s32 *start_retpoline, s32 *end_retpoine,
-			  s32 *start_cfi, s32 *end_cfi, struct module *mod);
+			  s32 *start_cfi, s32 *end_cfi);
+
+struct module;

 struct callthunk_sites {
	s32 *call_start, *call_end;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 243843e44e89..d17518ca19b8 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -392,10 +392,8 @@ EXPORT_SYMBOL(BUG_func);
 /*
  * Rewrite the "call BUG_func" replacement to point to the target of the
  * indirect pv_ops call "call *disp(%ip)".
*/ -static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a, - struct module *mod) +static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a) { - u8 *wr_instr = module_writable_address(mod, instr); void *target, *bug = &BUG_func; s32 disp; @@ -405,14 +403,14 @@ static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a, } if (a->instrlen != 6 || - wr_instr[0] != CALL_RIP_REL_OPCODE || - wr_instr[1] != CALL_RIP_REL_MODRM) { + instr[0] != CALL_RIP_REL_OPCODE || + instr[1] != CALL_RIP_REL_MODRM) { pr_err("ALT_FLAG_DIRECT_CALL set for unrecognized indirect call\n"); BUG(); } /* Skip CALL_RIP_REL_OPCODE and CALL_RIP_REL_MODRM */ - disp = *(s32 *)(wr_instr + 2); + disp = *(s32 *)(instr + 2); #ifdef CONFIG_X86_64 /* ff 15 00 00 00 00 call *0x0(%rip) */ /* target address is stored at "next instruction + disp". */ @@ -450,8 +448,7 @@ static inline u8 * instr_va(struct alt_instr *i) * to refetch changed I$ lines. */ void __init_or_module noinline apply_alternatives(struct alt_instr *start, - struct alt_instr *end, - struct module *mod) + struct alt_instr *end) { u8 insn_buff[MAX_PATCH_LEN]; u8 *instr, *replacement; @@ -480,7 +477,6 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, */ for (a = start; a < end; a++) { int insn_buff_sz = 0; - u8 *wr_instr, *wr_replacement; /* * In case of nested ALTERNATIVE()s the outer alternative might @@ -494,11 +490,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, } instr = instr_va(a); - wr_instr = module_writable_address(mod, instr); - replacement = (u8 *)&a->repl_offset + a->repl_offset; - wr_replacement = module_writable_address(mod, replacement); - BUG_ON(a->instrlen > sizeof(insn_buff)); BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32); @@ -509,9 +501,9 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * patch if feature is *NOT* present. 
*/ if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) { - memcpy(insn_buff, wr_instr, a->instrlen); + memcpy(insn_buff, instr, a->instrlen); optimize_nops(instr, insn_buff, a->instrlen); - text_poke_early(wr_instr, insn_buff, a->instrlen); + text_poke_early(instr, insn_buff, a->instrlen); continue; } @@ -521,12 +513,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, instr, instr, a->instrlen, replacement, a->replacementlen, a->flags); - memcpy(insn_buff, wr_replacement, a->replacementlen); + memcpy(insn_buff, replacement, a->replacementlen); insn_buff_sz = a->replacementlen; if (a->flags & ALT_FLAG_DIRECT_CALL) { - insn_buff_sz = alt_replace_call(instr, insn_buff, a, - mod); + insn_buff_sz = alt_replace_call(instr, insn_buff, a); if (insn_buff_sz < 0) continue; } @@ -536,11 +527,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, apply_relocation(insn_buff, instr, a->instrlen, replacement, a->replacementlen); - DUMP_BYTES(ALT, wr_instr, a->instrlen, "%px: old_insn: ", instr); + DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); DUMP_BYTES(ALT, insn_buff, insn_buff_sz, "%px: final_insn: ", instr); - text_poke_early(wr_instr, insn_buff, insn_buff_sz); + text_poke_early(instr, insn_buff, insn_buff_sz); } kasan_enable_current(); @@ -731,20 +722,18 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) /* * Generated by 'objtool --retpoline'. 
 */
-void __init_or_module noinline apply_retpolines(s32 *start, s32 *end,
-						struct module *mod)
+void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		struct insn insn;
 		int len, ret;
 		u8 bytes[16];
 		u8 op1, op2;
 
-		ret = insn_decode_kernel(&insn, wr_addr);
+		ret = insn_decode_kernel(&insn, addr);
 		if (WARN_ON_ONCE(ret < 0))
 			continue;
 
@@ -772,9 +761,9 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end,
 		len = patch_retpoline(addr, &insn, bytes);
 		if (len == insn.length) {
 			optimize_nops(addr, bytes, len);
-			DUMP_BYTES(RETPOLINE, ((u8*)wr_addr), len, "%px: orig: ", addr);
+			DUMP_BYTES(RETPOLINE, ((u8*)addr), len, "%px: orig: ", addr);
 			DUMP_BYTES(RETPOLINE, ((u8*)bytes), len, "%px: repl: ", addr);
-			text_poke_early(wr_addr, bytes, len);
+			text_poke_early(addr, bytes, len);
 		}
 	}
 }
@@ -810,8 +799,7 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)
 	return i;
 }
 
-void __init_or_module noinline apply_returns(s32 *start, s32 *end,
-					     struct module *mod)
+void __init_or_module noinline apply_returns(s32 *start, s32 *end)
 {
 	s32 *s;
 
@@ -820,13 +808,12 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end,
 	for (s = start; s < end; s++) {
 		void *dest = NULL, *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		struct insn insn;
 		int len, ret;
 		u8 bytes[16];
 		u8 op;
 
-		ret = insn_decode_kernel(&insn, wr_addr);
+		ret = insn_decode_kernel(&insn, addr);
 		if (WARN_ON_ONCE(ret < 0))
 			continue;
 
@@ -846,35 +833,32 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end,
 		len = patch_return(addr, &insn, bytes);
 		if (len == insn.length) {
-			DUMP_BYTES(RET, ((u8*)wr_addr), len, "%px: orig: ", addr);
+			DUMP_BYTES(RET, ((u8*)addr), len, "%px: orig: ", addr);
 			DUMP_BYTES(RET, ((u8*)bytes), len, "%px: repl: ", addr);
-			text_poke_early(wr_addr, bytes, len);
+			text_poke_early(addr, bytes, len);
 		}
 	}
 }
 #else
-void __init_or_module noinline apply_returns(s32 *start, s32 *end,
-					     struct module *mod) { }
+void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
 #endif /* CONFIG_MITIGATION_RETHUNK */
 
 #else /* !CONFIG_MITIGATION_RETPOLINE || !CONFIG_OBJTOOL */
 
-void __init_or_module noinline apply_retpolines(s32 *start, s32 *end,
-						struct module *mod) { }
-void __init_or_module noinline apply_returns(s32 *start, s32 *end,
-					     struct module *mod) { }
+void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { }
+void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
 
 #endif /* CONFIG_MITIGATION_RETPOLINE && CONFIG_OBJTOOL */
 
 #ifdef CONFIG_X86_KERNEL_IBT
 
-static void poison_cfi(void *addr, void *wr_addr);
+static void poison_cfi(void *addr);
 
-static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn)
+static void __init_or_module poison_endbr(void *addr, bool warn)
 {
 	u32 endbr, poison = gen_endbr_poison();
 
-	if (WARN_ON_ONCE(get_kernel_nofault(endbr, wr_addr)))
+	if (WARN_ON_ONCE(get_kernel_nofault(endbr, addr)))
 		return;
 
 	if (!is_endbr(endbr)) {
@@ -889,7 +873,7 @@ static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn)
 	 */
 	DUMP_BYTES(ENDBR, ((u8*)addr), 4, "%px: orig: ", addr);
 	DUMP_BYTES(ENDBR, ((u8*)&poison), 4, "%px: repl: ", addr);
-	text_poke_early(wr_addr, &poison, 4);
+	text_poke_early(addr, &poison, 4);
 }
 
 /*
@@ -898,23 +882,22 @@ static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn)
 * Seal the functions for indirect calls by clobbering the ENDBR instructions
 * and the kCFI hash value.
 */
-void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end, struct module *mod)
+void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 
-		poison_endbr(addr, wr_addr, true);
+		poison_endbr(addr, true);
 		if (IS_ENABLED(CONFIG_FINEIBT))
-			poison_cfi(addr - 16, wr_addr - 16);
+			poison_cfi(addr - 16);
 	}
 }
 
 #else
 
-void __init_or_module apply_seal_endbr(s32 *start, s32 *end, struct module *mod) { }
+void __init_or_module apply_seal_endbr(s32 *start, s32 *end) { }
 
 #endif /* CONFIG_X86_KERNEL_IBT */
 
@@ -1136,7 +1119,7 @@ static u32 decode_caller_hash(void *addr)
 }
 
 /* .retpoline_sites */
-static int cfi_disable_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_disable_callers(s32 *start, s32 *end)
 {
 	/*
 	 * Disable kCFI by patching in a JMP.d8, this leaves the hash immediate
@@ -1148,23 +1131,20 @@ static int cfi_disable_callers(s32 *start, s32 *end, struct module *mod)
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
-
+		hash = decode_caller_hash(addr);
 		if (!hash) /* nocfi callers */
 			continue;
 
-		text_poke_early(wr_addr, jmp, 2);
+		text_poke_early(addr, jmp, 2);
 	}
 
 	return 0;
 }
 
-static int cfi_enable_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_enable_callers(s32 *start, s32 *end)
 {
 	/*
 	 * Re-enable kCFI, undo what cfi_disable_callers() did.
@@ -1174,115 +1154,106 @@ static int cfi_enable_callers(s32 *start, s32 *end, struct module *mod)
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
+		hash = decode_caller_hash(addr);
 		if (!hash) /* nocfi callers */
 			continue;
 
-		text_poke_early(wr_addr, mov, 2);
+		text_poke_early(addr, mov, 2);
 	}
 
 	return 0;
 }
 
 /* .cfi_sites */
-static int cfi_rand_preamble(s32 *start, s32 *end, struct module *mod)
+static int cfi_rand_preamble(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		u32 hash;
 
-		hash = decode_preamble_hash(wr_addr);
+		hash = decode_preamble_hash(addr);
 		if (WARN(!hash, "no CFI hash found at: %pS %px %*ph\n",
 			 addr, addr, 5, addr))
 			return -EINVAL;
 
 		hash = cfi_rehash(hash);
-		text_poke_early(wr_addr + 1, &hash, 4);
+		text_poke_early(addr + 1, &hash, 4);
 	}
 
 	return 0;
 }
 
-static int cfi_rewrite_preamble(s32 *start, s32 *end, struct module *mod)
+static int cfi_rewrite_preamble(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		u32 hash;
 
-		hash = decode_preamble_hash(wr_addr);
+		hash = decode_preamble_hash(addr);
 		if (WARN(!hash, "no CFI hash found at: %pS %px %*ph\n",
 			 addr, addr, 5, addr))
 			return -EINVAL;
 
-		text_poke_early(wr_addr, fineibt_preamble_start, fineibt_preamble_size);
-		WARN_ON(*(u32 *)(wr_addr + fineibt_preamble_hash) != 0x12345678);
-		text_poke_early(wr_addr + fineibt_preamble_hash, &hash, 4);
+		text_poke_early(addr, fineibt_preamble_start, fineibt_preamble_size);
+		WARN_ON(*(u32 *)(addr + fineibt_preamble_hash) != 0x12345678);
+		text_poke_early(addr + fineibt_preamble_hash, &hash, 4);
 	}
 
 	return 0;
 }
 
-static void cfi_rewrite_endbr(s32 *start, s32 *end, struct module *mod)
+static void cfi_rewrite_endbr(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 
-		poison_endbr(addr + 16, wr_addr + 16, false);
+		poison_endbr(addr+16, false);
 	}
 }
 
 /* .retpoline_sites */
-static int cfi_rand_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_rand_callers(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
+		hash = decode_caller_hash(addr);
 		if (hash) {
 			hash = -cfi_rehash(hash);
-			text_poke_early(wr_addr + 2, &hash, 4);
+			text_poke_early(addr + 2, &hash, 4);
 		}
 	}
 
 	return 0;
 }
 
-static int cfi_rewrite_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_rewrite_callers(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
+		hash = decode_caller_hash(addr);
 		if (hash) {
-			text_poke_early(wr_addr, fineibt_caller_start, fineibt_caller_size);
-			WARN_ON(*(u32 *)(wr_addr + fineibt_caller_hash) != 0x12345678);
-			text_poke_early(wr_addr + fineibt_caller_hash, &hash, 4);
+			text_poke_early(addr, fineibt_caller_start, fineibt_caller_size);
+			WARN_ON(*(u32 *)(addr + fineibt_caller_hash) != 0x12345678);
+			text_poke_early(addr + fineibt_caller_hash, &hash, 4);
 		}
 		/* rely on apply_retpolines() */
 	}
@@ -1291,9 +1262,8 @@ static int cfi_rewrite_callers(s32 *start, s32 *end, struct module *mod)
 }
 
 static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-			    s32 *start_cfi, s32 *end_cfi, struct module *mod)
+			    s32 *start_cfi, s32 *end_cfi, bool builtin)
 {
-	bool builtin = mod ? false : true;
 	int ret;
 
 	if (WARN_ONCE(fineibt_preamble_size != 16,
@@ -1311,7 +1281,7 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 	 * rewrite them. This disables all CFI. If this succeeds but any of the
 	 * later stages fails, we're without CFI.
 	 */
-	ret = cfi_disable_callers(start_retpoline, end_retpoline, mod);
+	ret = cfi_disable_callers(start_retpoline, end_retpoline);
 	if (ret)
 		goto err;
 
@@ -1322,11 +1292,11 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 			cfi_bpf_subprog_hash = cfi_rehash(cfi_bpf_subprog_hash);
 		}
 
-		ret = cfi_rand_preamble(start_cfi, end_cfi, mod);
+		ret = cfi_rand_preamble(start_cfi, end_cfi);
 		if (ret)
 			goto err;
 
-		ret = cfi_rand_callers(start_retpoline, end_retpoline, mod);
+		ret = cfi_rand_callers(start_retpoline, end_retpoline);
 		if (ret)
 			goto err;
 	}
@@ -1338,7 +1308,7 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 		return;
 
 	case CFI_KCFI:
-		ret = cfi_enable_callers(start_retpoline, end_retpoline, mod);
+		ret = cfi_enable_callers(start_retpoline, end_retpoline);
 		if (ret)
 			goto err;
 
@@ -1348,17 +1318,17 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 	case CFI_FINEIBT:
 		/* place the FineIBT preamble at func()-16 */
-		ret = cfi_rewrite_preamble(start_cfi, end_cfi, mod);
+		ret = cfi_rewrite_preamble(start_cfi, end_cfi);
 		if (ret)
 			goto err;
 
 		/* rewrite the callers to target func()-16 */
-		ret = cfi_rewrite_callers(start_retpoline, end_retpoline, mod);
+		ret = cfi_rewrite_callers(start_retpoline, end_retpoline);
 		if (ret)
 			goto err;
 
 		/* now that nobody targets func()+0, remove ENDBR there */
-		cfi_rewrite_endbr(start_cfi, end_cfi, mod);
+		cfi_rewrite_endbr(start_cfi, end_cfi);
 
 		if (builtin)
 			pr_info("Using FineIBT CFI\n");
@@ -1377,7 +1347,7 @@ static inline void poison_hash(void *addr)
 	*(u32 *)addr = 0;
 }
 
-static void poison_cfi(void *addr, void *wr_addr)
+static void poison_cfi(void *addr)
 {
 	switch (cfi_mode) {
 	case CFI_FINEIBT:
@@ -1389,8 +1359,8 @@ static void poison_cfi(void *addr, void *wr_addr)
 		 * ud2
 		 * 1: nop
 		 */
-		poison_endbr(addr, wr_addr, false);
-		poison_hash(wr_addr + fineibt_preamble_hash);
+		poison_endbr(addr, false);
+		poison_hash(addr + fineibt_preamble_hash);
 		break;
 
 	case CFI_KCFI:
@@ -1399,7 +1369,7 @@ static void poison_cfi(void *addr, void *wr_addr)
 		 * movl $0, %eax
 		 * .skip 11, 0x90
 		 */
-		poison_hash(wr_addr + 1);
+		poison_hash(addr + 1);
 		break;
 
 	default:
@@ -1410,21 +1380,22 @@ static void poison_cfi(void *addr, void *wr_addr)
 
 #else
 
 static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-			    s32 *start_cfi, s32 *end_cfi, struct module *mod)
+			    s32 *start_cfi, s32 *end_cfi, bool builtin)
 {
 }
 
 #ifdef CONFIG_X86_KERNEL_IBT
-static void poison_cfi(void *addr, void *wr_addr) { }
+static void poison_cfi(void *addr) { }
 #endif
 
 #endif
 
 void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-		   s32 *start_cfi, s32 *end_cfi, struct module *mod)
+		   s32 *start_cfi, s32 *end_cfi)
 {
 	return __apply_fineibt(start_retpoline, end_retpoline,
-			       start_cfi, end_cfi, mod);
+			       start_cfi, end_cfi,
+			       /* .builtin = */ false);
 }
 
 #ifdef CONFIG_SMP
@@ -1721,16 +1692,16 @@ void __init alternative_instructions(void)
 	paravirt_set_cap();
 
 	__apply_fineibt(__retpoline_sites, __retpoline_sites_end,
-			__cfi_sites, __cfi_sites_end, NULL);
+			__cfi_sites, __cfi_sites_end, true);
 
 	/*
 	 * Rewrite the retpolines, must be done before alternatives since
 	 * those can rewrite the retpoline thunks.
 	 */
-	apply_retpolines(__retpoline_sites, __retpoline_sites_end, NULL);
-	apply_returns(__return_sites, __return_sites_end, NULL);
+	apply_retpolines(__retpoline_sites, __retpoline_sites_end);
+	apply_returns(__return_sites, __return_sites_end);
 
-	apply_alternatives(__alt_instructions, __alt_instructions_end, NULL);
+	apply_alternatives(__alt_instructions, __alt_instructions_end);
 
 	/*
 	 * Now all calls are established. Apply the call thunks if
@@ -1741,7 +1712,7 @@ void __init alternative_instructions(void)
 	/*
 	 * Seal all functions that do not have their address taken.
 	 */
-	apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end, NULL);
+	apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
 
 #ifdef CONFIG_SMP
 	/* Patch to UP if other cpus not imminent. */
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 4dd0ad6c94d6..adb09f78edb2 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -118,13 +118,10 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code,
 		return ret;
 
 	/* replace the text with the new text */
-	if (ftrace_poke_late) {
+	if (ftrace_poke_late)
 		text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL);
-	} else {
-		mutex_lock(&text_mutex);
-		text_poke((void *)ip, new_code, MCOUNT_INSN_SIZE);
-		mutex_unlock(&text_mutex);
-	}
+	else
+		text_poke_early((void *)ip, new_code, MCOUNT_INSN_SIZE);
 
 	return 0;
 }
@@ -321,7 +318,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	unsigned const char op_ref[] = { 0x48, 0x8b, 0x15 };
 	unsigned const char retq[] = { RET_INSN_OPCODE, INT3_INSN_OPCODE };
 	union ftrace_op_code_union op_ptr;
-	void *ret;
+	int ret;
 
 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
 		start_offset = (unsigned long)ftrace_regs_caller;
@@ -352,15 +349,15 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
 
 	/* Copy ftrace_caller onto the trampoline memory */
-	ret = text_poke_copy(trampoline, (void *)start_offset, size);
-	if (WARN_ON(!ret))
+	ret = copy_from_kernel_nofault(trampoline, (void *)start_offset, size);
+	if (WARN_ON(ret < 0))
 		goto fail;
 
 	ip = trampoline + size;
 	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
 		__text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE);
 	else
-		text_poke_copy(ip, retq, sizeof(retq));
+		memcpy(ip, retq, sizeof(retq));
 
 	/* No need to test direct calls on created trampolines */
 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
@@ -368,7 +365,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 		ip = trampoline + (jmp_offset - start_offset);
 		if (WARN_ON(*(char *)ip != 0x75))
 			goto fail;
-		if (!text_poke_copy(ip, x86_nops[2], 2))
+		ret = copy_from_kernel_nofault(ip, x86_nops[2], 2);
+		if (ret < 0)
 			goto fail;
 	}
 
@@ -381,7 +379,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	 */
 	ptr = (unsigned long *)(trampoline + size + RET_SIZE);
-	text_poke_copy(ptr, &ops, sizeof(unsigned long));
+	*ptr = (unsigned long)ops;
 
 	op_offset -= start_offset;
 	memcpy(&op_ptr, trampoline + op_offset, OP_REF_SIZE);
@@ -397,7 +395,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	op_ptr.offset = offset;
 
 	/* put in the new offset to the ftrace_ops */
-	text_poke_copy(trampoline + op_offset, &op_ptr, OP_REF_SIZE);
+	memcpy(trampoline + op_offset, &op_ptr, OP_REF_SIZE);
 
 	/* put in the call to the function */
 	mutex_lock(&text_mutex);
@@ -407,9 +405,9 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	 * the depth accounting before the call already.
 	 */
 	dest = ftrace_ops_get_func(ops);
-	text_poke_copy_locked(trampoline + call_offset,
-			      text_gen_insn(CALL_INSN_OPCODE, trampoline + call_offset, dest),
-			      CALL_INSN_SIZE, false);
+	memcpy(trampoline + call_offset,
+	       text_gen_insn(CALL_INSN_OPCODE, trampoline + call_offset, dest),
+	       CALL_INSN_SIZE);
 	mutex_unlock(&text_mutex);
 
 	/* ALLOC_TRAMP flags lets us know we created it */
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 8984abd91c00..837450b6e882 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -146,21 +146,18 @@ static int __write_relocate_add(Elf64_Shdr *sechdrs,
 		}
 
 		if (apply) {
-			void *wr_loc = module_writable_address(me, loc);
-
-			if (memcmp(wr_loc, &zero, size)) {
+			if (memcmp(loc, &zero, size)) {
 				pr_err("x86/modules: Invalid relocation target, existing value is nonzero for type %d, loc %p, val %Lx\n",
 				       (int)ELF64_R_TYPE(rel[i].r_info), loc, val);
 				return -ENOEXEC;
 			}
-			write(wr_loc, &val, size);
+			write(loc, &val, size);
 		} else {
 			if (memcmp(loc, &val, size)) {
 				pr_warn("x86/modules: Invalid relocation target, existing value does not match expected value for type %d, loc %p, val %Lx\n",
					(int)ELF64_R_TYPE(rel[i].r_info), loc, val);
 				return -ENOEXEC;
 			}
-			/* FIXME: needs care for ROX module allocations */
 			write(loc, &zero, size);
 		}
 	}
@@ -227,7 +224,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 		    const Elf_Shdr *sechdrs,
 		    struct module *me)
 {
-	const Elf_Shdr *s, *alt = NULL,
+	const Elf_Shdr *s, *alt = NULL, *locks = NULL,
 		*orc = NULL, *orc_ip = NULL,
 		*retpolines = NULL, *returns = NULL, *ibt_endbr = NULL,
 		*calls = NULL, *cfi = NULL;
@@ -236,6 +233,8 @@ int module_finalize(const Elf_Ehdr *hdr,
 	for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
 		if (!strcmp(".altinstructions", secstrings + s->sh_name))
 			alt = s;
+		if (!strcmp(".smp_locks", secstrings + s->sh_name))
+			locks = s;
 		if (!strcmp(".orc_unwind", secstrings + s->sh_name))
 			orc = s;
 		if (!strcmp(".orc_unwind_ip", secstrings + s->sh_name))
@@ -266,20 +265,20 @@ int module_finalize(const Elf_Ehdr *hdr,
 			csize = cfi->sh_size;
 		}
 
-		apply_fineibt(rseg, rseg + rsize, cseg, cseg + csize, me);
+		apply_fineibt(rseg, rseg + rsize, cseg, cseg + csize);
 	}
 	if (retpolines) {
 		void *rseg = (void *)retpolines->sh_addr;
-		apply_retpolines(rseg, rseg + retpolines->sh_size, me);
+		apply_retpolines(rseg, rseg + retpolines->sh_size);
 	}
 	if (returns) {
 		void *rseg = (void *)returns->sh_addr;
-		apply_returns(rseg, rseg + returns->sh_size, me);
+		apply_returns(rseg, rseg + returns->sh_size);
 	}
 	if (alt) {
 		/* patch .altinstructions */
 		void *aseg = (void *)alt->sh_addr;
-		apply_alternatives(aseg, aseg + alt->sh_size, me);
+		apply_alternatives(aseg, aseg + alt->sh_size);
 	}
 	if (calls || alt) {
 		struct callthunk_sites cs = {};
@@ -298,28 +297,8 @@ int module_finalize(const Elf_Ehdr *hdr,
 	}
 	if (ibt_endbr) {
 		void *iseg = (void *)ibt_endbr->sh_addr;
-		apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size, me);
+		apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size);
 	}
-
-	if (orc && orc_ip)
-		unwind_module_init(me, (void *)orc_ip->sh_addr, orc_ip->sh_size,
-				   (void *)orc->sh_addr, orc->sh_size);
-
-	return 0;
-}
-
-int module_post_finalize(const Elf_Ehdr *hdr,
-			 const Elf_Shdr *sechdrs,
-			 struct module *me)
-{
-	const Elf_Shdr *s, *locks = NULL;
-	char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
-
-	for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
-		if (!strcmp(".smp_locks", secstrings + s->sh_name))
-			locks = s;
-	}
-
 	if (locks) {
 		void *lseg = (void *)locks->sh_addr;
 		void *text = me->mem[MOD_TEXT].base;
@@ -329,6 +308,10 @@ int module_post_finalize(const Elf_Ehdr *hdr,
 			text, text_end);
 	}
 
+	if (orc && orc_ip)
+		unwind_module_init(me, (void *)orc_ip->sh_addr, orc_ip->sh_size,
+				   (void *)orc->sh_addr, orc->sh_size);
+
 	return 0;
 }

From patchwork Tue Jan 21 09:57:38 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13946043
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 09/10] module: drop unused module_writable_address()
Date: Tue, 21 Jan 2025 11:57:38 +0200
Message-ID: <20250121095739.986006-10-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

module_writable_address() is unused and can be removed.
Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/module.h | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index e9fc9d1fa476..222099bb07cf 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -774,11 +774,6 @@ static inline bool is_livepatch_module(struct module *mod)
 
 void set_module_sig_enforced(void);
 
-static inline void *module_writable_address(struct module *mod, void *loc)
-{
-	return loc;
-}
-
 #else /* !CONFIG_MODULES... */
 
 static inline struct module *__module_address(unsigned long addr)
@@ -886,11 +881,6 @@ static inline bool module_is_coming(struct module *mod)
 {
 	return false;
 }
-
-static inline void *module_writable_address(struct module *mod, void *loc)
-{
-	return loc;
-}
 #endif /* CONFIG_MODULES */
 
 #ifdef CONFIG_SYSFS

From patchwork Tue Jan 21 09:57:39 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13946044
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v2 10/10] x86: re-enable EXECMEM_ROX support
Date: Tue, 21 Jan 2025 11:57:39 +0200
Message-ID: <20250121095739.986006-11-rppt@kernel.org>
In-Reply-To: <20250121095739.986006-1-rppt@kernel.org>
References: <20250121095739.986006-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

after rework of execmem ROX caches

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ef6cfea9df73..9d7bd0ae48c4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -83,6 +83,7 @@ config X86
 	select ARCH_HAS_DMA_OPS if GART_IOMMU || XEN
 	select ARCH_HAS_EARLY_DEBUG if KGDB
 	select ARCH_HAS_ELF_RANDOMIZE
+	select ARCH_HAS_EXECMEM_ROX if X86_64
 	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL