From patchwork Sun Jan 26 07:47:25 2025
From: Mike Rapoport
To: x86@kernel.org
Cc: Andrew Morton, Andy Lutomirski, Anton Ivanov, Borislav Petkov,
    Brendan Higgins, Daniel Gomez, Daniel Thompson, Dave Hansen,
    David Gow, Douglas Anderson, Ingo Molnar, Jason Wessel,
    Jiri Kosina, Joe Lawrence, Johannes Berg, Josh Poimboeuf,
    "Kirill A. Shutemov", Lorenzo Stoakes, Luis Chamberlain,
    Mark Rutland, Masami Hiramatsu, Mike Rapoport, Miroslav Benes,
    "H. Peter Anvin", Peter Zijlstra, Petr Mladek, Petr Pavlu,
    Rae Moar, Richard Weinberger, Sami Tolvanen, Shuah Khan,
    Song Liu, Steven Rostedt, Thomas Gleixner,
    kgdb-bugreport@lists.sourceforge.net, kunit-dev@googlegroups.com,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, linux-modules@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org, linux-um@lists.infradead.org,
    live-patching@vger.kernel.org
Subject: [PATCH v3 1/9] x86/mm/pat: cpa-test: fix length for CPA_ARRAY test
Date: Sun, 26 Jan 2025 09:47:25 +0200
Message-ID: <20250126074733.1384926-2-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

The CPA_ARRAY test always passes len[1] as the numpages argument to
change_page_attr_set(), although the addresses array is different on each
iteration of the test loop.

Replace len[1] with len[i] so that numpages matches the addresses array.
Fixes: ecc729f1f471 ("x86/mm/cpa: Add ARRAY and PAGES_ARRAY selftests")
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/mm/pat/cpa-test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/cpa-test.c b/arch/x86/mm/pat/cpa-test.c
index 3d2f7f0a6ed1..ad3c1feec990 100644
--- a/arch/x86/mm/pat/cpa-test.c
+++ b/arch/x86/mm/pat/cpa-test.c
@@ -183,7 +183,7 @@ static int pageattr_test(void)
 			break;
 		case 1:
-			err = change_page_attr_set(addrs, len[1], PAGE_CPA_TEST, 1);
+			err = change_page_attr_set(addrs, len[i], PAGE_CPA_TEST, 1);
 			break;
 		case 2:

From patchwork Sun Jan 26 07:47:26 2025
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v3 2/9] x86/mm/pat: drop duplicate variable in cpa_flush()
Date: Sun, 26 Jan 2025 09:47:26 +0200
Message-ID: <20250126074733.1384926-3-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

cpa_flush() has a 'struct cpa_data *data' parameter that is only used to
initialize a local 'struct cpa_data *cpa' variable. Rename the parameter
from 'data' to 'cpa' and drop the declaration of the local 'cpa' variable.
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/mm/pat/set_memory.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 95bc50a8541c..d43b919b11ae 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -396,9 +396,8 @@ static void __cpa_flush_tlb(void *data)
 		flush_tlb_one_kernel(fix_addr(__cpa_addr(cpa, i)));
 }

-static void cpa_flush(struct cpa_data *data, int cache)
+static void cpa_flush(struct cpa_data *cpa, int cache)
 {
-	struct cpa_data *cpa = data;
 	unsigned int i;

 	BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);

From patchwork Sun Jan 26 07:47:27 2025
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v3 3/9] x86/mm/pat: restore large ROX pages after fragmentation
Date: Sun, 26 Jan 2025 09:47:27 +0200
Message-ID: <20250126074733.1384926-4-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>
From: "Kirill A. Shutemov"

Changes to page attributes may fragment the direct mapping over time and
degrade performance when the affected pages contain executable code. With
the current code this is a one-way road: the kernel tries to avoid
splitting large pages, but it does not restore them even when the page
attributes become compatible again. Any change to the mapping may
potentially allow restoring a large page.
Add a hook to the cpa_flush() path that checks whether the pages in the
range that was just touched can be mapped at the PMD level. If the
collapse at the PMD level succeeds, also attempt to collapse the PUD
level.

The collapse logic runs only when a set_memory_ method explicitly sets
the CPA_COLLAPSE flag; for now this is only enabled in set_memory_rox().

CPUs don't like[1] having TLB entries of different sizes for the same
memory, but it appears to be okay as long as these entries have matching
attributes[2]. Therefore it is critical to flush the TLB before any
subsequent changes to the mapping.

Note that we already allow multiple TLB entries of different sizes for
the same memory in the split_large_page() path, so this is not a new
situation.

set_memory_4k() provides a way to use 4k pages on purpose, and the kernel
must not remap such pages as large. Re-use one of the software PTE bits
to mark such pages.

[1] See Erratum 383 of AMD Family 10h Processors
[2] https://lore.kernel.org/linux-mm/1da1b025-cabc-6f04-bde5-e50830d1ecf0@amd.com/

[rppt@kernel.org:
 * s/restore/collapse/
 * update formatting per peterz
 * use 'struct ptdesc' instead of 'struct page' for list of page tables
   to be freed
 * try to collapse PMD first and if it succeeds move on to PUD as peterz
   suggested
 * flush TLB twice: for changes done in the original CPA call and after
   collapsing of large pages
 * update commit message
]

Link: https://lore.kernel.org/all/20200416213229.19174-1-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/include/asm/pgtable_types.h |   2 +
 arch/x86/mm/pat/set_memory.c         | 217 ++++++++++++++++++++++++++-
 include/linux/vm_event_item.h        |   2 +
 mm/vmstat.c                          |   2 +
 4 files changed, 219 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 4b804531b03c..c90e9c51edb7 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -33,6 +33,7 @@
 #define _PAGE_BIT_CPA_TEST	_PAGE_BIT_SOFTW1
 #define _PAGE_BIT_UFFD_WP	_PAGE_BIT_SOFTW2 /* userfaultfd wrprotected */
 #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
+#define _PAGE_BIT_KERNEL_4K	_PAGE_BIT_SOFTW3 /* page must not be converted to large */
 #define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4

 #ifdef CONFIG_X86_64
@@ -64,6 +65,7 @@
 #define _PAGE_PAT_LARGE	(_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
 #define _PAGE_CPA_TEST	(_AT(pteval_t, 1) << _PAGE_BIT_CPA_TEST)
+#define _PAGE_KERNEL_4K	(_AT(pteval_t, 1) << _PAGE_BIT_KERNEL_4K)
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
 #define _PAGE_PKEY_BIT0	(_AT(pteval_t, 1) << _PAGE_BIT_PKEY_BIT0)
 #define _PAGE_PKEY_BIT1	(_AT(pteval_t, 1) << _PAGE_BIT_PKEY_BIT1)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index d43b919b11ae..18c233048706 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -75,6 +75,7 @@ static DEFINE_SPINLOCK(cpa_lock);
 #define CPA_ARRAY		2
 #define CPA_PAGES_ARRAY		4
 #define CPA_NO_CHECK_ALIAS	8 /* Do not search for aliases */
+#define CPA_COLLAPSE		16 /* try to collapse large pages */

 static inline pgprot_t cachemode2pgprot(enum page_cache_mode pcm)
 {
@@ -107,6 +108,18 @@ static void split_page_count(int level)
 	direct_pages_count[level - 1] += PTRS_PER_PTE;
 }

+static void collapse_page_count(int level)
+{
+	direct_pages_count[level]++;
+	if (system_state == SYSTEM_RUNNING) {
+		if (level == PG_LEVEL_2M)
+			count_vm_event(DIRECT_MAP_LEVEL2_COLLAPSE);
+		else if (level == PG_LEVEL_1G)
+			count_vm_event(DIRECT_MAP_LEVEL3_COLLAPSE);
+	}
+	direct_pages_count[level - 1] -= PTRS_PER_PTE;
+}
+
 void arch_report_meminfo(struct seq_file *m)
 {
 	seq_printf(m, "DirectMap4k:    %8lu kB\n",
@@ -124,6 +137,7 @@ void arch_report_meminfo(struct seq_file *m)
 }
 #else
 static inline void split_page_count(int level) { }
+static inline void collapse_page_count(int level) { }
 #endif

 #ifdef CONFIG_X86_CPA_STATISTICS
@@ -396,6 +410,40 @@ static void __cpa_flush_tlb(void *data)
 		flush_tlb_one_kernel(fix_addr(__cpa_addr(cpa, i)));
 }

+static int collapse_large_pages(unsigned long addr, struct list_head *pgtables);
+
+static void cpa_collapse_large_pages(struct cpa_data *cpa)
+{
+	unsigned long start, addr, end;
+	struct ptdesc *ptdesc, *tmp;
+	LIST_HEAD(pgtables);
+	int collapsed = 0;
+	int i;
+
+	if (cpa->flags & (CPA_PAGES_ARRAY | CPA_ARRAY)) {
+		for (i = 0; i < cpa->numpages; i++)
+			collapsed += collapse_large_pages(__cpa_addr(cpa, i),
+							  &pgtables);
+	} else {
+		addr = __cpa_addr(cpa, 0);
+		start = addr & PMD_MASK;
+		end = addr + PAGE_SIZE * cpa->numpages;
+
+		for (addr = start; within(addr, start, end); addr += PMD_SIZE)
+			collapsed += collapse_large_pages(addr, &pgtables);
+	}
+
+	if (!collapsed)
+		return;
+
+	flush_tlb_all();
+
+	list_for_each_entry_safe(ptdesc, tmp, &pgtables, pt_list) {
+		list_del(&ptdesc->pt_list);
+		__free_page(ptdesc_page(ptdesc));
+	}
+}
+
 static void cpa_flush(struct cpa_data *cpa, int cache)
 {
 	unsigned int i;
@@ -404,7 +452,7 @@ static void cpa_flush(struct cpa_data *cpa, int cache)

 	if (cache && !static_cpu_has(X86_FEATURE_CLFLUSH)) {
 		cpa_flush_all(cache);
-		return;
+		goto collapse_large_pages;
 	}

 	if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
@@ -413,7 +461,7 @@
 	on_each_cpu(__cpa_flush_tlb, cpa, 1);

 	if (!cache)
-		return;
+		goto collapse_large_pages;

 	mb();
 	for (i = 0; i < cpa->numpages; i++) {
@@ -429,6 +477,10 @@ static void cpa_flush(struct cpa_data *cpa, int cache)
 		clflush_cache_range_opt((void *)fix_addr(addr), PAGE_SIZE);
 	}
 	mb();
+
+collapse_large_pages:
+	if (cpa->flags & CPA_COLLAPSE)
+		cpa_collapse_large_pages(cpa);
 }

 static bool overlaps(unsigned long r1_start, unsigned long r1_end,
@@ -1198,6 +1250,161 @@ static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
 	return 0;
 }

+static int collapse_pmd_page(pmd_t *pmd, unsigned long addr,
+			     struct list_head *pgtables)
+{
+	pmd_t _pmd, old_pmd;
+	pte_t *pte, first;
+	unsigned long pfn;
+	pgprot_t pgprot;
+	int i = 0;
+
+	addr &= PMD_MASK;
+	pte = pte_offset_kernel(pmd, addr);
+	first = *pte;
+	pfn = pte_pfn(first);
+
+	/* Make sure alignment is suitable */
+	if (PFN_PHYS(pfn) & ~PMD_MASK)
+		return 0;
+
+	/* The page is 4k intentionally */
+	if (pte_flags(first) & _PAGE_KERNEL_4K)
+		return 0;
+
+	/* Check that the rest of PTEs are compatible with the first one */
+	for (i = 1, pte++; i < PTRS_PER_PTE; i++, pte++) {
+		pte_t entry = *pte;
+
+		if (!pte_present(entry))
+			return 0;
+		if (pte_flags(entry) != pte_flags(first))
+			return 0;
+		if (pte_pfn(entry) != pte_pfn(first) + i)
+			return 0;
+	}
+
+	old_pmd = *pmd;
+
+	/* Success: set up a large page */
+	pgprot = pgprot_4k_2_large(pte_pgprot(first));
+	pgprot_val(pgprot) |= _PAGE_PSE;
+	_pmd = pfn_pmd(pfn, pgprot);
+	set_pmd(pmd, _pmd);
+
+	/* Queue the page table to be freed after TLB flush */
+	list_add(&page_ptdesc(pmd_page(old_pmd))->pt_list, pgtables);
+
+	if (IS_ENABLED(CONFIG_X86_32) && !SHARED_KERNEL_PMD) {
+		struct page *page;
+
+		/* Update all PGD tables to use the same large page */
+		list_for_each_entry(page, &pgd_list, lru) {
+			pgd_t *pgd = (pgd_t *)page_address(page) + pgd_index(addr);
+			p4d_t *p4d = p4d_offset(pgd, addr);
+			pud_t *pud = pud_offset(p4d, addr);
+			pmd_t *pmd = pmd_offset(pud, addr);
+			/* Something is wrong if entries doesn't match */
+			if (WARN_ON(pmd_val(old_pmd) != pmd_val(*pmd)))
+				continue;
+			set_pmd(pmd, _pmd);
+		}
+	}
+
+	if (virt_addr_valid(addr) && pfn_range_is_mapped(pfn, pfn + 1))
+		collapse_page_count(PG_LEVEL_2M);
+
+	return 1;
+}
+
+static int collapse_pud_page(pud_t *pud, unsigned long addr,
+			     struct list_head *pgtables)
+{
+	unsigned long pfn;
+	pmd_t *pmd, first;
+	int i;
+
+	if (!direct_gbpages)
+		return 0;
+
+	addr &= PUD_MASK;
+	pmd = pmd_offset(pud, addr);
+	first = *pmd;
+
+	/*
+	 * To restore PUD page all PMD entries must be large and
+	 * have suitable alignment
+	 */
+	pfn = pmd_pfn(first);
+	if (!pmd_leaf(first) || (PFN_PHYS(pfn) & ~PUD_MASK))
+		return 0;
+
+	/*
+	 * To restore PUD page, all following PMDs must be compatible with the
+	 * first one.
+	 */
+	for (i = 1, pmd++; i < PTRS_PER_PMD; i++, pmd++) {
+		pmd_t entry = *pmd;
+
+		if (!pmd_present(entry) || !pmd_leaf(entry))
+			return 0;
+		if (pmd_flags(entry) != pmd_flags(first))
+			return 0;
+		if (pmd_pfn(entry) != pmd_pfn(first) + i * PTRS_PER_PTE)
+			return 0;
+	}
+
+	/* Restore PUD page and queue page table to be freed after TLB flush */
+	list_add(&page_ptdesc(pud_page(*pud))->pt_list, pgtables);
+	set_pud(pud, pfn_pud(pfn, pmd_pgprot(first)));
+
+	if (virt_addr_valid(addr) && pfn_range_is_mapped(pfn, pfn + 1))
+		collapse_page_count(PG_LEVEL_1G);
+
+	return 1;
+}
+
+/*
+ * Collapse PMD and PUD pages in the kernel mapping around the address where
+ * possible.
+ *
+ * Caller must flush TLB and free page tables queued on the list before
+ * touching the new entries. CPU must not see TLB entries of different size
+ * with different attributes.
+ */ +static int collapse_large_pages(unsigned long addr, struct list_head *pgtables) +{ + int collapsed = 0; + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud; + pmd_t *pmd; + + addr &= PMD_MASK; + + spin_lock(&pgd_lock); + pgd = pgd_offset_k(addr); + if (pgd_none(*pgd)) + goto out; + p4d = p4d_offset(pgd, addr); + if (p4d_none(*p4d)) + goto out; + pud = pud_offset(p4d, addr); + if (!pud_present(*pud) || pud_leaf(*pud)) + goto out; + pmd = pmd_offset(pud, addr); + if (!pmd_present(*pmd) || pmd_leaf(*pmd)) + goto out; + + collapsed = collapse_pmd_page(pmd, addr, pgtables); + if (collapsed) + collapsed += collapse_pud_page(pud, addr, pgtables); + +out: + spin_unlock(&pgd_lock); + return collapsed; +} + static bool try_to_free_pte_page(pte_t *pte) { int i; @@ -2121,7 +2328,8 @@ int set_memory_rox(unsigned long addr, int numpages) if (__supported_pte_mask & _PAGE_NX) clr.pgprot |= _PAGE_NX; - return change_page_attr_clear(&addr, numpages, clr, 0); + return change_page_attr_set_clr(&addr, numpages, __pgprot(0), clr, 0, + CPA_COLLAPSE, NULL); } int set_memory_rw(unsigned long addr, int numpages) @@ -2148,7 +2356,8 @@ int set_memory_p(unsigned long addr, int numpages) int set_memory_4k(unsigned long addr, int numpages) { - return change_page_attr_set_clr(&addr, numpages, __pgprot(0), + return change_page_attr_set_clr(&addr, numpages, + __pgprot(_PAGE_KERNEL_4K), __pgprot(0), 1, 0, NULL); } diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h index f70d0958095c..5a37cb2b6f93 100644 --- a/include/linux/vm_event_item.h +++ b/include/linux/vm_event_item.h @@ -151,6 +151,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT, #ifdef CONFIG_X86 DIRECT_MAP_LEVEL2_SPLIT, DIRECT_MAP_LEVEL3_SPLIT, + DIRECT_MAP_LEVEL2_COLLAPSE, + DIRECT_MAP_LEVEL3_COLLAPSE, #endif #ifdef CONFIG_PER_VMA_LOCK_STATS VMA_LOCK_SUCCESS, diff --git a/mm/vmstat.c b/mm/vmstat.c index 16bfe1c694dd..88998725f1c5 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1435,6 +1435,8 @@ const char * 
const vmstat_text[] = {
 #ifdef CONFIG_X86
 	"direct_map_level2_splits",
 	"direct_map_level3_splits",
+	"direct_map_level2_collapses",
+	"direct_map_level3_collapses",
 #endif
 #ifdef CONFIG_PER_VMA_LOCK_STATS
 	"vma_lock_success",

From patchwork Sun Jan 26 07:47:28 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13950605
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v3 4/9] execmem: don't remove ROX cache from the direct map
Date: Sun, 26 Jan 2025 09:47:28 +0200
Message-ID: <20250126074733.1384926-5-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

The memory allocated for the ROX cache was removed from the direct map to
reduce the amount of direct map updates. However, this cannot be tolerated
by /proc/kcore, which accesses module memory using vread_iter(); the latter
relies on vmalloc_to_page() and copy_page_to_iter_nofault().

Instead of removing the ROX cache memory from the direct map and mapping it
as ROX in vmalloc space, simply call set_memory_rox(), which takes care of
the proper permissions both in vmalloc space and in the direct map.
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/execmem.c | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/mm/execmem.c b/mm/execmem.c
index 317b6a8d35be..04b0bf1b5025 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -257,7 +257,6 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
 static int execmem_cache_populate(struct execmem_range *range, size_t size)
 {
 	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
-	unsigned long start, end;
 	struct vm_struct *vm;
 	size_t alloc_size;
 	int err = -ENOMEM;
@@ -275,26 +274,18 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	/* fill memory with instructions that will trap */
 	execmem_fill_trapping_insns(p, alloc_size, /* writable = */ true);

-	start = (unsigned long)p;
-	end = start + alloc_size;
-
-	vunmap_range(start, end);
-
-	err = execmem_set_direct_map_valid(vm, false);
-	if (err)
-		goto err_free_mem;
-
-	err = vmap_pages_range_noflush(start, end, range->pgprot, vm->pages,
-				       PMD_SHIFT);
+	err = set_memory_rox((unsigned long)p, vm->nr_pages);
 	if (err)
 		goto err_free_mem;

 	err = execmem_cache_add(p, alloc_size);
 	if (err)
-		goto err_free_mem;
+		goto err_reset_direct_map;

 	return 0;

+err_reset_direct_map:
+	execmem_set_direct_map_valid(vm, true);
 err_free_mem:
 	vfree(p);
 	return err;

From patchwork Sun Jan 26 07:47:29 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13950606
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v3 5/9] execmem: add API for temporal remapping as RW and restoring ROX afterwards
Date: Sun, 26 Jan 2025 09:47:29 +0200
Message-ID: <20250126074733.1384926-6-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

Using a writable copy for ROX memory is cumbersome and error prone. Add an
API that allows temporarily remapping ranges in the ROX cache as writable
and then restoring their read-only-execute permissions. This API will
later be used in the modules code and will allow removing the nasty games
with the writable copy in alternatives patching on x86.

Restoring the ROX permissions relies on the ability of the architecture to
reconstruct large pages in its set_memory_rox() method.

Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/execmem.h | 31 +++++++++++++++++++++++++++++++
 mm/execmem.c            | 22 ++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/include/linux/execmem.h b/include/linux/execmem.h
index 64130ae19690..65655a5d1be2 100644
--- a/include/linux/execmem.h
+++ b/include/linux/execmem.h
@@ -65,6 +65,37 @@ enum execmem_range_flags {
  * Architectures that use EXECMEM_ROX_CACHE must implement this.
  */
 void execmem_fill_trapping_insns(void *ptr, size_t size, bool writable);
+
+/**
+ * execmem_make_temp_rw - temporarily remap region with read-write
+ *			  permissions
+ * @ptr:	address of the region to remap
+ * @size:	size of the region to remap
+ *
+ * Remaps a part of the cached large page in the ROX cache in the range
+ * [@ptr, @ptr + @size) as writable and not executable. The caller must
+ * have exclusive ownership of this range and ensure nothing will try to
+ * execute code in this range.
+ *
+ * Return: 0 on success or negative error code on failure.
+ */
+int execmem_make_temp_rw(void *ptr, size_t size);
+
+/**
+ * execmem_restore_rox - restore read-only-execute permissions
+ * @ptr:	address of the region to remap
+ * @size:	size of the region to remap
+ *
+ * Restores read-only-execute permissions on a range [@ptr, @ptr + @size)
+ * after it was temporarily remapped as writable. Relies on architecture
+ * implementation of set_memory_rox() to restore mapping using large pages.
+ *
+ * Return: 0 on success or negative error code on failure.
+ */
+int execmem_restore_rox(void *ptr, size_t size);
+#else
+static inline int execmem_make_temp_rw(void *ptr, size_t size) { return 0; }
+static inline int execmem_restore_rox(void *ptr, size_t size) { return 0; }
 #endif

 /**
diff --git a/mm/execmem.c b/mm/execmem.c
index 04b0bf1b5025..e6c4f5076ca8 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -335,6 +335,28 @@ static bool execmem_cache_free(void *ptr)
 	return true;
 }

+int execmem_make_temp_rw(void *ptr, size_t size)
+{
+	unsigned int nr = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long addr = (unsigned long)ptr;
+	int ret;
+
+	ret = set_memory_nx(addr, nr);
+	if (ret)
+		return ret;
+
+	return set_memory_rw(addr, nr);
+}
+
+int execmem_restore_rox(void *ptr, size_t size)
+{
+	unsigned int nr = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long addr = (unsigned long)ptr;
+
+	return set_memory_rox(addr, nr);
+}
+
 #else /* CONFIG_ARCH_HAS_EXECMEM_ROX */
 static void *execmem_cache_alloc(struct execmem_range *range, size_t size)
 {

From patchwork Sun Jan 26 07:47:30 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13950607
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v3 6/9] module: switch to execmem API for remapping as RW and restoring ROX
Date: Sun, 26 Jan 2025 09:47:30 +0200
Message-ID: <20250126074733.1384926-7-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

Instead of using a writable copy for module text sections, temporarily
remap the memory allocated from execmem's ROX cache as writable and
restore its ROX permissions after the module is formed.

This will allow removing the nasty games with the writable copy in
alternatives patching on x86.

Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/module.h       |  8 +---
 include/linux/moduleloader.h |  4 --
 kernel/module/main.c         | 78 ++++++++++--------------------------
 kernel/module/strict_rwx.c   |  9 +++--
 4 files changed, 27 insertions(+), 72 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index b3a643435357..6a24e9395cb2 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -367,7 +367,6 @@ enum mod_mem_type {
 struct module_memory {
 	void *base;
-	void *rw_copy;
 	bool is_rox;
 	unsigned int size;
@@ -769,14 +768,9 @@ static inline bool is_livepatch_module(struct module *mod)

 void set_module_sig_enforced(void);

-void *__module_writable_address(struct module *mod, void *loc);
-
 static inline void *module_writable_address(struct module *mod, void *loc)
 {
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) || !mod ||
-	    mod->state != MODULE_STATE_UNFORMED)
-		return loc;
-	return __module_writable_address(mod, loc);
+	return loc;
 }

 #else /* !CONFIG_MODULES...
 */
diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h
index 1f5507ba5a12..e395461d59e5 100644
--- a/include/linux/moduleloader.h
+++ b/include/linux/moduleloader.h
@@ -108,10 +108,6 @@ int module_finalize(const Elf_Ehdr *hdr,
 		    const Elf_Shdr *sechdrs,
 		    struct module *mod);

-int module_post_finalize(const Elf_Ehdr *hdr,
-			 const Elf_Shdr *sechdrs,
-			 struct module *mod);
-
 #ifdef CONFIG_MODULES
 void flush_module_init_free_work(void);
 #else
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 5399c182b3cb..4a02503836d7 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -1221,18 +1221,6 @@ void __weak module_arch_freeing_init(struct module *mod)
 {
 }

-void *__module_writable_address(struct module *mod, void *loc)
-{
-	for_class_mod_mem_type(type, text) {
-		struct module_memory *mem = &mod->mem[type];
-
-		if (loc >= mem->base && loc < mem->base + mem->size)
-			return loc + (mem->rw_copy - mem->base);
-	}
-
-	return loc;
-}
-
 static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 {
 	unsigned int size = PAGE_ALIGN(mod->mem[type].size);
@@ -1250,21 +1238,15 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 	if (!ptr)
 		return -ENOMEM;

-	mod->mem[type].base = ptr;
-
 	if (execmem_is_rox(execmem_type)) {
-		ptr = vzalloc(size);
+		int err = execmem_make_temp_rw(ptr, size);

-		if (!ptr) {
-			execmem_free(mod->mem[type].base);
+		if (err) {
+			execmem_free(ptr);
 			return -ENOMEM;
 		}

-		mod->mem[type].rw_copy = ptr;
 		mod->mem[type].is_rox = true;
-	} else {
-		mod->mem[type].rw_copy = mod->mem[type].base;
-		memset(mod->mem[type].base, 0, size);
 	}

 	/*
@@ -1280,16 +1262,26 @@ static int module_memory_alloc(struct module *mod, enum mod_mem_type type)
 	 */
 	kmemleak_not_leak(ptr);

+	memset(ptr, 0, size);
+	mod->mem[type].base = ptr;
+
 	return 0;
 }

+static void module_memory_restore_rox(struct module *mod)
+{
+	for_class_mod_mem_type(type, text) {
+		struct module_memory *mem = &mod->mem[type];
+
+		if (mem->is_rox)
+			execmem_restore_rox(mem->base, mem->size);
+	}
+}
+
 static void module_memory_free(struct module *mod, enum mod_mem_type type)
 {
 	struct module_memory *mem = &mod->mem[type];

-	if (mem->is_rox)
-		vfree(mem->rw_copy);
-
 	execmem_free(mem->base);
 }

@@ -2561,7 +2553,6 @@ static int move_module(struct module *mod, struct load_info *info)
 	for_each_mod_mem_type(type) {
 		if (!mod->mem[type].size) {
 			mod->mem[type].base = NULL;
-			mod->mem[type].rw_copy = NULL;
 			continue;
 		}
@@ -2578,7 +2569,6 @@ static int move_module(struct module *mod, struct load_info *info)
 		void *dest;
 		Elf_Shdr *shdr = &info->sechdrs[i];
 		const char *sname;
-		unsigned long addr;

 		if (!(shdr->sh_flags & SHF_ALLOC))
 			continue;
@@ -2599,14 +2589,12 @@ static int move_module(struct module *mod, struct load_info *info)
 				ret = PTR_ERR(dest);
 				goto out_err;
 			}
-			addr = (unsigned long)dest;
 			codetag_section_found = true;
 		} else {
 			enum mod_mem_type type = shdr->sh_entsize >> SH_ENTSIZE_TYPE_SHIFT;
 			unsigned long offset = shdr->sh_entsize & SH_ENTSIZE_OFFSET_MASK;

-			addr = (unsigned long)mod->mem[type].base + offset;
-			dest = mod->mem[type].rw_copy + offset;
+			dest = mod->mem[type].base + offset;
 		}

 		if (shdr->sh_type != SHT_NOBITS) {
@@ -2629,13 +2617,14 @@ static int move_module(struct module *mod, struct load_info *info)
 		 * users of info can keep taking advantage and using the newly
 		 * minted official memory area.
 		 */
-		shdr->sh_addr = addr;
+		shdr->sh_addr = (unsigned long)dest;
 		pr_debug("\t0x%lx 0x%.8lx %s\n", (long)shdr->sh_addr,
 			 (long)shdr->sh_size, info->secstrings + shdr->sh_name);
 	}

 	return 0;
 out_err:
+	module_memory_restore_rox(mod);
 	for (t--; t >= 0; t--)
 		module_memory_free(mod, t);
 	if (codetag_section_found)
@@ -2782,17 +2771,8 @@ int __weak module_finalize(const Elf_Ehdr *hdr,
 	return 0;
 }

-int __weak module_post_finalize(const Elf_Ehdr *hdr,
-				const Elf_Shdr *sechdrs,
-				struct module *me)
-{
-	return 0;
-}
-
 static int post_relocation(struct module *mod, const struct load_info *info)
 {
-	int ret;
-
 	/* Sort exception table now relocations are done. */
 	sort_extable(mod->extable, mod->extable + mod->num_exentries);

@@ -2804,24 +2784,7 @@ static int post_relocation(struct module *mod, const struct load_info *info)
 	add_kallsyms(mod, info);

 	/* Arch-specific module finalizing. */
-	ret = module_finalize(info->hdr, info->sechdrs, mod);
-	if (ret)
-		return ret;
-
-	for_each_mod_mem_type(type) {
-		struct module_memory *mem = &mod->mem[type];
-
-		if (mem->is_rox) {
-			if (!execmem_update_copy(mem->base, mem->rw_copy,
-						 mem->size))
-				return -ENOMEM;
-
-			vfree(mem->rw_copy);
-			mem->rw_copy = NULL;
-		}
-	}
-
-	return module_post_finalize(info->hdr, info->sechdrs, mod);
+	return module_finalize(info->hdr, info->sechdrs, mod);
 }

 /* Call module constructors.
*/ @@ -3417,6 +3380,7 @@ static int load_module(struct load_info *info, const char __user *uargs, mod->mem[type].size); } + module_memory_restore_rox(mod); module_deallocate(mod, info); free_copy: /* diff --git a/kernel/module/strict_rwx.c b/kernel/module/strict_rwx.c index 239e5013359d..ce47b6346f27 100644 --- a/kernel/module/strict_rwx.c +++ b/kernel/module/strict_rwx.c @@ -9,6 +9,7 @@ #include #include #include +#include #include "internal.h" static int module_set_memory(const struct module *mod, enum mod_mem_type type, @@ -32,12 +33,12 @@ static int module_set_memory(const struct module *mod, enum mod_mem_type type, int module_enable_text_rox(const struct module *mod) { for_class_mod_mem_type(type, text) { + const struct module_memory *mem = &mod->mem[type]; int ret; - if (mod->mem[type].is_rox) - continue; - - if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) + if (mem->is_rox) + ret = execmem_restore_rox(mem->base, mem->size); + else if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) ret = module_set_memory(mod, type, set_memory_rox); else ret = module_set_memory(mod, type, set_memory_x);
From patchwork Sun Jan 26 07:47:31 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13950608
From: Mike Rapoport
To: x86@kernel.org
Cc: Andrew Morton , Andy Lutomirski , Anton Ivanov , Borislav Petkov , Brendan Higgins , Daniel Gomez , Daniel Thompson , Dave Hansen , David Gow , Douglas Anderson , Ingo Molnar , Jason Wessel , Jiri Kosina , Joe Lawrence , Johannes Berg , Josh Poimboeuf , "Kirill A. Shutemov" , Lorenzo Stoakes , Luis Chamberlain , Mark Rutland , Masami Hiramatsu , Mike Rapoport , Miroslav Benes , "H.
Peter Anvin" , Peter Zijlstra , Petr Mladek , Petr Pavlu , Rae Moar , Richard Weinberger , Sami Tolvanen , Shuah Khan , Song Liu , Steven Rostedt , Thomas Gleixner , kgdb-bugreport@lists.sourceforge.net, kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-um@lists.infradead.org, live-patching@vger.kernel.org
Subject: [PATCH v3 7/9] Revert "x86/module: prepare module loading for ROX allocations of text"
Date: Sun, 26 Jan 2025 09:47:31 +0200
Message-ID: <20250126074733.1384926-8-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

The module code does not create a writable copy of the executable memory anymore so there is no need to handle it in module relocation and alternatives patching.

This reverts commit 9bfc4824fd4836c16bb44f922bfaffba5da3e4f3.

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/um/kernel/um_arch.c | 11 +-
 arch/x86/entry/vdso/vma.c | 3 +-
 arch/x86/include/asm/alternative.h | 14 +--
 arch/x86/kernel/alternative.c | 181 ++++++++++++-----------------
 arch/x86/kernel/ftrace.c | 30 +++--
 arch/x86/kernel/module.c | 45 +++----
 6 files changed, 117 insertions(+), 167 deletions(-)
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c index 8037a967225d..d2cc2c69a8c4 100644 --- a/arch/um/kernel/um_arch.c +++ b/arch/um/kernel/um_arch.c @@ -440,25 +440,24 @@ void __init arch_cpu_finalize_init(void) os_check_bugs(); } -void apply_seal_endbr(s32 *start, s32 *end, struct module *mod) +void apply_seal_endbr(s32 *start, s32 *end) { } -void apply_retpolines(s32 *start, s32 *end, struct module *mod) +void apply_retpolines(s32 *start, s32 *end) { } -void apply_returns(s32 *start, s32 *end, struct module *mod) +void apply_returns(s32 *start, s32 *end) { } void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, - s32 *start_cfi, s32 *end_cfi, struct module *mod) + s32 *start_cfi, s32 *end_cfi) { } -void apply_alternatives(struct alt_instr *start, struct alt_instr *end, - struct module *mod) +void apply_alternatives(struct alt_instr *start, struct alt_instr *end) { } diff --git a/arch/x86/entry/vdso/vma.c
b/arch/x86/entry/vdso/vma.c index 39e6efc1a9ca..bfc7cabf4017 100644 --- a/arch/x86/entry/vdso/vma.c +++ b/arch/x86/entry/vdso/vma.c @@ -48,8 +48,7 @@ int __init init_vdso_image(const struct vdso_image *image) apply_alternatives((struct alt_instr *)(image->data + image->alt), (struct alt_instr *)(image->data + image->alt + - image->alt_len), - NULL); + image->alt_len)); return 0; } diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index dc03a647776d..ca9ae606aab9 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -96,16 +96,16 @@ extern struct alt_instr __alt_instructions[], __alt_instructions_end[]; * instructions were patched in already: */ extern int alternatives_patched; -struct module; extern void alternative_instructions(void); -extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end, - struct module *mod); -extern void apply_retpolines(s32 *start, s32 *end, struct module *mod); -extern void apply_returns(s32 *start, s32 *end, struct module *mod); -extern void apply_seal_endbr(s32 *start, s32 *end, struct module *mod); +extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end); +extern void apply_retpolines(s32 *start, s32 *end); +extern void apply_returns(s32 *start, s32 *end); +extern void apply_seal_endbr(s32 *start, s32 *end); extern void apply_fineibt(s32 *start_retpoline, s32 *end_retpoine, - s32 *start_cfi, s32 *end_cfi, struct module *mod); + s32 *start_cfi, s32 *end_cfi); + +struct module; struct callthunk_sites { s32 *call_start, *call_end; diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 243843e44e89..d17518ca19b8 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -392,10 +392,8 @@ EXPORT_SYMBOL(BUG_func); * Rewrite the "call BUG_func" replacement to point to the target of the * indirect pv_ops call "call *disp(%ip)". 
*/ -static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a, - struct module *mod) +static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a) { - u8 *wr_instr = module_writable_address(mod, instr); void *target, *bug = &BUG_func; s32 disp; @@ -405,14 +403,14 @@ static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a, } if (a->instrlen != 6 || - wr_instr[0] != CALL_RIP_REL_OPCODE || - wr_instr[1] != CALL_RIP_REL_MODRM) { + instr[0] != CALL_RIP_REL_OPCODE || + instr[1] != CALL_RIP_REL_MODRM) { pr_err("ALT_FLAG_DIRECT_CALL set for unrecognized indirect call\n"); BUG(); } /* Skip CALL_RIP_REL_OPCODE and CALL_RIP_REL_MODRM */ - disp = *(s32 *)(wr_instr + 2); + disp = *(s32 *)(instr + 2); #ifdef CONFIG_X86_64 /* ff 15 00 00 00 00 call *0x0(%rip) */ /* target address is stored at "next instruction + disp". */ @@ -450,8 +448,7 @@ static inline u8 * instr_va(struct alt_instr *i) * to refetch changed I$ lines. */ void __init_or_module noinline apply_alternatives(struct alt_instr *start, - struct alt_instr *end, - struct module *mod) + struct alt_instr *end) { u8 insn_buff[MAX_PATCH_LEN]; u8 *instr, *replacement; @@ -480,7 +477,6 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, */ for (a = start; a < end; a++) { int insn_buff_sz = 0; - u8 *wr_instr, *wr_replacement; /* * In case of nested ALTERNATIVE()s the outer alternative might @@ -494,11 +490,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, } instr = instr_va(a); - wr_instr = module_writable_address(mod, instr); - replacement = (u8 *)&a->repl_offset + a->repl_offset; - wr_replacement = module_writable_address(mod, replacement); - BUG_ON(a->instrlen > sizeof(insn_buff)); BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32); @@ -509,9 +501,9 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * patch if feature is *NOT* present. 
*/ if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) { - memcpy(insn_buff, wr_instr, a->instrlen); + memcpy(insn_buff, instr, a->instrlen); optimize_nops(instr, insn_buff, a->instrlen); - text_poke_early(wr_instr, insn_buff, a->instrlen); + text_poke_early(instr, insn_buff, a->instrlen); continue; } @@ -521,12 +513,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, instr, instr, a->instrlen, replacement, a->replacementlen, a->flags); - memcpy(insn_buff, wr_replacement, a->replacementlen); + memcpy(insn_buff, replacement, a->replacementlen); insn_buff_sz = a->replacementlen; if (a->flags & ALT_FLAG_DIRECT_CALL) { - insn_buff_sz = alt_replace_call(instr, insn_buff, a, - mod); + insn_buff_sz = alt_replace_call(instr, insn_buff, a); if (insn_buff_sz < 0) continue; } @@ -536,11 +527,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, apply_relocation(insn_buff, instr, a->instrlen, replacement, a->replacementlen); - DUMP_BYTES(ALT, wr_instr, a->instrlen, "%px: old_insn: ", instr); + DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); DUMP_BYTES(ALT, insn_buff, insn_buff_sz, "%px: final_insn: ", instr); - text_poke_early(wr_instr, insn_buff, insn_buff_sz); + text_poke_early(instr, insn_buff, insn_buff_sz); } kasan_enable_current(); @@ -731,20 +722,18 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) /* * Generated by 'objtool --retpoline'. 
*/ -void __init_or_module noinline apply_retpolines(s32 *start, s32 *end, - struct module *mod) +void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { s32 *s; for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr = module_writable_address(mod, addr); struct insn insn; int len, ret; u8 bytes[16]; u8 op1, op2; - ret = insn_decode_kernel(&insn, wr_addr); + ret = insn_decode_kernel(&insn, addr); if (WARN_ON_ONCE(ret < 0)) continue; @@ -772,9 +761,9 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end, len = patch_retpoline(addr, &insn, bytes); if (len == insn.length) { optimize_nops(addr, bytes, len); - DUMP_BYTES(RETPOLINE, ((u8*)wr_addr), len, "%px: orig: ", addr); + DUMP_BYTES(RETPOLINE, ((u8*)addr), len, "%px: orig: ", addr); DUMP_BYTES(RETPOLINE, ((u8*)bytes), len, "%px: repl: ", addr); - text_poke_early(wr_addr, bytes, len); + text_poke_early(addr, bytes, len); } } } @@ -810,8 +799,7 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes) return i; } -void __init_or_module noinline apply_returns(s32 *start, s32 *end, - struct module *mod) +void __init_or_module noinline apply_returns(s32 *start, s32 *end) { s32 *s; @@ -820,13 +808,12 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end, for (s = start; s < end; s++) { void *dest = NULL, *addr = (void *)s + *s; - void *wr_addr = module_writable_address(mod, addr); struct insn insn; int len, ret; u8 bytes[16]; u8 op; - ret = insn_decode_kernel(&insn, wr_addr); + ret = insn_decode_kernel(&insn, addr); if (WARN_ON_ONCE(ret < 0)) continue; @@ -846,35 +833,32 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end, len = patch_return(addr, &insn, bytes); if (len == insn.length) { - DUMP_BYTES(RET, ((u8*)wr_addr), len, "%px: orig: ", addr); + DUMP_BYTES(RET, ((u8*)addr), len, "%px: orig: ", addr); DUMP_BYTES(RET, ((u8*)bytes), len, "%px: repl: ", addr); - text_poke_early(wr_addr, bytes, len); + text_poke_early(addr, 
bytes, len); } } } #else -void __init_or_module noinline apply_returns(s32 *start, s32 *end, - struct module *mod) { } +void __init_or_module noinline apply_returns(s32 *start, s32 *end) { } #endif /* CONFIG_MITIGATION_RETHUNK */ #else /* !CONFIG_MITIGATION_RETPOLINE || !CONFIG_OBJTOOL */ -void __init_or_module noinline apply_retpolines(s32 *start, s32 *end, - struct module *mod) { } -void __init_or_module noinline apply_returns(s32 *start, s32 *end, - struct module *mod) { } +void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { } +void __init_or_module noinline apply_returns(s32 *start, s32 *end) { } #endif /* CONFIG_MITIGATION_RETPOLINE && CONFIG_OBJTOOL */ #ifdef CONFIG_X86_KERNEL_IBT -static void poison_cfi(void *addr, void *wr_addr); +static void poison_cfi(void *addr); -static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn) +static void __init_or_module poison_endbr(void *addr, bool warn) { u32 endbr, poison = gen_endbr_poison(); - if (WARN_ON_ONCE(get_kernel_nofault(endbr, wr_addr))) + if (WARN_ON_ONCE(get_kernel_nofault(endbr, addr))) return; if (!is_endbr(endbr)) { @@ -889,7 +873,7 @@ static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn) */ DUMP_BYTES(ENDBR, ((u8*)addr), 4, "%px: orig: ", addr); DUMP_BYTES(ENDBR, ((u8*)&poison), 4, "%px: repl: ", addr); - text_poke_early(wr_addr, &poison, 4); + text_poke_early(addr, &poison, 4); } /* @@ -898,23 +882,22 @@ static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn) * Seal the functions for indirect calls by clobbering the ENDBR instructions * and the kCFI hash value. 
*/ -void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end, struct module *mod) +void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end) { s32 *s; for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr = module_writable_address(mod, addr); - poison_endbr(addr, wr_addr, true); + poison_endbr(addr, true); if (IS_ENABLED(CONFIG_FINEIBT)) - poison_cfi(addr - 16, wr_addr - 16); + poison_cfi(addr - 16); } } #else -void __init_or_module apply_seal_endbr(s32 *start, s32 *end, struct module *mod) { } +void __init_or_module apply_seal_endbr(s32 *start, s32 *end) { } #endif /* CONFIG_X86_KERNEL_IBT */ @@ -1136,7 +1119,7 @@ static u32 decode_caller_hash(void *addr) } /* .retpoline_sites */ -static int cfi_disable_callers(s32 *start, s32 *end, struct module *mod) +static int cfi_disable_callers(s32 *start, s32 *end) { /* * Disable kCFI by patching in a JMP.d8, this leaves the hash immediate @@ -1148,23 +1131,20 @@ static int cfi_disable_callers(s32 *start, s32 *end, struct module *mod) for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr; u32 hash; addr -= fineibt_caller_size; - wr_addr = module_writable_address(mod, addr); - hash = decode_caller_hash(wr_addr); - + hash = decode_caller_hash(addr); if (!hash) /* nocfi callers */ continue; - text_poke_early(wr_addr, jmp, 2); + text_poke_early(addr, jmp, 2); } return 0; } -static int cfi_enable_callers(s32 *start, s32 *end, struct module *mod) +static int cfi_enable_callers(s32 *start, s32 *end) { /* * Re-enable kCFI, undo what cfi_disable_callers() did. 
@@ -1174,115 +1154,106 @@ static int cfi_enable_callers(s32 *start, s32 *end, struct module *mod) for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr; u32 hash; addr -= fineibt_caller_size; - wr_addr = module_writable_address(mod, addr); - hash = decode_caller_hash(wr_addr); + hash = decode_caller_hash(addr); if (!hash) /* nocfi callers */ continue; - text_poke_early(wr_addr, mov, 2); + text_poke_early(addr, mov, 2); } return 0; } /* .cfi_sites */ -static int cfi_rand_preamble(s32 *start, s32 *end, struct module *mod) +static int cfi_rand_preamble(s32 *start, s32 *end) { s32 *s; for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr = module_writable_address(mod, addr); u32 hash; - hash = decode_preamble_hash(wr_addr); + hash = decode_preamble_hash(addr); if (WARN(!hash, "no CFI hash found at: %pS %px %*ph\n", addr, addr, 5, addr)) return -EINVAL; hash = cfi_rehash(hash); - text_poke_early(wr_addr + 1, &hash, 4); + text_poke_early(addr + 1, &hash, 4); } return 0; } -static int cfi_rewrite_preamble(s32 *start, s32 *end, struct module *mod) +static int cfi_rewrite_preamble(s32 *start, s32 *end) { s32 *s; for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr = module_writable_address(mod, addr); u32 hash; - hash = decode_preamble_hash(wr_addr); + hash = decode_preamble_hash(addr); if (WARN(!hash, "no CFI hash found at: %pS %px %*ph\n", addr, addr, 5, addr)) return -EINVAL; - text_poke_early(wr_addr, fineibt_preamble_start, fineibt_preamble_size); - WARN_ON(*(u32 *)(wr_addr + fineibt_preamble_hash) != 0x12345678); - text_poke_early(wr_addr + fineibt_preamble_hash, &hash, 4); + text_poke_early(addr, fineibt_preamble_start, fineibt_preamble_size); + WARN_ON(*(u32 *)(addr + fineibt_preamble_hash) != 0x12345678); + text_poke_early(addr + fineibt_preamble_hash, &hash, 4); } return 0; } -static void cfi_rewrite_endbr(s32 *start, s32 *end, struct module *mod) +static void cfi_rewrite_endbr(s32 *start, s32 
*end) { s32 *s; for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr = module_writable_address(mod, addr); - poison_endbr(addr + 16, wr_addr + 16, false); + poison_endbr(addr+16, false); } } /* .retpoline_sites */ -static int cfi_rand_callers(s32 *start, s32 *end, struct module *mod) +static int cfi_rand_callers(s32 *start, s32 *end) { s32 *s; for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr; u32 hash; addr -= fineibt_caller_size; - wr_addr = module_writable_address(mod, addr); - hash = decode_caller_hash(wr_addr); + hash = decode_caller_hash(addr); if (hash) { hash = -cfi_rehash(hash); - text_poke_early(wr_addr + 2, &hash, 4); + text_poke_early(addr + 2, &hash, 4); } } return 0; } -static int cfi_rewrite_callers(s32 *start, s32 *end, struct module *mod) +static int cfi_rewrite_callers(s32 *start, s32 *end) { s32 *s; for (s = start; s < end; s++) { void *addr = (void *)s + *s; - void *wr_addr; u32 hash; addr -= fineibt_caller_size; - wr_addr = module_writable_address(mod, addr); - hash = decode_caller_hash(wr_addr); + hash = decode_caller_hash(addr); if (hash) { - text_poke_early(wr_addr, fineibt_caller_start, fineibt_caller_size); - WARN_ON(*(u32 *)(wr_addr + fineibt_caller_hash) != 0x12345678); - text_poke_early(wr_addr + fineibt_caller_hash, &hash, 4); + text_poke_early(addr, fineibt_caller_start, fineibt_caller_size); + WARN_ON(*(u32 *)(addr + fineibt_caller_hash) != 0x12345678); + text_poke_early(addr + fineibt_caller_hash, &hash, 4); } /* rely on apply_retpolines() */ } @@ -1291,9 +1262,8 @@ static int cfi_rewrite_callers(s32 *start, s32 *end, struct module *mod) } static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, - s32 *start_cfi, s32 *end_cfi, struct module *mod) + s32 *start_cfi, s32 *end_cfi, bool builtin) { - bool builtin = mod ? 
false : true; int ret; if (WARN_ONCE(fineibt_preamble_size != 16, @@ -1311,7 +1281,7 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, * rewrite them. This disables all CFI. If this succeeds but any of the * later stages fails, we're without CFI. */ - ret = cfi_disable_callers(start_retpoline, end_retpoline, mod); + ret = cfi_disable_callers(start_retpoline, end_retpoline); if (ret) goto err; @@ -1322,11 +1292,11 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, cfi_bpf_subprog_hash = cfi_rehash(cfi_bpf_subprog_hash); } - ret = cfi_rand_preamble(start_cfi, end_cfi, mod); + ret = cfi_rand_preamble(start_cfi, end_cfi); if (ret) goto err; - ret = cfi_rand_callers(start_retpoline, end_retpoline, mod); + ret = cfi_rand_callers(start_retpoline, end_retpoline); if (ret) goto err; } @@ -1338,7 +1308,7 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, return; case CFI_KCFI: - ret = cfi_enable_callers(start_retpoline, end_retpoline, mod); + ret = cfi_enable_callers(start_retpoline, end_retpoline); if (ret) goto err; @@ -1348,17 +1318,17 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, case CFI_FINEIBT: /* place the FineIBT preamble at func()-16 */ - ret = cfi_rewrite_preamble(start_cfi, end_cfi, mod); + ret = cfi_rewrite_preamble(start_cfi, end_cfi); if (ret) goto err; /* rewrite the callers to target func()-16 */ - ret = cfi_rewrite_callers(start_retpoline, end_retpoline, mod); + ret = cfi_rewrite_callers(start_retpoline, end_retpoline); if (ret) goto err; /* now that nobody targets func()+0, remove ENDBR there */ - cfi_rewrite_endbr(start_cfi, end_cfi, mod); + cfi_rewrite_endbr(start_cfi, end_cfi); if (builtin) pr_info("Using FineIBT CFI\n"); @@ -1377,7 +1347,7 @@ static inline void poison_hash(void *addr) *(u32 *)addr = 0; } -static void poison_cfi(void *addr, void *wr_addr) +static void poison_cfi(void *addr) { switch (cfi_mode) { case CFI_FINEIBT: @@ -1389,8 +1359,8 @@ static 
void poison_cfi(void *addr, void *wr_addr) * ud2 * 1: nop */ - poison_endbr(addr, wr_addr, false); - poison_hash(wr_addr + fineibt_preamble_hash); + poison_endbr(addr, false); + poison_hash(addr + fineibt_preamble_hash); break; case CFI_KCFI: @@ -1399,7 +1369,7 @@ static void poison_cfi(void *addr, void *wr_addr) * movl $0, %eax * .skip 11, 0x90 */ - poison_hash(wr_addr + 1); + poison_hash(addr + 1); break; default: @@ -1410,21 +1380,22 @@ static void poison_cfi(void *addr, void *wr_addr) #else static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, - s32 *start_cfi, s32 *end_cfi, struct module *mod) + s32 *start_cfi, s32 *end_cfi, bool builtin) { } #ifdef CONFIG_X86_KERNEL_IBT -static void poison_cfi(void *addr, void *wr_addr) { } +static void poison_cfi(void *addr) { } #endif #endif void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline, - s32 *start_cfi, s32 *end_cfi, struct module *mod) + s32 *start_cfi, s32 *end_cfi) { return __apply_fineibt(start_retpoline, end_retpoline, - start_cfi, end_cfi, mod); + start_cfi, end_cfi, + /* .builtin = */ false); } #ifdef CONFIG_SMP @@ -1721,16 +1692,16 @@ void __init alternative_instructions(void) paravirt_set_cap(); __apply_fineibt(__retpoline_sites, __retpoline_sites_end, - __cfi_sites, __cfi_sites_end, NULL); + __cfi_sites, __cfi_sites_end, true); /* * Rewrite the retpolines, must be done before alternatives since * those can rewrite the retpoline thunks. */ - apply_retpolines(__retpoline_sites, __retpoline_sites_end, NULL); - apply_returns(__return_sites, __return_sites_end, NULL); + apply_retpolines(__retpoline_sites, __retpoline_sites_end); + apply_returns(__return_sites, __return_sites_end); - apply_alternatives(__alt_instructions, __alt_instructions_end, NULL); + apply_alternatives(__alt_instructions, __alt_instructions_end); /* * Now all calls are established. 
Apply the call thunks if @@ -1741,7 +1712,7 @@ void __init alternative_instructions(void) /* * Seal all functions that do not have their address taken. */ - apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end, NULL); + apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end); #ifdef CONFIG_SMP /* Patch to UP if other cpus not imminent. */ diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 4dd0ad6c94d6..adb09f78edb2 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -118,13 +118,10 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code, return ret; /* replace the text with the new text */ - if (ftrace_poke_late) { + if (ftrace_poke_late) text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL); - } else { - mutex_lock(&text_mutex); - text_poke((void *)ip, new_code, MCOUNT_INSN_SIZE); - mutex_unlock(&text_mutex); - } + else + text_poke_early((void *)ip, new_code, MCOUNT_INSN_SIZE); return 0; } @@ -321,7 +318,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) unsigned const char op_ref[] = { 0x48, 0x8b, 0x15 }; unsigned const char retq[] = { RET_INSN_OPCODE, INT3_INSN_OPCODE }; union ftrace_op_code_union op_ptr; - void *ret; + int ret; if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) { start_offset = (unsigned long)ftrace_regs_caller; @@ -352,15 +349,15 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE); /* Copy ftrace_caller onto the trampoline memory */ - ret = text_poke_copy(trampoline, (void *)start_offset, size); - if (WARN_ON(!ret)) + ret = copy_from_kernel_nofault(trampoline, (void *)start_offset, size); + if (WARN_ON(ret < 0)) goto fail; ip = trampoline + size; if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) __text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE); else - text_poke_copy(ip, retq, sizeof(retq)); + memcpy(ip, retq, sizeof(retq)); /* No need to test direct calls on created trampolines 
*/ if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) { @@ -368,7 +365,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) ip = trampoline + (jmp_offset - start_offset); if (WARN_ON(*(char *)ip != 0x75)) goto fail; - if (!text_poke_copy(ip, x86_nops[2], 2)) + ret = copy_from_kernel_nofault(ip, x86_nops[2], 2); + if (ret < 0) goto fail; } @@ -381,7 +379,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) */ ptr = (unsigned long *)(trampoline + size + RET_SIZE); - text_poke_copy(ptr, &ops, sizeof(unsigned long)); + *ptr = (unsigned long)ops; op_offset -= start_offset; memcpy(&op_ptr, trampoline + op_offset, OP_REF_SIZE); @@ -397,7 +395,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) op_ptr.offset = offset; /* put in the new offset to the ftrace_ops */ - text_poke_copy(trampoline + op_offset, &op_ptr, OP_REF_SIZE); + memcpy(trampoline + op_offset, &op_ptr, OP_REF_SIZE); /* put in the call to the function */ mutex_lock(&text_mutex); @@ -407,9 +405,9 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) * the depth accounting before the call already. 
*/ dest = ftrace_ops_get_func(ops); - text_poke_copy_locked(trampoline + call_offset, - text_gen_insn(CALL_INSN_OPCODE, trampoline + call_offset, dest), - CALL_INSN_SIZE, false); + memcpy(trampoline + call_offset, + text_gen_insn(CALL_INSN_OPCODE, trampoline + call_offset, dest), + CALL_INSN_SIZE); mutex_unlock(&text_mutex); /* ALLOC_TRAMP flags lets us know we created it */ diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index 8984abd91c00..837450b6e882 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -146,21 +146,18 @@ static int __write_relocate_add(Elf64_Shdr *sechdrs, } if (apply) { - void *wr_loc = module_writable_address(me, loc); - - if (memcmp(wr_loc, &zero, size)) { + if (memcmp(loc, &zero, size)) { pr_err("x86/modules: Invalid relocation target, existing value is nonzero for type %d, loc %p, val %Lx\n", (int)ELF64_R_TYPE(rel[i].r_info), loc, val); return -ENOEXEC; } - write(wr_loc, &val, size); + write(loc, &val, size); } else { if (memcmp(loc, &val, size)) { pr_warn("x86/modules: Invalid relocation target, existing value does not match expected value for type %d, loc %p, val %Lx\n", (int)ELF64_R_TYPE(rel[i].r_info), loc, val); return -ENOEXEC; } - /* FIXME: needs care for ROX module allocations */ write(loc, &zero, size); } } @@ -227,7 +224,7 @@ int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *me) { - const Elf_Shdr *s, *alt = NULL, + const Elf_Shdr *s, *alt = NULL, *locks = NULL, *orc = NULL, *orc_ip = NULL, *retpolines = NULL, *returns = NULL, *ibt_endbr = NULL, *calls = NULL, *cfi = NULL; @@ -236,6 +233,8 @@ int module_finalize(const Elf_Ehdr *hdr, for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) { if (!strcmp(".altinstructions", secstrings + s->sh_name)) alt = s; + if (!strcmp(".smp_locks", secstrings + s->sh_name)) + locks = s; if (!strcmp(".orc_unwind", secstrings + s->sh_name)) orc = s; if (!strcmp(".orc_unwind_ip", secstrings + s->sh_name)) @@ -266,20 +265,20 @@ int 
module_finalize(const Elf_Ehdr *hdr, csize = cfi->sh_size; } - apply_fineibt(rseg, rseg + rsize, cseg, cseg + csize, me); + apply_fineibt(rseg, rseg + rsize, cseg, cseg + csize); } if (retpolines) { void *rseg = (void *)retpolines->sh_addr; - apply_retpolines(rseg, rseg + retpolines->sh_size, me); + apply_retpolines(rseg, rseg + retpolines->sh_size); } if (returns) { void *rseg = (void *)returns->sh_addr; - apply_returns(rseg, rseg + returns->sh_size, me); + apply_returns(rseg, rseg + returns->sh_size); } if (alt) { /* patch .altinstructions */ void *aseg = (void *)alt->sh_addr; - apply_alternatives(aseg, aseg + alt->sh_size, me); + apply_alternatives(aseg, aseg + alt->sh_size); } if (calls || alt) { struct callthunk_sites cs = {}; @@ -298,28 +297,8 @@ int module_finalize(const Elf_Ehdr *hdr, } if (ibt_endbr) { void *iseg = (void *)ibt_endbr->sh_addr; - apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size, me); + apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size); } - - if (orc && orc_ip) - unwind_module_init(me, (void *)orc_ip->sh_addr, orc_ip->sh_size, - (void *)orc->sh_addr, orc->sh_size); - - return 0; -} - -int module_post_finalize(const Elf_Ehdr *hdr, - const Elf_Shdr *sechdrs, - struct module *me) -{ - const Elf_Shdr *s, *locks = NULL; - char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; - - for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) { - if (!strcmp(".smp_locks", secstrings + s->sh_name)) - locks = s; - } - if (locks) { void *lseg = (void *)locks->sh_addr; void *text = me->mem[MOD_TEXT].base; @@ -329,6 +308,10 @@ int module_post_finalize(const Elf_Ehdr *hdr, text, text_end); } + if (orc && orc_ip) + unwind_module_init(me, (void *)orc_ip->sh_addr, orc_ip->sh_size, + (void *)orc->sh_addr, orc->sh_size); + return 0; } From patchwork Sun Jan 26 07:47:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Rapoport X-Patchwork-Id: 13950609 Return-Path: 
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v3 8/9] module: drop unused module_writable_address()
Date: Sun, 26 Jan 2025 09:47:32 +0200
Message-ID: <20250126074733.1384926-9-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

module_writable_address() is unused and can be removed.

Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/module.h | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index 6a24e9395cb2..d2cf30be10cc 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -768,11 +768,6 @@ static inline bool is_livepatch_module(struct module *mod)

 void set_module_sig_enforced(void);

-static inline void *module_writable_address(struct module *mod, void *loc)
-{
-	return loc;
-}
-
 #else /* !CONFIG_MODULES... */

 static inline struct module *__module_address(unsigned long addr)
@@ -880,11 +875,6 @@ static inline bool module_is_coming(struct module *mod)
 {
 	return false;
 }
-
-static inline void *module_writable_address(struct module *mod, void *loc)
-{
-	return loc;
-}
 #endif /* CONFIG_MODULES */

 #ifdef CONFIG_SYSFS

From patchwork Sun Jan 26 07:47:33 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13950632
From: Mike Rapoport
To: x86@kernel.org
Subject: [PATCH v3 9/9] x86: re-enable EXECMEM_ROX support
Date: Sun, 26 Jan 2025 09:47:33 +0200
Message-ID: <20250126074733.1384926-10-rppt@kernel.org>
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

Re-enable EXECMEM_ROX support after the rework of the execmem ROX caches.

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ef6cfea9df73..9d7bd0ae48c4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -83,6 +83,7 @@ config X86
 	select ARCH_HAS_DMA_OPS			if GART_IOMMU || XEN
 	select ARCH_HAS_EARLY_DEBUG		if KGDB
 	select ARCH_HAS_ELF_RANDOMIZE
+	select ARCH_HAS_EXECMEM_ROX		if X86_64
 	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL