From patchwork Sat Feb 11 03:39:42 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13136877
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 1/7] mm: Convert page_table_check_pte_set() to page_table_check_ptes_set()
Date: Sat, 11 Feb 2023 03:39:42 +0000
Message-Id: <20230211033948.891959-2-willy@infradead.org>
In-Reply-To: <20230211033948.891959-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>

Tell the page table check how many PTEs & PFNs we want it to check.
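
For illustration only (this snippet is not part of the patch): with the
extra "nr" argument, the old single-PTE behaviour is simply the nr == 1
case of the new interface.  The wrapper name below is hypothetical; only
page_table_check_ptes_set() itself comes from this series.

/*
 * Hypothetical sketch: the old one-PTE check expressed via the new
 * batched call.  The real conversions are in the diff below.
 */
static inline void example_check_one_pte(struct mm_struct *mm,
		unsigned long addr, pte_t *ptep, pte_t pte)
{
	/* nr == 1: check exactly one PTE/PFN, matching the old call */
	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
}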
Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/arm64/include/asm/pgtable.h | 2 +-
 arch/riscv/include/asm/pgtable.h | 2 +-
 arch/x86/include/asm/pgtable.h   | 2 +-
 include/linux/page_table_check.h | 14 +++++++-------
 mm/page_table_check.c            | 14 ++++++++------
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b6ba466e2e8a..69765dc697af 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -358,7 +358,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	return __set_pte_at(mm, addr, ptep, pte);
 }
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 2d92de386837..13222fd5c4b4 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -459,7 +459,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 static inline void set_pte_at(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep, pte_t pteval)
 {
-	page_table_check_pte_set(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
 	__set_pte_at(mm, addr, ptep, pteval);
 }
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7425f32e5293..84be3e07b112 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1022,7 +1022,7 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	set_pte(ptep, pte);
 }
 
diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 01e16c7696ec..ba269c7009e4 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -20,8 +20,8 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
 				  pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 				  pud_t pud);
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte);
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr);
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
@@ -73,14 +73,14 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 	__page_table_check_pud_clear(mm, addr, pud);
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;
 
-	__page_table_check_pte_set(mm, addr, ptep, pte);
+	__page_table_check_ptes_set(mm, addr, ptep, pte, nr);
 }
 
 static inline void page_table_check_pmd_set(struct mm_struct *mm,
@@ -138,9 +138,9 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 {
 }
 
-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 }
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 25d8610c0042..e6f4d40caaa2 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -184,20 +184,22 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);
 
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte)
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+
 	if (&init_mm == mm)
 		return;
 
-	__page_table_check_pte_clear(mm, addr, *ptep);
+	for (i = 0; i < nr; i++)
+		__page_table_check_pte_clear(mm, addr, ptep[i]);
 	if (pte_user_accessible_page(pte)) {
-		page_table_check_set(mm, addr, pte_pfn(pte),
-				     PAGE_SIZE >> PAGE_SHIFT,
+		page_table_check_set(mm, addr, pte_pfn(pte), nr,
 				     pte_write(pte));
 	}
 }
-EXPORT_SYMBOL(__page_table_check_pte_set);
+EXPORT_SYMBOL(__page_table_check_ptes_set);
 
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)

From patchwork Sat Feb 11 03:39:43 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13136878
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 2/7] mm: Add generic flush_icache_pages() and documentation
Date: Sat, 11 Feb 2023 03:39:43 +0000
Message-Id: <20230211033948.891959-3-willy@infradead.org>
In-Reply-To: <20230211033948.891959-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().
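
As a rough sketch of what the range version buys an architecture (not
taken from this patch): a port whose icache maintenance already works one
page at a time could implement flush_icache_pages() as a loop over its
existing per-page primitive.  arch_flush_one_icache_page() below is an
invented placeholder, not a real kernel function.

/*
 * Hypothetical sketch only: loop a per-page icache flush over 'nr' pages.
 */
static inline void example_flush_icache_pages(struct vm_area_struct *vma,
		struct page *page, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		arch_flush_one_icache_page(vma, page + i);	/* placeholder */
}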
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/core-api/cachetlb.rst | 35 +++++++++++++++--------------
 include/asm-generic/cacheflush.h    |  5 +++++
 2 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 5c0552e78c58..d4c9e2a28d36 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -88,13 +88,13 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_area_struct *vma,
+   unsigned long address, pte_t *ptep, unsigned int nr)``
 
-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exists
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
@@ -306,17 +306,18 @@ maps this page at its virtual address.
 	private".  The kernel guarantees that, for pagecache pages, it will
 	clear this bit when such a page first enters the pagecache.
 
-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page.  See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently.  It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit.  Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit.  Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.
 
 	.. important::
 
@@ -369,7 +370,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache. In the future, the hope
+	flush_dcache_page and update_mmu_cache_range. In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index f46258d1a080..09d51a680765 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif
 
 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
 				     struct page *page)
 {

From patchwork Sat Feb 11 03:39:44 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13136880
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 3/7] mm: Add folio_flush_mapping()
Date: Sat, 11 Feb 2023 03:39:44 +0000
Message-Id: <20230211033948.891959-4-willy@infradead.org>
In-Reply-To: <20230211033948.891959-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>

This is the folio equivalent of page_mapping_file(), but rename it to
make it clear that it's very different from page_file_mapping().
Theoretically, there's nothing flush-only about it, but there are no
other users today, and I doubt there will be; it's almost always more
useful to know the swapfile's mapping or the swapcache's mapping.
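
To make the intended call site concrete, here is a hedged sketch (not
from the patch) of how architecture cache-flushing code might use the
helper; the surrounding function is hypothetical and the real flush
logic is elided.

/*
 * Hypothetical arch flush routine built on folio_flush_mapping().  Per
 * the helper's kernel-doc, folios with no flush-relevant file mapping
 * return NULL and can be skipped here.
 */
static void example_arch_flush_dcache_folio(struct folio *folio)
{
	struct address_space *mapping = folio_flush_mapping(folio);

	if (!mapping)
		return;		/* nothing to defer a flush against */

	/* ... architecture-specific aliasing checks and flushes ... */
}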
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 51b75b89730e..647c5a036a97 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -369,6 +369,26 @@ static inline struct address_space *folio_file_mapping(struct folio *folio)
 	return folio->mapping;
 }
 
+/**
+ * folio_flush_mapping - Find the file mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Anonymous folios return NULL, even if they're in
+ * the swap cache.  Other kinds of folio also return NULL.
+ *
+ * This is ONLY used by architecture cache flushing code.  If you aren't
+ * writing cache flushing code, you want either folio_mapping() or
+ * folio_file_mapping().
+ */
+static inline struct address_space *folio_flush_mapping(struct folio *folio)
+{
+	if (unlikely(folio_test_swapcache(folio)))
+		return swapcache_mapping(folio);
+
+	return folio->mapping;
+}
+
 static inline struct address_space *page_file_mapping(struct page *page)
 {
 	return folio_file_mapping(page_folio(page));
@@ -379,11 +399,7 @@ static inline struct address_space *page_file_mapping(struct page *page)
  */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-
-	if (unlikely(folio_test_swapcache(folio)))
-		return NULL;
-	return folio_mapping(folio);
+	return folio_flush_mapping(page_folio(page));
 }
 
 /**

From patchwork Sat Feb 11 03:39:45 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13136882
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 4/7] mm: Remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
Date: Sat, 11 Feb 2023 03:39:45 +0000
Message-Id: <20230211033948.891959-5-willy@infradead.org>
In-Reply-To: <20230211033948.891959-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>

Current best practice is to reuse the name of the function as a define
to indicate that the function is implemented by the architecture.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/cacheflush.h | 4 ++--
 mm/util.c                  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index a6189d21f2ba..82136f3fcf54 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -7,14 +7,14 @@ struct folio;
 
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
 #endif /* _LINUX_CACHEFLUSH_H */
diff --git a/mm/util.c b/mm/util.c
index cec9327b27b4..39ea7af8171c 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1124,7 +1124,7 @@ void page_offline_end(void)
 }
 EXPORT_SYMBOL(page_offline_end);
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
 	long i, nr = folio_nr_pages(folio);

From patchwork Sat Feb 11 03:39:46 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13136881
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 5/7] alpha: Implement the new page table range API
Date: Sat, 11 Feb 2023 03:39:46 +0000
Message-Id: <20230211033948.891959-6-willy@infradead.org>
In-Reply-To: <20230211033948.891959-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_icache_pages().

Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/alpha/include/asm/cacheflush.h | 10 ++++++++++
 arch/alpha/include/asm/pgtable.h    | 18 +++++++++++++++++-
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 9945ff483eaf..3956460e69e2 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -57,6 +57,16 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_page(vma, page) \
 	flush_icache_user_page((vma), (page), 0, 0)
 
+/*
+ * Both implementations of flush_icache_user_page flush the entire
+ * address space, so one call, no matter how many pages.
+ */
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+		struct page *page, unsigned int nr)
+{
+	flush_icache_user_page(vma, page, 0, 0);
+}
+
 #include
 
 #endif /* _ALPHA_CACHEFLUSH_H */
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index ba43cb841d19..1e3354e9731b 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -26,7 +26,18 @@ struct vm_area_struct;
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1UL << 32;
+	}
+}
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT-3))
@@ -303,6 +314,11 @@ extern inline void update_mmu_cache(struct vm_area_struct * vma,
 {
 }
 
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
+{
+}
+
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
  * are !pte_none() && !pte_present().
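
For orientation, a hedged sketch (not part of this series) of the kind of
caller these per-architecture set_ptes() implementations are meant to
serve: installing "nr" consecutive PTEs for one folio with a single call.
The surrounding function is hypothetical, and locking and rmap/accounting
work are deliberately elided.

/*
 * Hypothetical caller sketch: map 'nr' consecutive pages of a folio in
 * one call instead of looping over set_pte_at().
 */
static void example_map_folio_range(struct vm_area_struct *vma,
		unsigned long addr, struct folio *folio,
		pte_t *ptep, unsigned int nr)
{
	pte_t pte = mk_pte(folio_page(folio, 0), vma->vm_page_prot);

	flush_icache_pages(vma, folio_page(folio, 0), nr);
	set_ptes(vma->vm_mm, addr, ptep, pte, nr);
	update_mmu_cache_range(vma, addr, ptep, nr);
}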
From patchwork Sat Feb 11 03:39:47 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13136879
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 6/7] arc: Implement the new page table range API
Date: Sat, 11 Feb 2023 03:39:47 +0000
Message-Id: <20230211033948.891959-7-willy@infradead.org>
In-Reply-To: <20230211033948.891959-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

This is a fairly deep change.  The PG_dc_clean flag changes from being a
per-page bit to being a per-folio bit (which means it cannot always be
set as we don't know that all pages in this folio were cleaned).  The
internal flush routines are enhanced to take the number of pages to
flush.
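
A simplified, hedged sketch (names and adjustments simplified, not the
patch itself) of the per-folio flag pattern this commit message
describes: the clean bit is tracked once per folio, so a flush triggered
through any constituent page has to cover every page of the folio.  The
real code in the diff below also rewinds paddr/vaddr to the start of the
folio; that step is omitted here.

/*
 * Illustration only.  PG_dc_clean is ARC-specific; this helper is
 * hypothetical and shows the test_and_set + whole-folio flush shape.
 */
static void example_sync_folio_dcache(struct folio *folio,
		phys_addr_t paddr, unsigned long vaddr)
{
	/* The branch is taken only while the folio was still marked dirty */
	if (!test_and_set_bit(PG_dc_clean, &folio->flags)) {
		unsigned int nr = folio_nr_pages(folio);

		__flush_dcache_pages(paddr, vaddr, nr);	/* flush all pages */
	}
}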
Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/arc/include/asm/cacheflush.h         |  7 ++-
 arch/arc/include/asm/pgtable-bits-arcv2.h | 20 ++++++--
 arch/arc/mm/cache.c                       | 61 ++++++++++++++---------
 arch/arc/mm/tlb.c                         | 18 ++++---
 4 files changed, 68 insertions(+), 38 deletions(-)

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index e201b4b1655a..04f65f588510 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -25,17 +25,20 @@
  * in update_mmu_cache()
  */
 #define flush_icache_page(vma, page)
+#define flush_icache_pages(vma, page, nr)
 
 void flush_cache_all(void);
 
 void flush_icache_range(unsigned long kstart, unsigned long kend);
 void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len);
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr);
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr);
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 
 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
 
 void dma_cache_wback_inv(phys_addr_t start, unsigned long sz);
 void dma_cache_inv(phys_addr_t start, unsigned long sz);
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 6e9f8ca6d6a1..4a1b2ce204c6 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -100,14 +100,24 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	set_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		      pte_t *ptep, unsigned int nr);
+
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
index 55c6de138eae..3c16ee942a5c 100644
--- a/arch/arc/mm/cache.c
+++ b/arch/arc/mm/cache.c
@@ -752,17 +752,17 @@ static inline void arc_slc_enable(void)
  * There's a corollary case, where kernel READs from a userspace mapped page.
  * If the U-mapping is not congruent to K-mapping, former needs flushing.
  */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;
 
 	if (!cache_is_vipt_aliasing()) {
-		clear_bit(PG_dc_clean, &page->flags);
+		clear_bit(PG_dc_clean, &folio->flags);
 		return;
 	}
 
 	/* don't handle anon pages here */
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 	if (!mapping)
 		return;
 
@@ -771,17 +771,27 @@ void flush_dcache_page(struct page *page)
 	 * Make a note that K-mapping is dirty
 	 */
 	if (!mapping_mapped(mapping)) {
-		clear_bit(PG_dc_clean, &page->flags);
-	} else if (page_mapcount(page)) {
-
+		clear_bit(PG_dc_clean, &folio->flags);
+	} else if (folio_mapped(folio)) {
 		/* kernel reading from page with U-mapping */
-		phys_addr_t paddr = (unsigned long)page_address(page);
-		unsigned long vaddr = page->index << PAGE_SHIFT;
+		phys_addr_t paddr = (unsigned long)folio_address(folio);
+		unsigned long vaddr = folio_pos(folio);
 
+		/*
+		 * vaddr is not actually the virtual address, but is
+		 * congruent to every user mapping.
+		 */
 		if (addr_not_cache_congruent(paddr, vaddr))
-			__flush_dcache_page(paddr, vaddr);
+			__flush_dcache_pages(paddr, vaddr,
+						folio_nr_pages(folio));
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	return flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);
 
 /*
@@ -921,18 +931,18 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len)
 }
 
 /* wrapper to compile time eliminate alignment checks in flush loop */
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr)
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-	__ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE);
+	__ic_line_inv_vaddr(paddr, vaddr, nr * PAGE_SIZE);
 }
 
 /*
 * wrapper to clearout kernel or userspace mappings of a page
 * For kernel mappings @vaddr == @paddr
 */
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr)
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-	__dc_line_op(paddr, vaddr & PAGE_MASK, PAGE_SIZE, OP_FLUSH_N_INV);
+	__dc_line_op(paddr, vaddr & PAGE_MASK, nr * PAGE_SIZE, OP_FLUSH_N_INV);
 }
 
 noinline void flush_cache_all(void)
@@ -962,10 +972,10 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr,
 
 	u_vaddr &= PAGE_MASK;
 
-	__flush_dcache_page(paddr, u_vaddr);
+	__flush_dcache_pages(paddr, u_vaddr, 1);
 
 	if (vma->vm_flags & VM_EXEC)
-		__inv_icache_page(paddr, u_vaddr);
+		__inv_icache_pages(paddr, u_vaddr, 1);
 }
 
 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
@@ -978,9 +988,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
 		     unsigned long u_vaddr)
 {
 	/* TBD: do we really need to clear the kernel mapping */
-	__flush_dcache_page((phys_addr_t)page_address(page), u_vaddr);
-	__flush_dcache_page((phys_addr_t)page_address(page),
-			    (phys_addr_t)page_address(page));
+	__flush_dcache_pages((phys_addr_t)page_address(page), u_vaddr, 1);
+	__flush_dcache_pages((phys_addr_t)page_address(page),
+			    (phys_addr_t)page_address(page), 1);
 
 }
 
@@ -989,6 +999,8 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
 void copy_user_highpage(struct page *to, struct page *from,
 	unsigned long u_vaddr, struct vm_area_struct *vma)
 {
+	struct folio *src = page_folio(from);
+	struct folio *dst = page_folio(to);
 	void *kfrom = kmap_atomic(from);
 	void *kto = kmap_atomic(to);
 	int clean_src_k_mappings = 0;
@@ -1005,7 +1017,7 @@ void copy_user_highpage(struct page *to, struct page *from,
	 * addr_not_cache_congruent() is 0
	 */
 	if (page_mapcount(from) && addr_not_cache_congruent(kfrom, u_vaddr)) {
-		__flush_dcache_page((unsigned long)kfrom, u_vaddr);
+		__flush_dcache_pages((unsigned long)kfrom, u_vaddr, 1);
 		clean_src_k_mappings = 1;
 	}
 
@@ -1019,17 +1031,17 @@ void copy_user_highpage(struct page *to, struct page *from,
 	 * non copied user pages (e.g. read faults which wire in pagecache page
 	 * directly).
 	 */
-	clear_bit(PG_dc_clean, &to->flags);
+	clear_bit(PG_dc_clean, &dst->flags);
 
 	/*
 	 * if SRC was already usermapped and non-congruent to kernel mapping
 	 * sync the kernel mapping back to physical page
 	 */
 	if (clean_src_k_mappings) {
-		__flush_dcache_page((unsigned long)kfrom, (unsigned long)kfrom);
-		set_bit(PG_dc_clean, &from->flags);
+		__flush_dcache_pages((unsigned long)kfrom,
+					(unsigned long)kfrom, 1);
 	} else {
-		clear_bit(PG_dc_clean, &from->flags);
+		clear_bit(PG_dc_clean, &src->flags);
 	}
 
 	kunmap_atomic(kto);
@@ -1038,8 +1050,9 @@ void copy_user_highpage(struct page *to, struct page *from,
 
 void clear_user_page(void *to, unsigned long u_vaddr, struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	clear_page(to);
-	clear_bit(PG_dc_clean, &page->flags);
+	clear_bit(PG_dc_clean, &folio->flags);
 }
 EXPORT_SYMBOL(clear_user_page);
 
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 5f71445f26bd..0a996b65bb4e 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -467,8 +467,8 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)
 * Note that flush (when done) involves both WBACK - so physical page is
 * in sync as well as INV - so any non-congruent aliases don't remain
 */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
-		      pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr)
 {
 	unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
 	phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS;
@@ -491,15 +491,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
 	 */
 	if ((vma->vm_flags & VM_EXEC) ||
 	     addr_not_cache_congruent(paddr, vaddr)) {
-
-		int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);
+		struct folio *folio = page_folio(page);
+		int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);
 		if (dirty) {
+			unsigned long offset = offset_in_folio(folio, paddr);
+			nr = folio_nr_pages(folio);
+			paddr -= offset;
+			vaddr -= offset;
 			/* wback + inv dcache lines (K-mapping) */
-			__flush_dcache_page(paddr, paddr);
+			__flush_dcache_pages(paddr, paddr, nr);
 
 			/* invalidate any existing icache lines (U-mapping) */
 			if (vma->vm_flags & VM_EXEC)
-				__inv_icache_page(paddr, vaddr);
+				__inv_icache_pages(paddr, vaddr, nr);
 		}
 	}
 }
@@ -531,7 +535,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 				 pmd_t *pmd)
 {
 	pte_t pte = __pte(pmd_val(*pmd));
-	update_mmu_cache(vma, addr, &pte);
+	update_mmu_cache_range(vma, addr, &pte, HPAGE_PMD_NR);
 }
 
 void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,

From patchwork Sat Feb 11 03:39:48 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13136884
kanga.kvack.org (Postfix) id 8EA7E6B0081; Fri, 10 Feb 2023 22:40:17 -0500 (EST)
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 7/7] x86: Implement the new page table range API Date: Sat, 11 Feb 2023 03:39:48 +0000 Message-Id: <20230211033948.891959-8-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230211033948.891959-1-willy@infradead.org> References: <20230211033948.891959-1-willy@infradead.org>

Convert set_pte_at() into set_ptes() and add a noop update_mmu_cache_range(). Signed-off-by: Matthew Wilcox (Oracle) --- arch/x86/include/asm/pgtable.h | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 84be3e07b112..f424371ea143 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -1019,13 +1019,22 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp) return res; } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - set_pte(ptep, pte); + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + PAGE_SIZE); + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd) { @@ -1291,6 +1300,10 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { } +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) +{ +} static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) {
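To see what the new calling convention means in practice, here is a minimal, self-contained C model of the set_ptes() loop above (for illustration only, not part of the patch): the caller passes the PTE for the first page, and nr consecutive entries are written, each pointing one PAGE_SIZE further into the physical range, so set_pte_at() is simply the nr == 1 case. The names toy_pte_t, toy_set_ptes() and table[] are invented for this sketch and are not kernel interfaces.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

typedef uint64_t toy_pte_t;	/* stand-in for pte_t: upper bits hold the physical address */

/* Write "nr" consecutive PTEs, advancing the physical address by one page each
 * time, mirroring the for (;;) loop in the x86 set_ptes() above. */
static void toy_set_ptes(toy_pte_t *ptep, toy_pte_t pte, unsigned int nr)
{
	for (;;) {
		*ptep = pte;
		if (--nr == 0)
			break;
		ptep++;
		pte += PAGE_SIZE;	/* like __pte(pte_val(pte) + PAGE_SIZE) */
	}
}

int main(void)
{
	toy_pte_t table[8] = { 0 };
	toy_pte_t first = (0x40000UL << PAGE_SHIFT) | 0x3;	/* arbitrary address + flag bits */
	unsigned int i;

	toy_set_ptes(&table[2], first, 4);	/* map 4 consecutive pages in one call */

	for (i = 0; i < 4; i++)
		assert(table[2 + i] == first + (toy_pte_t)i * PAGE_SIZE);
	printf("wrote 4 PTEs, each %lu bytes further into the physical range\n", PAGE_SIZE);
	return 0;
}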
From patchwork Mon Feb 13 21:04:07 2023
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 8/7] arm: Implement the new page table range API Date: Mon, 13 Feb 2023 21:04:07 +0000 Message-Id: <20230213210407.1452946-1-willy@infradead.org> X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230211033948.891959-1-willy@infradead.org> References: <20230211033948.891959-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). The PG_dcache_clean flag changes from being a per-page bit to being a per-folio bit, which makes __dma_page_dev_to_cpu() a bit more exciting. It also makes sense to add flush_cache_pages(), even though this isn't used by generic code (yet?)
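Before the diff itself, a short sketch (for illustration only; it assumes nothing beyond the hook names and argument lists used in this series) of how generic code is expected to drive the new range API: one call per folio to flush_icache_pages(), set_ptes() and update_mmu_cache_range(), rather than one call per page. The stub types, the counters and the toy main() below are all invented for the sketch.

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Invented stand-ins so the sketch compiles on its own. */
struct mm_struct { int unused; };
struct vm_area_struct { struct mm_struct *vm_mm; };
struct page { int unused; };
typedef unsigned long pte_t;

static unsigned int icache_ops, pte_writes, mmu_cache_updates;

/* Toy bodies for the three range hooks this series adds; they only count calls. */
static void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
			       unsigned int nr)
{
	(void)vma; (void)page; (void)nr;
	icache_ops++;			/* one maintenance call for the whole range */
}

static void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
		     pte_t pte, unsigned int nr)
{
	(void)mm; (void)addr;
	for (;;) {
		*ptep = pte;
		pte_writes++;
		if (--nr == 0)
			break;
		ptep++;
		pte += PAGE_SIZE;
	}
}

static void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
				   pte_t *ptep, unsigned int nr)
{
	(void)vma; (void)addr; (void)ptep; (void)nr;
	mmu_cache_updates++;		/* once per range instead of once per page */
}

int main(void)
{
	struct mm_struct mm = { 0 };
	struct vm_area_struct vma = { &mm };
	struct page dummy_page = { 0 };
	pte_t ptes[16] = { 0 };
	unsigned int nr = 8;		/* e.g. an order-3 folio */

	/* One call per hook for the whole folio, rather than 8 calls to each. */
	flush_icache_pages(&vma, &dummy_page, nr);
	set_ptes(&mm, 0x100000UL, &ptes[0], (0x1234UL << 12) | 0x3UL, nr);
	update_mmu_cache_range(&vma, 0x100000UL, &ptes[0], nr);

	printf("icache ops: %u, pte writes: %u, mmu-cache updates: %u\n",
	       icache_ops, pte_writes, mmu_cache_updates);
	return 0;
}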
Signed-off-by: Matthew Wilcox (Oracle) --- arch/arm/include/asm/cacheflush.h | 24 ++++--- arch/arm/include/asm/pgtable.h | 5 +- arch/arm/include/asm/tlbflush.h | 13 ++-- arch/arm/mm/copypage-v4mc.c | 5 +- arch/arm/mm/copypage-v6.c | 5 +- arch/arm/mm/copypage-xscale.c | 5 +- arch/arm/mm/dma-mapping.c | 24 +++---- arch/arm/mm/fault-armv.c | 14 ++--- arch/arm/mm/flush.c | 101 ++++++++++++++++++------------ arch/arm/mm/mm.h | 2 +- arch/arm/mm/mmu.c | 14 +++-- 11 files changed, 127 insertions(+), 85 deletions(-) diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index a094f964c869..841e268d2374 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -231,14 +231,15 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned vma->vm_flags); } -static inline void -vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +static inline void vivt_flush_cache_pages(struct vm_area_struct *vma, + unsigned long user_addr, unsigned long pfn, unsigned int nr) { struct mm_struct *mm = vma->vm_mm; if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) { unsigned long addr = user_addr & PAGE_MASK; - __cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags); + __cpuc_flush_user_range(addr, addr + nr * PAGE_SIZE, + vma->vm_flags); } } @@ -247,15 +248,17 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig vivt_flush_cache_mm(mm) #define flush_cache_range(vma,start,end) \ vivt_flush_cache_range(vma,start,end) -#define flush_cache_page(vma,addr,pfn) \ - vivt_flush_cache_page(vma,addr,pfn) +#define flush_cache_pages(vma, addr, pfn, nr) \ + vivt_flush_cache_pages(vma, addr, pfn, nr) #else -extern void flush_cache_mm(struct mm_struct *mm); -extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); -extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn); +void flush_cache_mm(struct mm_struct *mm); +void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, + unsigned long pfn, unsigned int nr); #endif #define flush_cache_dup_mm(mm) flush_cache_mm(mm) +#define flush_cache_page(vma, addr, pfn) flush_cache_pages(vma, addr, pfn, 1) /* * flush_icache_user_range is used when we want to ensure that the @@ -289,7 +292,9 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr * See update_mmu_cache for the user space part. */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -extern void flush_dcache_page(struct page *); +void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1 static inline void flush_kernel_vmap_range(void *addr, int size) @@ -321,6 +326,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma, * duplicate cache flushing elsewhere performed by flush_dcache_page(). 
*/ #define flush_icache_page(vma,page) do { } while (0) +#define flush_icache_pages(vma, page, nr) do { } while (0) /* * flush_cache_vmap() is used when creating mappings (eg, via vmap, diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index a58ccbb406ad..6525ac82bd50 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -207,8 +207,9 @@ static inline void __sync_icache_dcache(pte_t pteval) extern void __sync_icache_dcache(pte_t pteval); #endif -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval); +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr); +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) { diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index 0ccc985b90af..7d792e485f4f 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -619,18 +619,21 @@ extern void flush_bp_all(void); * If PG_dcache_clean is not set for the page, we need to ensure that any * cache entries for the kernels virtual memory range are written * back to the page. On ARMv6 and later, the cache coherency is handled via - * the set_pte_at() function. + * the set_ptes() function. */ #if __LINUX_ARM_ARCH__ < 6 -extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr); #else -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { } #endif +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #endif diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c index f1da3b439b96..7ddd82b9fe8b 100644 --- a/arch/arm/mm/copypage-v4mc.c +++ b/arch/arm/mm/copypage-v4mc.c @@ -64,10 +64,11 @@ static void mc_copy_user_page(void *from, void *to) void v4_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c index d8a115de5507..a1a71f36d850 100644 --- a/arch/arm/mm/copypage-v6.c +++ b/arch/arm/mm/copypage-v6.c @@ -69,11 +69,12 @@ static void discard_old_kernel_data(void *kto) static void v6_copy_user_highpage_aliasing(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); unsigned int offset = CACHE_COLOUR(vaddr); unsigned long kfrom, kto; - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); /* FIXME: not highmem safe */ discard_old_kernel_data(page_address(to)); diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c index 
bcb485620a05..f1e29d3e8193 100644 --- a/arch/arm/mm/copypage-xscale.c +++ b/arch/arm/mm/copypage-xscale.c @@ -84,10 +84,11 @@ static void mc_copy_user_page(void *from, void *to) void xscale_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 8bc01071474a..5ecfde41d70a 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -693,6 +693,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { + struct folio *folio = page_folio(page); phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ @@ -707,19 +708,18 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, * Mark the D-cache clean for these pages to avoid extra flushing. */ if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) { - unsigned long pfn; - size_t left = size; - - pfn = page_to_pfn(page) + off / PAGE_SIZE; - off %= PAGE_SIZE; - if (off) { - pfn++; - left -= PAGE_SIZE - off; + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); } - while (left >= PAGE_SIZE) { - page = pfn_to_page(pfn++); - set_bit(PG_dcache_clean, &page->flags); - left -= PAGE_SIZE; + + while (left >= (ssize_t)folio_size(folio)) { + set_bit(PG_dcache_clean, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } } diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c index 0e49154454a6..e2c869b8f012 100644 --- a/arch/arm/mm/fault-armv.c +++ b/arch/arm/mm/fault-armv.c @@ -178,8 +178,8 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma, * * Note that the pte lock will be held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); struct address_space *mapping; @@ -192,13 +192,13 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. 
*/ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + mapping = folio_flush_mapping(page); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (mapping) { if (cache_is_vivt()) make_coherent(mapping, vma, addr, ptep, pfn); diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c index 7ff9feea13a6..b56a65626798 100644 --- a/arch/arm/mm/flush.c +++ b/arch/arm/mm/flush.c @@ -95,10 +95,10 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned __flush_icache_all(); } -void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr) { if (cache_is_vivt()) { - vivt_flush_cache_page(vma, user_addr, pfn); + vivt_flush_cache_pages(vma, user_addr, pfn, nr); return; } @@ -196,29 +196,31 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, #endif } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping. */ - if (!PageHighMem(page)) { - __cpuc_flush_dcache_area(page_address(page), page_size(page)); + if (!folio_test_highmem(folio)) { + __cpuc_flush_dcache_area(folio_address(folio), + folio_size(folio)); } else { unsigned long i; if (cache_is_vipt_nonaliasing()) { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_atomic(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_local_folio(folio, + i * PAGE_SIZE); __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_atomic(addr); + kunmap_local(addr); } } else { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_high_get(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_high_get(folio_page(folio, i)); if (addr) { __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_high(page + i); + kunmap_high(folio_page(folio, i)); } } } @@ -230,15 +232,14 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page) * userspace colour, which is congruent with page->index. */ if (mapping && cache_is_vipt_aliasing()) - flush_pfn_alias(page_to_pfn(page), - page->index << PAGE_SHIFT); + flush_pfn_alias(folio_pfn(folio), folio_pos(folio)); } -static void __flush_dcache_aliases(struct address_space *mapping, struct page *page) +static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; - pgoff_t pgoff; + struct vm_area_struct *vma; + pgoff_t pgoff, pgoff_end; /* * There are possible user space mappings of this page: @@ -246,21 +247,38 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p * data in the current VM view associated with this page. * - aliasing VIPT: we only need to find one mapping of this page. 
*/ - pgoff = page->index; + pgoff = folio->index; + pgoff_end = pgoff + folio_nr_pages(folio) - 1; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff_end) { + long offset; + unsigned long start, pfn; + unsigned int nr; /* * If this VMA is not in our MM, we can ignore it. */ - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page)); + + start = vma->vm_start; + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + offset = pgoff - vma->vm_pgoff; + if (offset < 0) { + pfn -= offset; + nr += offset; + start -= offset * PAGE_SIZE; + } else { + start += offset * PAGE_SIZE; + } + if (start + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - start) / PAGE_SIZE; + + flush_cache_pages(vma, start, pfn, nr); } flush_dcache_mmap_unlock(mapping); } @@ -269,7 +287,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p void __sync_icache_dcache(pte_t pteval) { unsigned long pfn; - struct page *page; + struct folio *folio; struct address_space *mapping; if (cache_is_vipt_nonaliasing() && !pte_exec(pteval)) @@ -279,14 +297,14 @@ void __sync_icache_dcache(pte_t pteval) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); + folio = page_folio(pfn_to_page(pfn)); if (cache_is_vipt_aliasing()) - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); else mapping = NULL; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (pte_exec(pteval)) __flush_icache_all(); @@ -312,7 +330,7 @@ void __sync_icache_dcache(pte_t pteval) * Note that we disable the lazy flush for SMP configurations where * the cache maintenance operations are not automatically broadcasted. */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -320,31 +338,36 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. 
*/ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) { - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); return; } - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!cache_ops_need_broadcast() && - mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(mapping, folio); if (mapping && cache_is_vivt()) - __flush_dcache_aliases(mapping, page); + __flush_dcache_aliases(mapping, folio); else if (mapping) __flush_icache_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} +EXPORT_SYMBOL(flush_dcache_page); /* * Flush an anonymous page so that users of get_user_pages() * can safely access the data. The expected sequence is: diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h index d7ffccb7fea7..419316316711 100644 --- a/arch/arm/mm/mm.h +++ b/arch/arm/mm/mm.h @@ -45,7 +45,7 @@ struct mem_type { const struct mem_type *get_mem_type(unsigned int type); -extern void __flush_dcache_page(struct address_space *mapping, struct page *page); +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio); /* * ARM specific vm_struct->flags bits. diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 463fc2a8448f..9947bbc32b04 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1788,7 +1788,7 @@ void __init paging_init(const struct machine_desc *mdesc) bootmem_init(); empty_zero_page = virt_to_page(zero_page); - __flush_dcache_page(NULL, empty_zero_page); + __flush_dcache_folio(NULL, page_folio(empty_zero_page)); } void __init early_mm_init(const struct machine_desc *mdesc) @@ -1797,8 +1797,8 @@ void __init early_mm_init(const struct machine_desc *mdesc) early_paging_init(mdesc); } -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { unsigned long ext = 0; @@ -1808,5 +1808,11 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, ext |= PTE_EXT_NG; } - set_pte_ext(ptep, pteval, ext); + for (;;) { + set_pte_ext(ptep, pteval, ext); + if (--nr == 0) + break; + ptep++; + pte_val(pteval) += PAGE_SIZE; + } } From patchwork Wed Feb 15 00:04:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13141056 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24117C05027 for ; Wed, 15 Feb 2023 00:05:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 68E546B0073; Tue, 14 Feb 2023 19:05:15 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 63DD26B0074; Tue, 14 Feb 2023 19:05:15 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4B81E6B0075; Tue, 14 Feb 2023 19:05:15 -0500 (EST) 
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, Catalin Marinas , Will Deacon , linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 9/7] arm64: Implement the new page table range API Date: Wed, 15 Feb 2023 00:04:42 +0000 Message-Id: <20230215000446.1655635-1-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230211033948.891959-1-willy@infradead.org> References: <20230211033948.891959-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). The PG_dcache_clean flag changes from being a per-page bit to being a per-folio bit.

Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Catalin Marinas --- arch/arm64/include/asm/cacheflush.h | 4 +++- arch/arm64/include/asm/pgtable.h | 25 ++++++++++++++------ arch/arm64/mm/flush.c | 36 +++++++++++------------------ 3 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 37185e978aeb..d115451ed263 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, #define copy_to_user_page copy_to_user_page /* - * flush_dcache_page is used when the kernel has written to the page + * flush_dcache_folio is used when the kernel has written to the page * cache page at virtual address page->virtual.
* * If this page isn't mapped (ie, page_mapping == NULL), or it might @@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 extern void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *); +#define flush_dcache_folio flush_dcache_folio static __always_inline void icache_inval_all_pou(void) { diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 69765dc697af..4d1b79dbff16 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -355,12 +355,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, set_pte(ptep, pte); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) -{ - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - return __set_pte_at(mm, addr, ptep, pte); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + __set_pte_at(mm, addr, ptep, pte); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * Huge pte definitions. @@ -1059,8 +1068,8 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) /* * On AArch64, the cache coherency is handled via the set_pte_at() function. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { /* * We don't do anything here, so there's a very small chance of @@ -1069,6 +1078,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, */ } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #ifdef CONFIG_ARM64_PA_BITS_52 diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c index 5f9379b3c8c8..deb781af0a3a 100644 --- a/arch/arm64/mm/flush.c +++ b/arch/arm64/mm/flush.c @@ -50,20 +50,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, void __sync_icache_dcache(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. - */ - if (PageHuge(page)) - page = compound_head(page); - - if (!test_bit(PG_dcache_clean, &page->flags)) { - sync_icache_aliases((unsigned long)page_address(page), - (unsigned long)page_address(page) + - page_size(page)); - set_bit(PG_dcache_clean, &page->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + sync_icache_aliases((unsigned long)folio_address(folio), + (unsigned long)folio_address(folio) + + folio_size(folio)); + set_bit(PG_dcache_clean, &folio->flags); } } EXPORT_SYMBOL_GPL(__sync_icache_dcache); @@ -73,17 +66,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache); * it as dirty for later flushing when mapped in user space (if executable, * see __sync_icache_dcache). */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in __sync_icache_dcache()). 
- */ - if (PageHuge(page)) - page = compound_head(page); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +EXPORT_SYMBOL(flush_dcache_folio); - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } EXPORT_SYMBOL(flush_dcache_page);
From patchwork Wed Feb 15 00:04:43 2023
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-riscv@lists.infradead.org, Alexandre Ghiti , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 10/7] riscv: Implement the new page table range API Date: Wed, 15 Feb 2023 00:04:43 +0000 Message-Id: <20230215000446.1655635-2-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230215000446.1655635-1-willy@infradead.org> References: <20230211033948.891959-1-willy@infradead.org> <20230215000446.1655635-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). The PG_dcache_clean flag changes from being a per-page bit to being a per-folio bit.
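A small model of what the per-folio PG_dcache_clean bit buys (illustration only; the toy_folio struct, the helpers and the counter are invented for this sketch, and only the role of the flag follows the series): flush_dcache_folio() merely clears the "clean" bit, and the expensive cache maintenance is deferred until the folio is next mapped, where a single test-and-set, as in flush_icache_pte() / __sync_icache_dcache(), covers every page of the folio.

#include <stdbool.h>
#include <stdio.h>

#define PG_DCACHE_CLEAN (1UL << 0)	/* stand-in for the real page-flag bit */

struct toy_folio {
	unsigned long flags;
	unsigned int nr_pages;
};

static unsigned int cache_syncs;

/* Model of flush_dcache_folio(): only mark the folio as no longer clean. */
static void toy_flush_dcache_folio(struct toy_folio *folio)
{
	folio->flags &= ~PG_DCACHE_CLEAN;
}

/* Model of the flush-on-map side: test-and-set the clean bit and only do the
 * expensive sync if it was not already set. */
static void toy_sync_on_map(struct toy_folio *folio)
{
	bool was_clean = folio->flags & PG_DCACHE_CLEAN;

	folio->flags |= PG_DCACHE_CLEAN;
	if (!was_clean)
		cache_syncs++;		/* one sync covers every page of the folio */
}

int main(void)
{
	struct toy_folio folio = { .flags = 0, .nr_pages = 8 };

	toy_flush_dcache_folio(&folio);	/* the kernel wrote to the folio */
	toy_sync_on_map(&folio);	/* first mapping: one flush for all 8 pages */
	toy_sync_on_map(&folio);	/* later mappings: bit already set, no flush */

	printf("cache syncs: %u for a %u-page folio\n", cache_syncs, folio.nr_pages);
	return 0;
}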
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Alexandre Ghiti --- arch/riscv/include/asm/cacheflush.h | 19 +++++++++---------- arch/riscv/include/asm/pgtable.h | 25 ++++++++++++++++++------- arch/riscv/mm/cacheflush.c | 11 ++--------- 3 files changed, 29 insertions(+), 26 deletions(-) diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index 03e3b95ae6da..10e5e96f09b5 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -15,20 +15,19 @@ static inline void local_flush_icache_all(void) #define PG_dcache_clean PG_arch_1 -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in flush_icache_pte()). - */ - if (PageHuge(page)) - page = compound_head(page); - - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); } +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + /* * RISC-V doesn't have an instruction to flush parts of the instruction cache, * so instead we just flush the whole thing. diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 13222fd5c4b4..03706c833e70 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -405,8 +405,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) /* Commit new configuration to MMU hardware */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { /* * The kernel assumes that TLBs don't cache invalid entries, but @@ -415,8 +415,10 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, * Relying on flush_tlb_fix_spurious_fault would suffice, but * the extra traps reduce performance. So, eagerly SFENCE.VMA. 
*/ - flush_tlb_page(vma, address); + flush_tlb_range(vma, address, address + nr * PAGE_SIZE); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define __HAVE_ARCH_UPDATE_MMU_TLB #define update_mmu_tlb update_mmu_cache @@ -456,12 +458,21 @@ static inline void __set_pte_at(struct mm_struct *mm, set_pte(ptep, pteval); } -static inline void set_pte_at(struct mm_struct *mm, - unsigned long addr, pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { - page_table_check_ptes_set(mm, addr, ptep, pteval, 1); - __set_pte_at(mm, addr, ptep, pteval); + page_table_check_ptes_set(mm, addr, ptep, pteval, nr); + + for (;;) { + __set_pte_at(mm, addr, ptep, pteval); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pteval) += 1 << _PAGE_PFN_SHIFT; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c index 3cc07ed45aeb..b725c3f6f57f 100644 --- a/arch/riscv/mm/cacheflush.c +++ b/arch/riscv/mm/cacheflush.c @@ -81,16 +81,9 @@ void flush_icache_mm(struct mm_struct *mm, bool local) #ifdef CONFIG_MMU void flush_icache_pte(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. - */ - if (PageHuge(page)) - page = compound_head(page); - - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) flush_icache_all(); } #endif /* CONFIG_MMU */ From patchwork Wed Feb 15 00:04:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13141055 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7805C61DA4 for ; Wed, 15 Feb 2023 00:05:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 09AC76B0072; Tue, 14 Feb 2023 19:05:15 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 023356B0073; Tue, 14 Feb 2023 19:05:14 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E2E4C6B0074; Tue, 14 Feb 2023 19:05:14 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id CF9396B0072 for ; Tue, 14 Feb 2023 19:05:14 -0500 (EST) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id A0A0B410C7 for ; Wed, 15 Feb 2023 00:05:14 +0000 (UTC) X-FDA: 80467581348.14.BD850D8 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf06.hostedemail.com (Postfix) with ESMTP id 53A1A180007 for ; Wed, 15 Feb 2023 00:05:12 +0000 (UTC) Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="YR6tN/b0"; spf=none (imf06.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; 
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, Guo Ren , linux-csky@vger.kernel.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" Subject: [PATCH 11/7] csky: Implement the new page table range API Date: Wed, 15 Feb 2023 00:04:44 +0000 Message-Id: <20230215000446.1655635-3-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230215000446.1655635-1-willy@infradead.org> References: <20230211033948.891959-1-willy@infradead.org> <20230215000446.1655635-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). The PG_dcache_clean flag changes from being a per-page bit to being a per-folio bit.

Signed-off-by: Matthew Wilcox (Oracle) --- arch/csky/abiv1/cacheflush.c | 32 +++++++++++++++++----------- arch/csky/abiv1/inc/abi/cacheflush.h | 2 ++ arch/csky/abiv2/cacheflush.c | 30 +++++++++++++------------- arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++-- arch/csky/include/asm/pgtable.h | 21 +++++++++++++--- 5 files changed, 62 insertions(+), 33 deletions(-)

diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c index fb91b069dc69..ba43f6c26b4f 100644 --- a/arch/csky/abiv1/cacheflush.c +++ b/arch/csky/abiv1/cacheflush.c @@ -14,43 +14,49 @@ #define PG_dcache_clean PG_arch_1 -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); - if (mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + if (mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { dcache_wbinv_all(); if (mapping) icache_inv_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} EXPORT_SYMBOL(flush_dcache_page); -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); - struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) + folio = page_folio(pfn_to_page(pfn)); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) dcache_wbinv_all(); - if (page_mapping_file(page)) { + if (folio_flush_mapping(folio)) { if (vma->vm_flags &
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..0d6cb65624c4 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,8 @@
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio
 
 #define flush_cache_mm(mm)			dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()

diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 39c51399dd81..c1cf0d55a2a1 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -6,30 +6,30 @@
 #include
 #include
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *pte, unsigned int nr)
 {
-	unsigned long addr;
-	struct page *page;
+	unsigned long pfn = pte_pfn(*pte);
+	struct folio *folio;
+	unsigned int i;
 
-	if (!pfn_valid(pte_pfn(*pte)))
+	if (!pfn_valid(pfn) || is_zero_pfn(pfn))
 		return;
 
-	page = pfn_to_page(pte_pfn(*pte));
-	if (page == ZERO_PAGE(0))
-		return;
+	folio = page_folio(pfn_to_page(pfn));
 
-	if (test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
 		return;
 
-	addr = (unsigned long) kmap_atomic(page);
-
-	dcache_wb_range(addr, addr + PAGE_SIZE);
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		unsigned long addr = (unsigned long) kmap_local_folio(folio,
+								i * PAGE_SIZE);
 
-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
-	kunmap_atomic((void *) addr);
+		dcache_wb_range(addr, addr + PAGE_SIZE);
+		if (vma->vm_flags & VM_EXEC)
+			icache_inv_range(addr, addr + PAGE_SIZE);
+		kunmap_local((void *) addr);
+	}
 }
 
 void flush_icache_deferred(struct mm_struct *mm)

diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..9c728933a776 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -18,11 +18,17 @@
 
 #define PG_dcache_clean		PG_arch_1
 
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)

diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index d4042495febc..a30ae048233e 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -90,7 +90,20 @@ static inline void set_pte(pte_t *p, pte_t pte)
 	/* prevent out of order excution */
 	smp_mb();
 }
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
@@ -263,8 +276,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *pte, unsigned int nr);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
 	remap_pfn_range(vma, vaddr, pfn, size, prot)
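[Editorial aside, not part of the patch.] For readers new to the range API
introduced by this series, the sketch below shows how a caller might map an
entire folio with one call to each new interface. The helper name
folio_map_range() is made up for illustration; set_ptes(),
update_mmu_cache_range(), folio_nr_pages() and mk_pte() are the interfaces
actually relied on.

/*
 * Hypothetical caller: install PTEs for every page of a folio, then let
 * the architecture flush caches / preload the TLB for the whole range.
 * set_ptes() fills "nr" consecutive PTEs, advancing the PFN each time.
 */
static void folio_map_range(struct vm_area_struct *vma, unsigned long addr,
		struct folio *folio, pte_t *ptep, pgprot_t prot)
{
	unsigned int nr = folio_nr_pages(folio);
	pte_t pte = mk_pte(&folio->page, prot);

	set_ptes(vma->vm_mm, addr, ptep, pte, nr);
	update_mmu_cache_range(vma, addr, ptep, nr);
}

Flushing is batched the same way: one flush_dcache_folio() call replaces a
loop of per-page flush_dcache_page() calls, which is why PG_dcache_clean
moves to a per-folio bit above.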
From patchwork Wed Feb 15 00:04:45 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13141058
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Brian Cain , linux-hexagon@vger.kernel.org,
	linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 12/7] hexagon: Implement the new page table range API
Date: Wed, 15 Feb 2023 00:04:45 +0000
Message-Id: <20230215000446.1655635-4-willy@infradead.org>
In-Reply-To: <20230215000446.1655635-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>
 <20230215000446.1655635-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Brian Cain
---
 arch/hexagon/include/asm/cacheflush.h |  7 +++++--
 arch/hexagon/include/asm/pgtable.h    | 16 ++++++++++++++--
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 6eff0730e6ef..63ca314ede89 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -58,12 +58,15 @@ extern void flush_cache_all_hexagon(void);
  * clean the cache when the PTE is set.
  *
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	/* generic_ptrace_pokedata doesn't wind up here, does it? */
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page

diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 59393613d086..f58f1d920769 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -346,12 +346,24 @@ static inline int pte_exec(pte_t pte)
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
 
 /*
- * set_pte_at - update page table and do whatever magic may be
+ * set_ptes - update page table and do whatever magic may be
  * necessary to make the underlying hardware/firmware take note.
  *
  * VM may require a virtual instruction to alert the MMU.
  */
-#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
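[Editorial usage note, not part of the patch.] Existing call sites do not
need to change, because the old single-entry interfaces become strict
one-entry wrappers over the new range versions, as the macros above show:

/* Sketch: an old-style call site behaves exactly as before. */
static void example_single_pte(struct vm_area_struct *vma, unsigned long addr,
		pte_t *ptep, pte_t pte)
{
	/* expands to set_ptes(vma->vm_mm, addr, ptep, pte, 1) */
	set_pte_at(vma->vm_mm, addr, ptep, pte);

	/* expands to update_mmu_cache_range(vma, addr, ptep, 1) */
	update_mmu_cache(vma, addr, ptep);
}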
From patchwork Wed Feb 15 00:04:46 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13141057
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, Huacai Chen , WANG Xuerui ,
	loongarch@lists.linux.dev, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 13/7] loongson: Implement the new page table range API
Date: Wed, 15 Feb 2023 00:04:46 +0000
Message-Id: <20230215000446.1655635-5-willy@infradead.org>
In-Reply-To: <20230215000446.1655635-1-willy@infradead.org>
References: <20230211033948.891959-1-willy@infradead.org>
 <20230215000446.1655635-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().

THIS PATCH IS INCOMPLETE.
I DO NOT KNOW WHAT TO DO IN __update_tlb()

Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/loongarch/include/asm/cacheflush.h |  2 ++
 arch/loongarch/include/asm/pgtable.h    | 30 ++++++++++++++++---------
 arch/loongarch/mm/tlb.c                 |  4 +++-
 3 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 0681788eb474..7907eb42bfbd 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -47,8 +47,10 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
 #define flush_icache_page(vma, page)			do { } while (0)
+#define flush_icache_pages(vma, page)			do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
+#define flush_dcache_folio(folio)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)		do { } while (0)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index d28fb9dbec59..0f5fa7c40c52 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -334,12 +334,20 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	}
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
-{
-	set_pte(ptep, pteval);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << _PFN_SHIFT;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	/* Preserve global status for the pair */
@@ -442,14 +450,16 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 		     (pgprot_val(newprot) & ~_PAGE_CHG_MASK));
 }
 
-extern void __update_tlb(struct vm_area_struct *vma,
-			unsigned long address, pte_t *ptep);
+extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
+			pte_t *ptep, unsigned int nr);
 
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-			unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
-	__update_tlb(vma, address, ptep);
+	__update_tlb(vma, address, ptep, nr);
 }
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
@@ -457,7 +467,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 			unsigned long address, pmd_t *pmdp)
 {
-	__update_tlb(vma, address, (pte_t *)pmdp);
+	__update_tlb(vma, address, (pte_t *)pmdp, 1);
 }
 
 static inline unsigned long pmd_pfn(pmd_t pmd)

diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
index 8bad6b0cff59..ac0b19dbd1dc 100644
--- a/arch/loongarch/mm/tlb.c
+++ b/arch/loongarch/mm/tlb.c
@@ -162,7 +162,8 @@ static void __update_hugetlb(struct vm_area_struct *vma, unsigned long address,
 #endif
 }
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+void __update_tlb(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	int idx;
 	unsigned long flags;
@@ -187,6 +188,7 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep
 	write_csr_entryhi(address);
 	tlb_probe();
 	idx = read_csr_tlbidx();
+// I have no idea what to do here
 	write_csr_pagesize(PS_DEFAULT_SIZE);
 	write_csr_entrylo0(pte_val(*ptep++));
 	write_csr_entrylo1(pte_val(*ptep));
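[Editorial note, not part of the patch; it does not address the open
__update_tlb() question above.] One detail worth calling out is the
per-entry increment in set_ptes(): csky and hexagon step pte_val() by
PAGE_SIZE, while the LoongArch version steps by 1 << _PFN_SHIFT. This
appears to follow from the PTE layouts the patches imply, where csky and
hexagon keep the page frame at PAGE_SHIFT but LoongArch keeps its PFN
field at _PFN_SHIFT. A minimal sketch under that assumption, with a
made-up helper name for illustration:

/*
 * Hypothetical helper showing the only arch-specific part of the common
 * set_ptes() loop: advancing the PTE to the next page frame.
 */
static inline pte_t pte_next_pfn(pte_t pte)
{
#ifdef CONFIG_LOONGARCH
	/* PFN field lives at _PFN_SHIFT, above the software bits */
	pte_val(pte) += 1 << _PFN_SHIFT;
#else
	/* csky/hexagon: page frame stored at PAGE_SHIFT */
	pte_val(pte) += PAGE_SIZE;
#endif
	return pte;
}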