From patchwork Wed Mar 15 05:14:09 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175309
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 01/36] mm: Convert page_table_check_pte_set() to page_table_check_ptes_set()
Date: Wed, 15 Mar 2023 05:14:09 +0000
Message-Id: <20230315051444.3229621-2-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Tell the page table check how many PTEs & PFNs we want it to check.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Rapoport (IBM)
Acked-by: Pasha Tatashin
Reviewed-by: Anshuman Khandual
---
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/x86/include/asm/pgtable.h   |  2 +-
 include/linux/page_table_check.h | 14 +++++++-------
 mm/page_table_check.c            | 14 ++++++++------
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0bd18de9fd97..9428748f4691 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -358,7 +358,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	return __set_pte_at(mm, addr, ptep, pte);
 }

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index ab05f892d317..b516f3b59616 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -459,7 +459,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pteval)
 {
-	page_table_check_pte_set(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
 	__set_pte_at(mm, addr, ptep, pteval);
 }

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 15ae4d6ba476..1031025730d0 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1022,7 +1022,7 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
 	set_pte(ptep, pte);
 }

diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 01e16c7696ec..ba269c7009e4 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -20,8 +20,8 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
 				  pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 				  pud_t pud);
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte);
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				pte_t *ptep, pte_t pte, unsigned int nr);
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
@@ -73,14 +73,14 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 	__page_table_check_pud_clear(mm, addr, pud);
 }

-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 	if (static_branch_likely(&page_table_check_disabled))
 		return;

-	__page_table_check_pte_set(mm, addr, ptep, pte);
+	__page_table_check_ptes_set(mm, addr, ptep, pte, nr);
 }

 static inline void page_table_check_pmd_set(struct mm_struct *mm,
@@ -138,9 +138,9 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 {
 }

-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
 					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 }

diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 25d8610c0042..e6f4d40caaa2 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -184,20 +184,22 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);

-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+
 	if (&init_mm == mm)
 		return;

-	__page_table_check_pte_clear(mm, addr, *ptep);
+	for (i = 0; i < nr; i++)
+		__page_table_check_pte_clear(mm, addr, ptep[i]);
 	if (pte_user_accessible_page(pte)) {
-		page_table_check_set(mm, addr, pte_pfn(pte),
-				     PAGE_SIZE >> PAGE_SHIFT,
+		page_table_check_set(mm, addr, pte_pfn(pte), nr,
 				     pte_write(pte));
 	}
 }
-EXPORT_SYMBOL(__page_table_check_pte_set);
+EXPORT_SYMBOL(__page_table_check_ptes_set);

 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)
From patchwork Wed Mar 15 05:14:10 2023
X-Patchwork-Id: 13175333
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/36] mm: Add generic flush_icache_pages() and documentation
Date: Wed, 15 Mar 2023 05:14:10 +0000
Message-Id: <20230315051444.3229621-3-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 Documentation/core-api/cachetlb.rst | 35 +++++++++++++++--------------
 include/asm-generic/cacheflush.h    |  5 +++++
 2 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 5c0552e78c58..d4c9e2a28d36 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -88,13 +88,13 @@ changes occur:

 	This is used primarily during fault processing.

-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_area_struct *vma,
+   unsigned long address, pte_t *ptep, unsigned int nr)``

-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exists
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.

 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
@@ -306,17 +306,18 @@ maps this page at its virtual address.
 	private".  The kernel guarantees that, for pagecache pages, it
 	will clear this bit when such a page first enters the pagecache.

-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page.  See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently.  It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.

-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit.  Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit.  Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.

 .. important::
@@ -369,7 +370,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache.  In the future, the hope
+	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.

 The final category of APIs is for I/O to deliberately aliased address

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index f46258d1a080..09d51a680765 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif

 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
 				     struct page *page)
 {
From patchwork Wed Mar 15 05:14:11 2023
X-Patchwork-Id: 13175303
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 03/36] mm: Add folio_flush_mapping()
Date: Wed, 15 Mar 2023 05:14:11 +0000
Message-Id: <20230315051444.3229621-4-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

This is the folio equivalent of page_mapping_file(), but rename it to
make it clear that it's very different from page_file_mapping().
Theoretically, there's nothing flush-only about it, but there are no
other users today, and I doubt there will be; it's almost always more
useful to know the swapfile's mapping or the swapcache's mapping.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 include/linux/pagemap.h | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..e56c2023aa0e 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -369,6 +369,26 @@ static inline struct address_space *folio_file_mapping(struct folio *folio)
 	return folio->mapping;
 }

+/**
+ * folio_flush_mapping - Find the file mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Anonymous folios return NULL, even if they're in
+ * the swap cache.  Other kinds of folio also return NULL.
+ *
+ * This is ONLY used by architecture cache flushing code.  If you aren't
+ * writing cache flushing code, you want either folio_mapping() or
+ * folio_file_mapping().
+ */
+static inline struct address_space *folio_flush_mapping(struct folio *folio)
+{
+	if (unlikely(folio_test_swapcache(folio)))
+		return NULL;
+
+	return folio_mapping(folio);
+}
+
 static inline struct address_space *page_file_mapping(struct page *page)
 {
 	return folio_file_mapping(page_folio(page));
@@ -379,11 +399,7 @@ static inline struct address_space *page_file_mapping(struct page *page)
  */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-
-	if (unlikely(folio_test_swapcache(folio)))
-		return NULL;
-	return folio_mapping(folio);
+	return folio_flush_mapping(page_folio(page));
 }

 /**
[90.155.50.34]) by imf13.hostedemail.com (Postfix) with ESMTP id 62E9320016 for ; Wed, 15 Mar 2023 05:15:00 +0000 (UTC) Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=rgnn4YC9; spf=none (imf13.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1678857300; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=UeQ+O6oeC7OHwfv3c/ZJoE/vNT6tW4e5INVzBD1cDW8=; b=wBNLqhoqR1rP5k/mdCEnz4VuZoVBJVRN4U9TinvaL314DMcS/Q0A8sk1zlw09WR6IsoorV my/bM5NTKXiO6HJ5M925cOYxnUIWXQjL0t4mL6giobiuEJBPcBSvwmhBxmnETHOy3bFJq+ uf6eoFzbaQ44jLnmkm+ciYH4QeGU9EU= ARC-Authentication-Results: i=1; imf13.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=rgnn4YC9; spf=none (imf13.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1678857300; a=rsa-sha256; cv=none; b=7uM4g3BprTWO6K9EanX615J3xkFvQHIuIQsZUUa0i66EpWCikn1mGRadflWo8rZu0G5h8I j0timn7n8NGyXMVOVRbuLNJ1O48TQivu8yta6hjZOX6B9OcjKkQ1788/fRNftEqfXJbQlR t5nLCOFCjVhglaJpondVHQxdH9n/sFY= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=UeQ+O6oeC7OHwfv3c/ZJoE/vNT6tW4e5INVzBD1cDW8=; b=rgnn4YC90j/RsWvK5pnFiQnU5p hPvoVA2CoeE1JU6WVacxl//cQfpsQE75+0ZNZyGpC2a/tGmEMZvpmXRJHpMFjRybq5d3A0fKqkdhi 
36fX35llTJuCW9wWFAdauEzPCDcEUgGJi6wPT82nzDP2+Znx3itTOW/u095Qy3Q6FmoJD4omXAGoZ eKeKVMcCwdR0tvUPmPMGP3ZriYeckK6lFaF7TBFqsWp373L+DkuiR2c4I51weGh70v9G9Fb1ip7Ck BuC2MvbqR1CMGQVwr9Zd1hku6E5lscKQiybvfAFVKWismiy3/4qvIpsmOKH9W/8b8D2kXXdqa9/DG ye7e2R4g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pcJTK-00DYAx-PN; Wed, 15 Mar 2023 05:14:46 +0000 From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v4 04/36] mm: Remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO Date: Wed, 15 Mar 2023 05:14:12 +0000 Message-Id: <20230315051444.3229621-5-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam03 X-Stat-Signature: fpqrjh5431kno51kqyftajohzuurca9s X-Rspamd-Queue-Id: 62E9320016 X-HE-Tag: 1678857300-721521 X-HE-Meta: 

Current best practice is to reuse the name of the function as a define
to indicate that the function is implemented by the architecture.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Anshuman Khandual
---
 Documentation/core-api/cachetlb.rst | 24 +++++++++---------------
 include/linux/cacheflush.h          |  4 ++--
 mm/util.c                           |  2 +-
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index d4c9e2a28d36..770008afd409 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -269,7 +269,7 @@ maps this page at its virtual address.
 	  If D-cache aliasing is not an issue, these two routines may
 	  simply call memcpy/memset directly and do nothing more.
 
-  ``void flush_dcache_page(struct page *page)``
+  ``void flush_dcache_folio(struct folio *folio)``
 
 	This routines must be called when:
 
@@ -277,7 +277,7 @@ maps this page at its virtual address.
 	     and / or in high memory
 	  b) the kernel is about to read from a page cache page and user space
 	     shared/writable mappings of this page potentially exist.  Note
-	     that {get,pin}_user_pages{_fast} already call flush_dcache_page
+	     that {get,pin}_user_pages{_fast} already call flush_dcache_folio
 	     on any page found in the user address space and thus driver
 	     code rarely needs to take this into account.
 
@@ -291,7 +291,7 @@ maps this page at its virtual address.
 	  The phrase "kernel writes to a page cache page" means, specifically,
 	  that the kernel executes store instructions that dirty data in that
-	  page at the page->virtual mapping of that page.  It is important to
+	  page at the kernel virtual mapping of that page.  It is important to
 	  flush here to handle D-cache aliasing, to make sure these kernel
 	  stores are visible to user space mappings of that page.
 
@@ -302,18 +302,18 @@ maps this page at its virtual address.
 	  If D-cache aliasing is not an issue, this routine may simply be
 	  defined as a nop on that architecture.
 
-	  There is a bit set aside in page->flags (PG_arch_1) as "architecture
+	  There is a bit set aside in folio->flags (PG_arch_1) as "architecture
 	  private".  The kernel guarantees that, for pagecache pages, it will
 	  clear this bit when such a page first enters the pagecache.
 
 	  This allows these interfaces to be implemented much more
 	  efficiently.  It allows one to "defer" (perhaps indefinitely) the
 	  actual flush if there are currently no user processes mapping this
-	  page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	  page.  See sparc64's flush_dcache_folio and update_mmu_cache_range
 	  implementations for an example of how to go about doing this.
 
-	  The idea is, first at flush_dcache_page() time, if
-	  page_file_mapping() returns a mapping, and mapping_mapped on that
+	  The idea is, first at flush_dcache_folio() time, if
+	  folio_flush_mapping() returns a mapping, and mapping_mapped() on that
 	  mapping returns %false, just mark the architecture private page
 	  flag bit.  Later, in update_mmu_cache_range(), a check is made
 	  of this flag bit, and if set the flush is done and the flag bit
@@ -327,12 +327,6 @@ maps this page at its virtual address.
 	  dirty.  Again, see sparc64 for examples of how
 	  to deal with this.
 
-  ``void flush_dcache_folio(struct folio *folio)``
-	This function is called under the same circumstances as
-	flush_dcache_page().  It allows the architecture to
-	optimise for flushing the entire folio of pages instead
-	of flushing one page at a time.
-
   ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
   ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
@@ -353,7 +347,7 @@ maps this page at its virtual address.
 	  When the kernel needs to access the contents of an anonymous
 	  page, it calls this function (currently only
-	  get_user_pages()).  Note: flush_dcache_page() deliberately
+	  get_user_pages()).  Note: flush_dcache_folio() deliberately
 	  doesn't work for an anonymous page.  The default implementation
 	  is a nop (and should remain so for all coherent architectures).
 	  For incoherent architectures, it should flush
@@ -370,7 +364,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache_range.  In the future, the hope
+	flush_dcache_folio and update_mmu_cache_range.  In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address

diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index a6189d21f2ba..82136f3fcf54 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -7,14 +7,14 @@ struct folio;
 
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
 #endif /* _LINUX_CACHEFLUSH_H */

diff --git a/mm/util.c b/mm/util.c
index dd12b9531ac4..98ce51b01627 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1125,7 +1125,7 @@ void page_offline_end(void)
 }
 EXPORT_SYMBOL(page_offline_end);
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
 	long i, nr = folio_nr_pages(folio);

From patchwork Wed Mar 15 05:14:13 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175302
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Mike Rapoport
Subject: [PATCH v4 05/36] mm: Add default definition of set_ptes()
Date: Wed, 15 Mar 2023 05:14:13 +0000
Message-Id: <20230315051444.3229621-6-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
MIME-Version: 1.0

Most architectures can just define set_pte() and PFN_PTE_SHIFT to use
this definition.  It's also a handy spot to document the guarantees
provided by the MM.
Suggested-by: Mike Rapoport (IBM)
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Mike Rapoport (IBM)
---
 include/linux/pgtable.h | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index c5a51481bbb9..a755fe94b4b4 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -172,6 +172,43 @@ static inline int pmd_young(pmd_t pmd)
 }
 #endif
 
+#ifndef set_ptes
+#ifdef PFN_PTE_SHIFT
+/**
+ * set_ptes - Map consecutive pages to a contiguous range of addresses.
+ * @mm: Address space to map the pages into.
+ * @addr: Address to map the first page at.
+ * @ptep: Page table pointer for the first entry.
+ * @pte: Page table entry for the first page.
+ * @nr: Number of pages to map.
+ *
+ * May be overridden by the architecture, or the architecture can define
+ * set_pte() and PFN_PTE_SHIFT.
+ *
+ * Context: The caller holds the page table lock.  The pages all belong
+ * to the same folio.  The PTEs are all in the same PMD.
+ */
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	page_table_check_ptes_set(mm, addr, ptep, pte, nr);
+
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+	}
+}
+#ifndef set_pte_at
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+#endif
+#endif
+#else
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 extern int ptep_set_access_flags(struct vm_area_struct *vma,
 				 unsigned long address, pte_t *ptep,

From patchwork Wed Mar 15 05:14:14 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175316
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Richard Henderson, Ivan Kokshaysky,
	Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH v4 06/36] alpha: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:14 +0000
Message-Id: <20230315051444.3229621-7-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
MIME-Version: 1.0

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_icache_pages().
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: linux-alpha@vger.kernel.org
Acked-by: Mike Rapoport (IBM)
---
 arch/alpha/include/asm/cacheflush.h | 10 ++++++++++
 arch/alpha/include/asm/pgtable.h    |  9 +++++++--
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 9945ff483eaf..3956460e69e2 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -57,6 +57,16 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_page(vma, page) \
 	flush_icache_user_page((vma), (page), 0, 0)
 
+/*
+ * Both implementations of flush_icache_user_page flush the entire
+ * address space, so one call, no matter how many pages.
+ */
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+		struct page *page, unsigned int nr)
+{
+	flush_icache_user_page(vma, page, 0, 0);
+}
+
 #include <asm-generic/cacheflush.h>
 
 #endif /* _ALPHA_CACHEFLUSH_H */

diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index ba43cb841d19..6c24c408b8e9 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -26,7 +26,6 @@ struct vm_area_struct;
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT-3))
@@ -189,7 +188,8 @@ extern unsigned long __zero_page(void);
  * and a page entry and page directory to the page they refer to.
 */
 #define page_to_pa(page)	(page_to_pfn(page) << PAGE_SHIFT)
 
-#define pte_pfn(pte)	(pte_val(pte) >> 32)
+#define PFN_PTE_SHIFT	32
+#define pte_pfn(pte)	(pte_val(pte) >> PFN_PTE_SHIFT)
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, pgprot)						\
@@ -303,6 +303,11 @@ extern inline void update_mmu_cache(struct vm_area_struct * vma,
 {
 }
 
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
+{
+}
+
 /*
  * Encode/decode swap entries and swap PTEs.  Swap PTEs are all PTEs that
  * are !pte_none() && !pte_present().

From patchwork Wed Mar 15 05:14:15 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175310
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Vineet Gupta,
	linux-snps-arc@lists.infradead.org
Subject: [PATCH v4 07/36] arc: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:15 +0000
Message-Id: <20230315051444.3229621-8-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
MIME-Version: 1.0

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio()
and flush_icache_pages().

Change the PG_dc_clean flag from being per-page to per-folio (which
means it cannot always be set as we don't know that all pages in this
folio were cleaned).  Enhance the internal flush routines to take the
number of pages to flush.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Vineet Gupta
Cc: linux-snps-arc@lists.infradead.org
Acked-by: Mike Rapoport (IBM)
---
 arch/arc/include/asm/cacheflush.h         |  7 ++-
 arch/arc/include/asm/pgtable-bits-arcv2.h | 11 ++--
 arch/arc/include/asm/pgtable-levels.h     |  1 +
 arch/arc/mm/cache.c                       | 61 ++++++++++++++---------
 arch/arc/mm/tlb.c                         | 18 ++++---
 5 files changed, 58 insertions(+), 40 deletions(-)

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index e201b4b1655a..04f65f588510 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -25,17 +25,20 @@
  * in update_mmu_cache()
  */
 #define flush_icache_page(vma, page)
+#define flush_icache_pages(vma, page, nr)
 
 void flush_cache_all(void);
 
 void flush_icache_range(unsigned long kstart, unsigned long kend);
 void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len);
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr);
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr);
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 
 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
 
 void dma_cache_wback_inv(phys_addr_t start, unsigned long sz);
 void dma_cache_inv(phys_addr_t start, unsigned long sz);

diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 6e9f8ca6d6a1..06d8039180c0 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -100,14 +100,11 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
-{
-	set_pte(ptep, pteval);
-}
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *ptep);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that

diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h
index ef68758b69f7..fc417c75c24d 100644
--- a/arch/arc/include/asm/pgtable-levels.h
+++ b/arch/arc/include/asm/pgtable-levels.h
@@ -169,6 +169,7 @@
 #define pte_ERROR(e) \
 	pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pte_none(x)	(!pte_val(x))
 #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
 #define pte_clear(mm,addr,ptep)	set_pte_at(mm, addr, ptep, __pte(0))

diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
index 55c6de138eae..3c16ee942a5c 100644
--- a/arch/arc/mm/cache.c
+++ b/arch/arc/mm/cache.c
@@ -752,17 +752,17 @@ static inline void arc_slc_enable(void)
  * There's a corollary case, where kernel READs from a userspace mapped page.
  * If the U-mapping is not congruent to K-mapping, former needs flushing.
  */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;
 
 	if (!cache_is_vipt_aliasing()) {
-		clear_bit(PG_dc_clean, &page->flags);
+		clear_bit(PG_dc_clean, &folio->flags);
 		return;
 	}
 
 	/* don't handle anon pages here */
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 	if (!mapping)
 		return;
 
@@ -771,17 +771,27 @@ void flush_dcache_page(struct page *page)
 	 * Make a note that K-mapping is dirty
 	 */
 	if (!mapping_mapped(mapping)) {
-		clear_bit(PG_dc_clean, &page->flags);
-	} else if (page_mapcount(page)) {
-
+		clear_bit(PG_dc_clean, &folio->flags);
+	} else if (folio_mapped(folio)) {
 		/* kernel reading from page with U-mapping */
-		phys_addr_t paddr = (unsigned long)page_address(page);
-		unsigned long vaddr = page->index << PAGE_SHIFT;
+		phys_addr_t paddr = (unsigned long)folio_address(folio);
+		unsigned long vaddr = folio_pos(folio);
 
+		/*
+		 * vaddr is not actually the virtual address, but is
+		 * congruent to every user mapping.
+		 */
 		if (addr_not_cache_congruent(paddr, vaddr))
-			__flush_dcache_page(paddr, vaddr);
+			__flush_dcache_pages(paddr, vaddr,
+						folio_nr_pages(folio));
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	return flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);
 
 /*
@@ -921,18 +931,18 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len)
 }
 
 /* wrapper to compile time eliminate alignment checks in flush loop */
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr)
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-	__ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE);
+	__ic_line_inv_vaddr(paddr, vaddr, nr * PAGE_SIZE);
 }
 
 /*
  * wrapper to clearout kernel or userspace mappings of a page
 * For kernel mappings @vaddr == @paddr
 */
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr)
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-	__dc_line_op(paddr, vaddr & PAGE_MASK, PAGE_SIZE, OP_FLUSH_N_INV);
+	__dc_line_op(paddr, vaddr & PAGE_MASK, nr * PAGE_SIZE, OP_FLUSH_N_INV);
 }
 
 noinline void flush_cache_all(void)
@@ -962,10 +972,10 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr,
 
 	u_vaddr &= PAGE_MASK;
 
-	__flush_dcache_page(paddr, u_vaddr);
+	__flush_dcache_pages(paddr, u_vaddr, 1);
 
 	if (vma->vm_flags & VM_EXEC)
-		__inv_icache_page(paddr, u_vaddr);
+		__inv_icache_pages(paddr, u_vaddr, 1);
 }
 
 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
@@ -978,9 +988,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
 			  unsigned long u_vaddr)
 {
 	/* TBD: do we really need to clear the kernel mapping */
-	__flush_dcache_page((phys_addr_t)page_address(page), u_vaddr);
-	__flush_dcache_page((phys_addr_t)page_address(page),
-			    (phys_addr_t)page_address(page));
+	__flush_dcache_pages((phys_addr_t)page_address(page), u_vaddr, 1);
+
__flush_dcache_pages((phys_addr_t)page_address(page), + (phys_addr_t)page_address(page), 1); } @@ -989,6 +999,8 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, void copy_user_highpage(struct page *to, struct page *from, unsigned long u_vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); + struct folio *dst = page_folio(to); void *kfrom = kmap_atomic(from); void *kto = kmap_atomic(to); int clean_src_k_mappings = 0; @@ -1005,7 +1017,7 @@ void copy_user_highpage(struct page *to, struct page *from, * addr_not_cache_congruent() is 0 */ if (page_mapcount(from) && addr_not_cache_congruent(kfrom, u_vaddr)) { - __flush_dcache_page((unsigned long)kfrom, u_vaddr); + __flush_dcache_pages((unsigned long)kfrom, u_vaddr, 1); clean_src_k_mappings = 1; } @@ -1019,17 +1031,17 @@ void copy_user_highpage(struct page *to, struct page *from, * non copied user pages (e.g. read faults which wire in pagecache page * directly). */ - clear_bit(PG_dc_clean, &to->flags); + clear_bit(PG_dc_clean, &dst->flags); /* * if SRC was already usermapped and non-congruent to kernel mapping * sync the kernel mapping back to physical page */ if (clean_src_k_mappings) { - __flush_dcache_page((unsigned long)kfrom, (unsigned long)kfrom); - set_bit(PG_dc_clean, &from->flags); + __flush_dcache_pages((unsigned long)kfrom, + (unsigned long)kfrom, 1); } else { - clear_bit(PG_dc_clean, &from->flags); + clear_bit(PG_dc_clean, &src->flags); } kunmap_atomic(kto); @@ -1038,8 +1050,9 @@ void copy_user_highpage(struct page *to, struct page *from, void clear_user_page(void *to, unsigned long u_vaddr, struct page *page) { + struct folio *folio = page_folio(page); clear_page(to); - clear_bit(PG_dc_clean, &page->flags); + clear_bit(PG_dc_clean, &folio->flags); } EXPORT_SYMBOL(clear_user_page); diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 5f71445f26bd..0a996b65bb4e 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -467,8 +467,8 @@ void create_tlb(struct 
vm_area_struct *vma, unsigned long vaddr, pte_t *ptep) * Note that flush (when done) involves both WBACK - so physical page is * in sync as well as INV - so any non-congruent aliases don't remain */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr) { unsigned long vaddr = vaddr_unaligned & PAGE_MASK; phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS; @@ -491,15 +491,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned, */ if ((vma->vm_flags & VM_EXEC) || addr_not_cache_congruent(paddr, vaddr)) { - - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(page); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); if (dirty) { + unsigned long offset = offset_in_folio(folio, paddr); + nr = folio_nr_pages(folio); + paddr -= offset; + vaddr -= offset; /* wback + inv dcache lines (K-mapping) */ - __flush_dcache_page(paddr, paddr); + __flush_dcache_pages(paddr, paddr, nr); /* invalidate any existing icache lines (U-mapping) */ if (vma->vm_flags & VM_EXEC) - __inv_icache_page(paddr, vaddr); + __inv_icache_pages(paddr, vaddr, nr); } } } @@ -531,7 +535,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd) { pte_t pte = __pte(pmd_val(*pmd)); - update_mmu_cache(vma, addr, &pte); + update_mmu_cache_range(vma, addr, &pte, HPAGE_PMD_NR); } void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
From patchwork Wed Mar 15 05:14:16 2023
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Russell King , linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 08/36] arm: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:16 +0000
Message-Id: <20230315051444.3229621-9-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_dcache_clean flag from being per-page to per-folio, which makes __dma_page_dev_to_cpu() a bit more exciting. Also add flush_cache_pages(), even though this isn't used by generic code (yet?)
Signed-off-by: Matthew Wilcox (Oracle) Cc: Russell King Cc: linux-arm-kernel@lists.infradead.org Acked-by: Mike Rapoport (IBM) Reviewed-by: Russell King (Oracle) --- arch/arm/include/asm/cacheflush.h | 24 +++++--- arch/arm/include/asm/pgtable.h | 5 +- arch/arm/include/asm/tlbflush.h | 13 ++-- arch/arm/mm/copypage-v4mc.c | 5 +- arch/arm/mm/copypage-v6.c | 5 +- arch/arm/mm/copypage-xscale.c | 5 +- arch/arm/mm/dma-mapping.c | 24 ++++---- arch/arm/mm/fault-armv.c | 14 ++--- arch/arm/mm/flush.c | 99 +++++++++++++++++++------------ arch/arm/mm/mm.h | 2 +- arch/arm/mm/mmu.c | 14 +++-- 11 files changed, 125 insertions(+), 85 deletions(-) diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index a094f964c869..841e268d2374 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -231,14 +231,15 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned vma->vm_flags); } -static inline void -vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +static inline void vivt_flush_cache_pages(struct vm_area_struct *vma, + unsigned long user_addr, unsigned long pfn, unsigned int nr) { struct mm_struct *mm = vma->vm_mm; if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) { unsigned long addr = user_addr & PAGE_MASK; - __cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags); + __cpuc_flush_user_range(addr, addr + nr * PAGE_SIZE, + vma->vm_flags); } } @@ -247,15 +248,17 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig vivt_flush_cache_mm(mm) #define flush_cache_range(vma,start,end) \ vivt_flush_cache_range(vma,start,end) -#define flush_cache_page(vma,addr,pfn) \ - vivt_flush_cache_page(vma,addr,pfn) +#define flush_cache_pages(vma, addr, pfn, nr) \ + vivt_flush_cache_pages(vma, addr, pfn, nr) #else -extern void flush_cache_mm(struct mm_struct *mm); -extern void flush_cache_range(struct vm_area_struct *vma, 
unsigned long start, unsigned long end); -extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn); +void flush_cache_mm(struct mm_struct *mm); +void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, + unsigned long pfn, unsigned int nr); #endif #define flush_cache_dup_mm(mm) flush_cache_mm(mm) +#define flush_cache_page(vma, addr, pfn) flush_cache_pages(vma, addr, pfn, 1) /* * flush_icache_user_range is used when we want to ensure that the @@ -289,7 +292,9 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr * See update_mmu_cache for the user space part. */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -extern void flush_dcache_page(struct page *); +void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1 static inline void flush_kernel_vmap_range(void *addr, int size) @@ -321,6 +326,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma, * duplicate cache flushing elsewhere performed by flush_dcache_page(). 
*/ #define flush_icache_page(vma,page) do { } while (0) +#define flush_icache_pages(vma, page, nr) do { } while (0) /* * flush_cache_vmap() is used when creating mappings (eg, via vmap, diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index a58ccbb406ad..841001ab495c 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -207,8 +207,9 @@ static inline void __sync_icache_dcache(pte_t pteval) extern void __sync_icache_dcache(pte_t pteval); #endif -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval); +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr); +#define set_ptes set_ptes static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) { diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index 0ccc985b90af..7d792e485f4f 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -619,18 +619,21 @@ extern void flush_bp_all(void); * If PG_dcache_clean is not set for the page, we need to ensure that any * cache entries for the kernels virtual memory range are written * back to the page. On ARMv6 and later, the cache coherency is handled via - * the set_pte_at() function. + * the set_ptes() function. 
*/ #if __LINUX_ARM_ARCH__ < 6 -extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr); #else -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { } #endif +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #endif diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c index f1da3b439b96..7ddd82b9fe8b 100644 --- a/arch/arm/mm/copypage-v4mc.c +++ b/arch/arm/mm/copypage-v4mc.c @@ -64,10 +64,11 @@ static void mc_copy_user_page(void *from, void *to) void v4_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c index d8a115de5507..a1a71f36d850 100644 --- a/arch/arm/mm/copypage-v6.c +++ b/arch/arm/mm/copypage-v6.c @@ -69,11 +69,12 @@ static void discard_old_kernel_data(void *kto) static void v6_copy_user_highpage_aliasing(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); unsigned int offset = CACHE_COLOUR(vaddr); unsigned long kfrom, kto; - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + 
__flush_dcache_folio(folio_flush_mapping(src), src); /* FIXME: not highmem safe */ discard_old_kernel_data(page_address(to)); diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c index bcb485620a05..f1e29d3e8193 100644 --- a/arch/arm/mm/copypage-xscale.c +++ b/arch/arm/mm/copypage-xscale.c @@ -84,10 +84,11 @@ static void mc_copy_user_page(void *from, void *to) void xscale_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 8bc01071474a..5ecfde41d70a 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -693,6 +693,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { + struct folio *folio = page_folio(page); phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ @@ -707,19 +708,18 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, * Mark the D-cache clean for these pages to avoid extra flushing. 
*/ if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) { - unsigned long pfn; - size_t left = size; - - pfn = page_to_pfn(page) + off / PAGE_SIZE; - off %= PAGE_SIZE; - if (off) { - pfn++; - left -= PAGE_SIZE - off; + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); } - while (left >= PAGE_SIZE) { - page = pfn_to_page(pfn++); - set_bit(PG_dcache_clean, &page->flags); - left -= PAGE_SIZE; + + while (left >= (ssize_t)folio_size(folio)) { + set_bit(PG_dcache_clean, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } } diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c index 0e49154454a6..e2c869b8f012 100644 --- a/arch/arm/mm/fault-armv.c +++ b/arch/arm/mm/fault-armv.c @@ -178,8 +178,8 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma, * * Note that the pte lock will be held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); struct address_space *mapping; @@ -192,13 +192,13 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. 
*/ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + mapping = folio_flush_mapping(folio); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (mapping) { if (cache_is_vivt()) make_coherent(mapping, vma, addr, ptep, pfn); diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c index 7ff9feea13a6..07ea0ab51099 100644 --- a/arch/arm/mm/flush.c +++ b/arch/arm/mm/flush.c @@ -95,10 +95,10 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned __flush_icache_all(); } -void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr) { if (cache_is_vivt()) { - vivt_flush_cache_page(vma, user_addr, pfn); + vivt_flush_cache_pages(vma, user_addr, pfn, nr); return; } @@ -196,29 +196,31 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, #endif } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping.
*/ - if (!PageHighMem(page)) { - __cpuc_flush_dcache_area(page_address(page), page_size(page)); + if (!folio_test_highmem(folio)) { + __cpuc_flush_dcache_area(folio_address(folio), + folio_size(folio)); } else { unsigned long i; if (cache_is_vipt_nonaliasing()) { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_atomic(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_local_folio(folio, + i * PAGE_SIZE); __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_atomic(addr); + kunmap_local(addr); } } else { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_high_get(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_high_get(folio_page(folio, i)); if (addr) { __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_high(page + i); + kunmap_high(folio_page(folio, i)); } } } @@ -230,15 +232,14 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page) * userspace colour, which is congruent with page->index. */ if (mapping && cache_is_vipt_aliasing()) - flush_pfn_alias(page_to_pfn(page), - page->index << PAGE_SHIFT); + flush_pfn_alias(folio_pfn(folio), folio_pos(folio)); } -static void __flush_dcache_aliases(struct address_space *mapping, struct page *page) +static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; - pgoff_t pgoff; + struct vm_area_struct *vma; + pgoff_t pgoff, pgoff_end; /* * There are possible user space mappings of this page: @@ -246,21 +247,36 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p * data in the current VM view associated with this page. * - aliasing VIPT: we only need to find one mapping of this page. 
*/ - pgoff = page->index; + pgoff = folio->index; + pgoff_end = pgoff + folio_nr_pages(folio) - 1; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff_end) { + unsigned long start, offset, pfn; + unsigned int nr; /* * If this VMA is not in our MM, we can ignore it. */ - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page)); + + start = vma->vm_start; + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + offset = pgoff - vma->vm_pgoff; + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + start += offset * PAGE_SIZE; + } + if (start + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - start) / PAGE_SIZE; + + flush_cache_pages(vma, start, pfn, nr); } flush_dcache_mmap_unlock(mapping); } @@ -269,7 +285,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p void __sync_icache_dcache(pte_t pteval) { unsigned long pfn; - struct page *page; + struct folio *folio; struct address_space *mapping; if (cache_is_vipt_nonaliasing() && !pte_exec(pteval)) @@ -279,14 +295,14 @@ void __sync_icache_dcache(pte_t pteval) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); + folio = page_folio(pfn_to_page(pfn)); if (cache_is_vipt_aliasing()) - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); else mapping = NULL; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (pte_exec(pteval)) __flush_icache_all(); @@ -312,7 +328,7 @@ void __sync_icache_dcache(pte_t pteval) * Note that we disable the lazy flush for SMP configurations where 
* the cache maintenance operations are not automatically broadcasted. */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -320,31 +336,36 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) { - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); return; } - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!cache_ops_need_broadcast() && - mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(mapping, folio); if (mapping && cache_is_vivt()) - __flush_dcache_aliases(mapping, page); + __flush_dcache_aliases(mapping, folio); else if (mapping) __flush_icache_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} +EXPORT_SYMBOL(flush_dcache_page); /* * Flush an anonymous page so that users of get_user_pages() * can safely access the data. 
The expected sequence is: diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h index d7ffccb7fea7..419316316711 100644 --- a/arch/arm/mm/mm.h +++ b/arch/arm/mm/mm.h @@ -45,7 +45,7 @@ struct mem_type { const struct mem_type *get_mem_type(unsigned int type); -extern void __flush_dcache_page(struct address_space *mapping, struct page *page); +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio); /* * ARM specific vm_struct->flags bits. diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 463fc2a8448f..9947bbc32b04 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1788,7 +1788,7 @@ void __init paging_init(const struct machine_desc *mdesc) bootmem_init(); empty_zero_page = virt_to_page(zero_page); - __flush_dcache_page(NULL, empty_zero_page); + __flush_dcache_folio(NULL, page_folio(empty_zero_page)); } void __init early_mm_init(const struct machine_desc *mdesc) @@ -1797,8 +1797,8 @@ void __init early_mm_init(const struct machine_desc *mdesc) early_paging_init(mdesc); } -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { unsigned long ext = 0; @@ -1808,5 +1808,11 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, ext |= PTE_EXT_NG; } - set_pte_ext(ptep, pteval, ext); + for (;;) { + set_pte_ext(ptep, pteval, ext); + if (--nr == 0) + break; + ptep++; + pte_val(pteval) += PAGE_SIZE; + } }
From patchwork Wed Mar 15 05:14:17 2023
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Catalin Marinas , linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 09/36] arm64: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:17 +0000
Message-Id: <20230315051444.3229621-10-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
U2FsdGVkX1/WwZjjo5EGkNvqn8RgZxH7DqlUKVAjejVDTv+JpEpHtx1CD6fKxoWs4d5oqiH3gNrwiYHAB2Og63sA1S18OWpg1Rc0sIGIx3s0oRW6AOHz5oG7co2Xh+79v+/6PTOxXByArM/C6FqzQUmKtZVpDQsrBe8SweSVWSlXM9r8zVK0bvDpcT3sg/vTk5X0mHfxg2KXNBuGEBTY85tIFOsZ54AwYxwrPKLKCfFm7A9FmYIge1MP1zvrpCfYN3/mHXcHoIqmnK0QxjRWisDf61fpgvPzvs/J22u0NdthsSprRfdxYZQqV4jk8iv8oEook39HUUHIX4kQPFl/Tpr/rSXc5wZ3T1XeOlq0pMZG+sDJNG7WrgJlGyKt82S2Ug3qD2IC7NDU0g6SeT+IUWFtxDQGf8FcWAnDK/0hvdbtTymAQfjYHZ0GSCxP//Xp01/mKoczPB6r5/yxoXRw0ZX87fTVNAHpMvZPn9qyNhnJu8/oDqjvWuGMZ/2lbbmBg5bTmCw/a5SETw/Gj7CzsIaGQgAlKtd5YfPE/zUKscIFyJHs9wb8xqSadP83xlkpEoVfpPba28SkAZSKd5iR/Ot2jgR+emrIUJH6dq23I7wF7mU9cEBAUlDcKT73ncRHBQSoKWvPQc+RSxJ14o/L7spgwm36eo0MfhN/KqhGjAkr4hBaty9ajhobDGw5Ooz0uDx9pAo3OJ+/i7v8ebKPiUe5Q+pb3YIHX6axLjpJDeUR1hPfpFbQg3gCJXYPVerrmvkWEmYeocePhQnvsk9C/iD6Pa2bjfuhqijr8EVe49UJfpmoXSCxpL/sOlLLY66zSC5EWzunUFiDUGBUhiB/a3k0uLwplIoPgxZDAb2KGNHcp9NdfSY++HICn3Y+tLqOxMI+II+kJF7mSzhpY7KgamRkOIXEmAYH7fkkJQobY9iENYGlspovPbSrgfo1d3oS7WdoEtt3PbKVDlpm5+j OQM2PQiz +UGo0iLPOaR3+Mv5HNmTGSVLpYu8FyTXTCU+LiZdmoyFzNhVqgKQ2PlrXzqOAfUhTpXorZiN3EbWxhXYuGKm+5QBOJoDp3INOsIx83w6SUhUg13t09MP75oTNNsKMAOPOubr2yNvE7CEqXWFAD17mMmW10Vsnp7UMxC+wIdxOfRZhCPPqU6NfKgZbIC+0gs9cxZ/sGMGaDrcMfSBGgZLhj3QhrD0EKKmvrwDurOYR+CxQuj6hQQkBLEtiI/CVvU73UCgx X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio. 
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Catalin Marinas
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mike Rapoport (IBM)
---
 arch/arm64/include/asm/cacheflush.h |  4 +++-
 arch/arm64/include/asm/pgtable.h    | 25 ++++++++++++++------
 arch/arm64/mm/flush.c               | 36 +++++++++++------------------
 3 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 37185e978aeb..d115451ed263 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
 #define copy_to_user_page copy_to_user_page
 
 /*
- * flush_dcache_page is used when the kernel has written to the page
+ * flush_dcache_folio is used when the kernel has written to the page
  * cache page at virtual address page->virtual.
  *
  * If this page isn't mapped (ie, page_mapping == NULL), or it might
@@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
  */
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio
 
 static __always_inline void icache_inval_all_pou(void)
 {
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9428748f4691..6fd012663a01 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -355,12 +355,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 	set_pte(ptep, pte);
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pte)
-{
-	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
-	return __set_pte_at(mm, addr, ptep, pte);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	page_table_check_ptes_set(mm, addr, ptep, pte, nr);
+
+	for (;;) {
+		__set_pte_at(mm, addr, ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
+#define set_ptes set_ptes
 
 /*
  * Huge pte definitions.
@@ -1059,8 +1068,8 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 /*
  * On AArch64, the cache coherency is handled via the set_pte_at() function.
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				    unsigned long addr, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	/*
 	 * We don't do anything here, so there's a very small chance of
@@ -1069,6 +1078,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 	 */
 }
 
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)
 
 #ifdef CONFIG_ARM64_PA_BITS_52
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 5f9379b3c8c8..deb781af0a3a 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -50,20 +50,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 
 void __sync_icache_dcache(pte_t pte)
 {
-	struct page *page = pte_page(pte);
+	struct folio *folio = page_folio(pte_page(pte));
 
-	/*
-	 * HugeTLB pages are always fully mapped, so only setting head page's
-	 * PG_dcache_clean flag is enough.
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
-
-	if (!test_bit(PG_dcache_clean, &page->flags)) {
-		sync_icache_aliases((unsigned long)page_address(page),
-				    (unsigned long)page_address(page) +
-					    page_size(page));
-		set_bit(PG_dcache_clean, &page->flags);
+	if (!test_bit(PG_dcache_clean, &folio->flags)) {
+		sync_icache_aliases((unsigned long)folio_address(folio),
+				    (unsigned long)folio_address(folio) +
+					    folio_size(folio));
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);
@@ -73,17 +66,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache);
  * it as dirty for later flushing when mapped in user space (if executable,
  * see __sync_icache_dcache).
  */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	/*
-	 * HugeTLB pages are always fully mapped and only head page will be
-	 * set PG_dcache_clean (see comments in __sync_icache_dcache()).
-	 */
-	if (PageHuge(page))
-		page = compound_head(page);
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+EXPORT_SYMBOL(flush_dcache_folio);
 
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
 }
 EXPORT_SYMBOL(flush_dcache_page);

From patchwork Wed Mar 15 05:14:18 2023
X-Patchwork-Id: 13175304
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Guo Ren, linux-csky@vger.kernel.org
Subject: [PATCH v4 10/36] csky: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:18 +0000
Message-Id: <20230315051444.3229621-11-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Guo Ren
Cc: linux-csky@vger.kernel.org
Acked-by: Mike Rapoport (IBM)
---
 arch/csky/abiv1/cacheflush.c         | 32 +++++++++++++++++-----------
 arch/csky/abiv1/inc/abi/cacheflush.h |  2 ++
 arch/csky/abiv2/cacheflush.c         | 32 ++++++++++++++--------------
 arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++--
 arch/csky/include/asm/pgtable.h      |  8 ++++---
 5 files changed, 50 insertions(+), 34 deletions(-)

diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index fb91b069dc69..ba43f6c26b4f 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -14,43 +14,49 @@
 
 #define PG_dcache_clean		PG_arch_1
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;
 
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;
 
-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);
 
-	if (mapping && !page_mapcount(page))
-		clear_bit(PG_dcache_clean, &page->flags);
+	if (mapping && !folio_mapped(folio))
+		clear_bit(PG_dcache_clean, &folio->flags);
 	else {
 		dcache_wbinv_all();
 		if (mapping)
 			icache_inv_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
-		      pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;
 
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	folio = page_folio(pfn_to_page(pfn));
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
 		dcache_wbinv_all();
 
-	if (page_mapping_file(page)) {
+	if (folio_flush_mapping(folio)) {
 		if (vma->vm_flags & VM_EXEC)
 			icache_inv_all();
 	}
 }
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..0d6cb65624c4 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,8 @@
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio
 
 #define flush_cache_mm(mm)			dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()
diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 39c51399dd81..622e5b1b3f8a 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -6,30 +6,30 @@
 #include
 #include
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *pte, unsigned int nr)
 {
-	unsigned long addr;
-	struct page *page;
+	unsigned long pfn = pte_pfn(*pte);
+	struct folio *folio;
+	unsigned int i;
 
-	if (!pfn_valid(pte_pfn(*pte)))
+	if (!pfn_valid(pfn) || is_zero_pfn(pfn))
 		return;
 
-	page = pfn_to_page(pte_pfn(*pte));
-	if (page == ZERO_PAGE(0))
-		return;
+	folio = page_folio(pfn_to_page(pfn));
 
-	if (test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
 		return;
 
-	addr = (unsigned long) kmap_atomic(page);
-
-	dcache_wb_range(addr, addr + PAGE_SIZE);
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		unsigned long addr = (unsigned long) kmap_local_folio(folio,
+								i * PAGE_SIZE);
 
-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
-	kunmap_atomic((void *) addr);
+		dcache_wb_range(addr, addr + PAGE_SIZE);
+		if (vma->vm_flags & VM_EXEC)
+			icache_inv_range(addr, addr + PAGE_SIZE);
+		kunmap_local((void *) addr);
+	}
 }
 
 void flush_icache_deferred(struct mm_struct *mm)
diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..9c728933a776 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -18,11 +18,17 @@
 
 #define PG_dcache_clean		PG_arch_1
 
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index d4042495febc..8cd27104f408 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -28,6 +28,7 @@
 #define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 #define pmd_pfn(pmd)	(pmd_phys(pmd) >> PAGE_SHIFT)
 #define pmd_page(pmd)	(pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))
 #define pte_clear(mm, addr, ptep)	set_pte((ptep), \
@@ -90,7 +91,6 @@ static inline void set_pte(pte_t *p, pte_t pte)
 	/* prevent out of order excution */
 	smp_mb();
 }
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
 
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
@@ -263,8 +263,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);
 
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-		      pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *pte, unsigned int nr);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
	remap_pfn_range(vma, vaddr, pfn, size, prot)

From patchwork Wed Mar 15 05:14:19 2023
X-Patchwork-Id: 13175319
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Brian Cain
Subject: [PATCH v4 11/36] hexagon: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:19 +0000
Message-Id: <20230315051444.3229621-12-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Add PFN_PTE_SHIFT and update_mmu_cache_range().
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Brian Cain
Acked-by: Mike Rapoport (IBM)
---
 arch/hexagon/include/asm/cacheflush.h | 7 +++++--
 arch/hexagon/include/asm/pgtable.h    | 9 +--------
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 6eff0730e6ef..63ca314ede89 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -58,12 +58,15 @@ extern void flush_cache_all_hexagon(void);
  * clean the cache when the PTE is set.
  *
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	/*  generic_ptrace_pokedata doesn't wind up here, does it?  */
 }
 
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page
diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 59393613d086..dd05dd71b8ec 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -338,6 +338,7 @@ static inline int pte_exec(pte_t pte)
 /* __swp_entry_to_pte - extract PTE from swap entry */
 #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
 /* pfn_pte - convert page number and protection value to page table entry */
 #define pfn_pte(pfn, pgprot) __pte((pfn << PAGE_SHIFT) | pgprot_val(pgprot))
 
@@ -345,14 +346,6 @@ static inline int pte_exec(pte_t pte)
 #define pte_pfn(pte)  (pte_val(pte) >> PAGE_SHIFT)
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
 
-/*
- * set_pte_at - update page table and do whatever magic may be
- * necessary to make the underlying hardware/firmware take note.
- *
- * VM may require a virtual instruction to alert the MMU.
- */
-#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
-
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
 	return (unsigned long)__va(pmd_val(pmd) & PAGE_MASK);

From patchwork Wed Mar 15 05:14:20 2023
X-Patchwork-Id: 13175305
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-ia64@vger.kernel.org
Subject: [PATCH v4 12/36] ia64: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:20 +0000
Message-Id: <20230315051444.3229621-13-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to per-folio, which makes arch_dma_mark_clean() and mark_clean() a little more exciting. Signed-off-by: Matthew Wilcox (Oracle) Cc: linux-ia64@vger.kernel.org Acked-by: Mike Rapoport (IBM) --- arch/ia64/hp/common/sba_iommu.c | 26 +++++++++++++++----------- arch/ia64/include/asm/cacheflush.h | 14 ++++++++++---- arch/ia64/include/asm/pgtable.h | 4 ++-- arch/ia64/mm/init.c | 28 +++++++++++++++++++--------- 4 files changed, 46 insertions(+), 26 deletions(-) diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c index 8ad6946521d8..48d475f10003 100644 --- a/arch/ia64/hp/common/sba_iommu.c +++ b/arch/ia64/hp/common/sba_iommu.c @@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba) #endif #ifdef ENABLE_MARK_CLEAN -/** +/* * Since DMA is i-cache coherent, any (complete) pages that were written via * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to * flush them when they get mapped into an executable vm-area. 
*/ -static void -mark_clean (void *addr, size_t size) +static void mark_clean(void *addr, size_t size) { - unsigned long pg_addr, end; - - pg_addr = PAGE_ALIGN((unsigned long) addr); - end = (unsigned long) addr + size; - while (pg_addr + PAGE_SIZE <= end) { - struct page *page = virt_to_page((void *)pg_addr); - set_bit(PG_arch_1, &page->flags); - pg_addr += PAGE_SIZE; + struct folio *folio = virt_to_folio(addr); + ssize_t left = size; + size_t offset = offset_in_folio(folio, addr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); + } + + while (left >= folio_size(folio)) { + set_bit(PG_arch_1, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } #endif diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h index 708c0fa5d975..eac493fa9e0d 100644 --- a/arch/ia64/include/asm/cacheflush.h +++ b/arch/ia64/include/asm/cacheflush.h @@ -13,10 +13,16 @@ #include #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) \ -do { \ - clear_bit(PG_arch_1, &(page)->flags); \ -} while (0) +static inline void flush_dcache_folio(struct folio *folio) +{ + clear_bit(PG_arch_1, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_range flush_icache_range diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h index 21c97e31a28a..5450d59e4fb9 100644 --- a/arch/ia64/include/asm/pgtable.h +++ b/arch/ia64/include/asm/pgtable.h @@ -206,6 +206,7 @@ ia64_phys_addr_valid (unsigned long addr) #define RGN_MAP_SHIFT (PGDIR_SHIFT + PTRS_PER_PGD_SHIFT - 3) #define RGN_MAP_LIMIT ((1UL << RGN_MAP_SHIFT) - PAGE_SIZE) /* per region addr limit */ +#define PFN_PTE_SHIFT PAGE_SHIFT /* * Conversion functions: convert page frame number (pfn) and a protection value to 
a page * table entry (pte). @@ -303,8 +304,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) *ptep = pteval; } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) - /* * Make page protection values cacheable, uncacheable, or write- * combining. Note that "protection" is really a misnomer here as the @@ -396,6 +395,7 @@ pte_same (pte_t a, pte_t b) return pte_val(a) == pte_val(b); } +#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0) #define update_mmu_cache(vma, address, ptep) do { } while (0) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c index 7f5353e28516..b95debabdc2a 100644 --- a/arch/ia64/mm/init.c +++ b/arch/ia64/mm/init.c @@ -50,30 +50,40 @@ void __ia64_sync_icache_dcache (pte_t pte) { unsigned long addr; - struct page *page; + struct folio *folio; - page = pte_page(pte); - addr = (unsigned long) page_address(page); + folio = page_folio(pte_page(pte)); + addr = (unsigned long)folio_address(folio); - if (test_bit(PG_arch_1, &page->flags)) + if (test_bit(PG_arch_1, &folio->flags)) return; /* i-cache is already coherent with d-cache */ - flush_icache_range(addr, addr + page_size(page)); - set_bit(PG_arch_1, &page->flags); /* mark page as clean */ + flush_icache_range(addr, addr + folio_size(folio)); + set_bit(PG_arch_1, &folio->flags); /* mark page as clean */ } /* - * Since DMA is i-cache coherent, any (complete) pages that were written via + * Since DMA is i-cache coherent, any (complete) folios that were written via * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to * flush them when they get mapped into an executable vm-area. 
*/ void arch_dma_mark_clean(phys_addr_t paddr, size_t size) { unsigned long pfn = PHYS_PFN(paddr); + struct folio *folio = page_folio(pfn_to_page(pfn)); + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); - do { + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); + } + + while (left >= (ssize_t)folio_size(folio)) { set_bit(PG_arch_1, &pfn_to_page(pfn)->flags); - } while (++pfn <= PHYS_PFN(paddr + size - 1)); + left -= folio_size(folio); + folio = folio_next(folio); + } } inline void From patchwork Wed Mar 15 05:14:21 2023
From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huacai Chen , WANG Xuerui , loongarch@lists.linux.dev Subject: [PATCH v4 13/36] loongarch: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:21 +0000 Message-Id: <20230315051444.3229621-14-willy@infradead.org> In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org>
Add update_mmu_cache_range() and change _PFN_SHIFT to PFN_PTE_SHIFT. It would probably be more efficient to implement __update_tlb() by flushing the entire folio instead of calling __update_tlb() N times, but I'll leave that for someone who understands the architecture better. Signed-off-by: Matthew Wilcox (Oracle) Cc: Huacai Chen Cc: WANG Xuerui Cc: loongarch@lists.linux.dev Acked-by: Mike Rapoport (IBM) --- arch/loongarch/include/asm/cacheflush.h | 2 ++ arch/loongarch/include/asm/pgtable-bits.h | 4 ++-- arch/loongarch/include/asm/pgtable.h | 28 ++++++++++++----------- arch/loongarch/mm/pgtable.c | 2 +- arch/loongarch/mm/tlb.c | 2 +- 5 files changed, 21 insertions(+), 17 deletions(-) diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h index 0681788eb474..7907eb42bfbd 100644 --- a/arch/loongarch/include/asm/cacheflush.h +++ b/arch/loongarch/include/asm/cacheflush.h @@ -47,8 +47,10 @@ void local_flush_icache_range(unsigned long start, unsigned long end); #define flush_cache_vmap(start, end) do { } while (0) #define flush_cache_vunmap(start, end) do { } while (0) #define flush_icache_page(vma, page) do { } while (0) +#define flush_icache_pages(vma, page) do { } while (0) #define flush_icache_user_page(vma, page, addr, len) do { } while (0) #define flush_dcache_page(page) do { } while (0) +#define flush_dcache_folio(folio) do { } while (0) #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) diff --git a/arch/loongarch/include/asm/pgtable-bits.h
b/arch/loongarch/include/asm/pgtable-bits.h index 8b98d22a145b..a1eb2e25446b 100644 --- a/arch/loongarch/include/asm/pgtable-bits.h +++ b/arch/loongarch/include/asm/pgtable-bits.h @@ -48,12 +48,12 @@ #define _PAGE_NO_EXEC (_ULCAST_(1) << _PAGE_NO_EXEC_SHIFT) #define _PAGE_RPLV (_ULCAST_(1) << _PAGE_RPLV_SHIFT) #define _CACHE_MASK (_ULCAST_(3) << _CACHE_SHIFT) -#define _PFN_SHIFT (PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT) +#define PFN_PTE_SHIFT (PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT) #define _PAGE_USER (PLV_USER << _PAGE_PLV_SHIFT) #define _PAGE_KERN (PLV_KERN << _PAGE_PLV_SHIFT) -#define _PFN_MASK (~((_ULCAST_(1) << (_PFN_SHIFT)) - 1) & \ +#define _PFN_MASK (~((_ULCAST_(1) << (PFN_PTE_SHIFT)) - 1) & \ ((_ULCAST_(1) << (_PAGE_PFN_END_SHIFT)) - 1)) /* diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h index d28fb9dbec59..13aad0003e9a 100644 --- a/arch/loongarch/include/asm/pgtable.h +++ b/arch/loongarch/include/asm/pgtable.h @@ -237,9 +237,9 @@ extern pmd_t mk_pmd(struct page *page, pgprot_t prot); extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd); #define pte_page(x) pfn_to_page(pte_pfn(x)) -#define pte_pfn(x) ((unsigned long)(((x).pte & _PFN_MASK) >> _PFN_SHIFT)) -#define pfn_pte(pfn, prot) __pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot)) -#define pfn_pmd(pfn, prot) __pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot)) +#define pte_pfn(x) ((unsigned long)(((x).pte & _PFN_MASK) >> PFN_PTE_SHIFT)) +#define pfn_pte(pfn, prot) __pte(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) +#define pfn_pmd(pfn, prot) __pmd(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) /* * Initialize a new pgd / pud / pmd table with invalid pointers. 
@@ -334,12 +334,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) } } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) -{ - set_pte(ptep, pteval); -} - static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) { /* Preserve global status for the pair */ @@ -445,11 +439,19 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) extern void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep); -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { - __update_tlb(vma, address, ptep); + for (;;) { + __update_tlb(vma, address, ptep); + if (--nr == 0) + break; + address += PAGE_SIZE; + ptep++; + } } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define __HAVE_ARCH_UPDATE_MMU_TLB #define update_mmu_tlb update_mmu_cache @@ -462,7 +464,7 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, static inline unsigned long pmd_pfn(pmd_t pmd) { - return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT; + return (pmd_val(pmd) & _PFN_MASK) >> PFN_PTE_SHIFT; } #ifdef CONFIG_TRANSPARENT_HUGEPAGE diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c index 36a6dc0148ae..1260cf30e3ee 100644 --- a/arch/loongarch/mm/pgtable.c +++ b/arch/loongarch/mm/pgtable.c @@ -107,7 +107,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot) { pmd_t pmd; - pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot); + pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot); return pmd; } diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c index 8bad6b0cff59..73652930b268 100644 --- a/arch/loongarch/mm/tlb.c +++ b/arch/loongarch/mm/tlb.c @@ -246,7 +246,7 @@ static void output_pgtable_bits_defines(void) 
pr_define("_PAGE_WRITE_SHIFT %d\n", _PAGE_WRITE_SHIFT); pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT); pr_define("_PAGE_NO_EXEC_SHIFT %d\n", _PAGE_NO_EXEC_SHIFT); - pr_define("_PFN_SHIFT %d\n", _PFN_SHIFT); + pr_define("PFN_PTE_SHIFT %d\n", PFN_PTE_SHIFT); pr_debug("\n"); } From patchwork Wed Mar 15 05:14:22 2023
From: "Matthew Wilcox
(Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Geert Uytterhoeven , linux-m68k@lists.linux-m68k.org Subject: [PATCH v4 14/36] m68k: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:22 +0000 Message-Id: <20230315051444.3229621-15-willy@infradead.org> In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org>
Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Signed-off-by: Matthew Wilcox (Oracle) Cc: Geert Uytterhoeven Cc: linux-m68k@lists.linux-m68k.org Acked-by: Mike Rapoport (IBM) Tested-by: Geert Uytterhoeven --- arch/m68k/include/asm/cacheflush_mm.h | 27 ++++++++++++++++-------- arch/m68k/include/asm/mcf_pgtable.h | 1 + arch/m68k/include/asm/motorola_pgtable.h | 1 + arch/m68k/include/asm/pgtable_mm.h | 9 ++++---- arch/m68k/include/asm/sun3_pgtable.h | 1 + arch/m68k/mm/motorola.c | 2 +- 6 files changed, 27 insertions(+), 14 deletions(-) diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h index 1ac55e7b47f0..88eb85e81ef6 100644 --- a/arch/m68k/include/asm/cacheflush_mm.h +++ b/arch/m68k/include/asm/cacheflush_mm.h @@ -220,24 +220,29 @@ static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vm /* Push the page at kernel virtual address and clear the icache */ /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */ -static inline void __flush_page_to_ram(void *vaddr) +static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr) { if (CPU_IS_COLDFIRE) { unsigned long addr, start, end; addr = ((unsigned long) vaddr) & ~(PAGE_SIZE - 1); start = addr & ICACHE_SET_MASK; - end = (addr + PAGE_SIZE - 1) & ICACHE_SET_MASK; + end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK; if (start > end) { flush_cf_bcache(0, end); end = ICACHE_MAX_ADDR; } flush_cf_bcache(start, end); } else if (CPU_IS_040_OR_060) { - __asm__ __volatile__("nop\n\t" - ".chip 68040\n\t" - "cpushp %%bc,(%0)\n\t" - ".chip 68k" - : : "a" (__pa(vaddr))); + unsigned long paddr = __pa(vaddr); + + do { + __asm__ __volatile__("nop\n\t" + ".chip 68040\n\t" + "cpushp %%bc,(%0)\n\t" + ".chip 68k" + : : "a" (paddr)); + paddr += PAGE_SIZE; + } while (--nr); } else { unsigned long _tmp; __asm__ __volatile__("movec
%%cacr,%0\n\t" @@ -249,10 +254,14 @@ static inline void __flush_page_to_ram(void *vaddr) } #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) __flush_page_to_ram(page_address(page)) +#define flush_dcache_page(page) __flush_pages_to_ram(page_address(page), 1) +#define flush_dcache_folio(folio) \ + __flush_pages_to_ram(folio_address(folio), folio_nr_pages(folio)) #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) -#define flush_icache_page(vma, page) __flush_page_to_ram(page_address(page)) +#define flush_icache_pages(vma, page, nr) \ + __flush_pages_to_ram(page_address(page), nr) +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h index 13741c1245e1..1414b607eff4 100644 --- a/arch/m68k/include/asm/mcf_pgtable.h +++ b/arch/m68k/include/asm/mcf_pgtable.h @@ -292,6 +292,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) return pte; } +#define PFN_PTE_SHIFT PAGE_SHIFT #define pmd_pfn(pmd) (pmd_val(pmd) >> PAGE_SHIFT) #define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)) diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h index ec0dc19ab834..38d5e5edc3e1 100644 --- a/arch/m68k/include/asm/motorola_pgtable.h +++ b/arch/m68k/include/asm/motorola_pgtable.h @@ -112,6 +112,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp) #define pte_present(pte) (pte_val(pte) & (_PAGE_PRESENT | _PAGE_PROTNONE)) #define pte_clear(mm,addr,ptep) ({ pte_val(*(ptep)) = 0; }) +#define PFN_PTE_SHIFT PAGE_SHIFT #define pte_page(pte) virt_to_page(__va(pte_val(pte))) #define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT) #define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot)) diff --git 
a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h index b93c41fe2067..8c2db20abdb6 100644 --- a/arch/m68k/include/asm/pgtable_mm.h +++ b/arch/m68k/include/asm/pgtable_mm.h @@ -31,8 +31,6 @@ do{ \ *(pteptr) = (pteval); \ } while(0) -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) - /* PMD_SHIFT determines the size of the area a second-level page table can map */ #if CONFIG_PGTABLE_LEVELS == 3 @@ -138,11 +136,14 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode); * tables contain all the necessary information. The Sun3 does, but * they are updated on demand. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + #endif /* !__ASSEMBLY__ */ /* MMU-specific headers */ diff --git a/arch/m68k/include/asm/sun3_pgtable.h b/arch/m68k/include/asm/sun3_pgtable.h index e582b0484a55..feae73b3b342 100644 --- a/arch/m68k/include/asm/sun3_pgtable.h +++ b/arch/m68k/include/asm/sun3_pgtable.h @@ -105,6 +105,7 @@ static inline void pte_clear (struct mm_struct *mm, unsigned long addr, pte_t *p pte_val (*ptep) = 0; } +#define PFN_PTE_SHIFT 0 #define pte_pfn(pte) (pte_val(pte) & SUN3_PAGE_PGNUM_MASK) #define pfn_pte(pfn, pgprot) \ ({ pte_t __pte; pte_val(__pte) = pfn | pgprot_val(pgprot); __pte; }) diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index 911301224078..790666c6d146 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -81,7 +81,7 @@ static inline void cache_page(void *vaddr) void mmu_page_ctor(void *page) { - __flush_page_to_ram(page); + __flush_pages_to_ram(page, 1); flush_tlb_kernel_page(page); nocache_page(page); } From patchwork Wed Mar 15 05:14:23 2023
From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Michal Simek Subject: [PATCH v4 15/36] microblaze: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:23 +0000 Message-Id: <20230315051444.3229621-16-willy@infradead.org> In-Reply-To:
<20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org> MIME-Version: 1.0 X-Stat-Signature: fmhrcqnc697xx13ijnwsw7qhxnfti9zz X-Rspam-User: X-Rspamd-Queue-Id: 240061A0009 X-Rspamd-Server: rspam06 X-HE-Tag: 1678857289-40141 X-HE-Meta: U2FsdGVkX18XkrFotWHoitZLe7QZxl7hUqR9zF9P6N+oTcR/R+3t6ZNl+YJE/0tWhzS3n7/MtYxuJE+Qg8npuuCistmMTytQgzNL5KAB6FSjhBpZA9eSBPAZYTZGt0xLgUo0JLT6tk9/N/BRfeDhA+GXHxwaT+89a9Q4cQ8WZSxA5sqivcI0X/w+cd0F+lNu6UmLCy5xwfnrXDWnIRgx7jhS61a1N4/T+CYbICI2vqnBiPjlj0TsKyNix8tkqOY11RFWs8YrK8g3PlCgB9C6aJlQFMjKXh79kuphBuigAmmWQ0/w/17yUgfJmBmQ1QerZ7aLn41WQrDHzAnwEcDB00lZ7mQpVfbNJMa69TMex6SGYDVxLMNlCultBtOvgfQ56zl8NO1odnUvoPrdK84ngUkGzb9zJFtmzVSqHSfSEESgq93ZbNoGO4U4PeDsV0wkhQm3rAYlPOBgg4zRMX9QN6w9BtkeJ3U+wUtjOft4t+kYytvSwiMxuSBV/cMVw0y6FOZA/mr+zHafYYmBIFUbQFOMqQT4qzs+ynqnBadY4XZ/gxCGg/bMvaLSyG0Q87gePhpaYeSzVTIA4QDL7ScXDkbMqfYhuhaIcI1D/CvJn4Ky1/Mxwc4xpEbUSVHVXu5AgVq+ZwpHOMIlouqs+KwyJjV/q23pzDuF/2KWTXdYXKeiOgD2i8O1bYWz/YUuTpZ1zCCI9uZHFRA3mv5mc95PGnQmqvj9yyCBFgF7ZYaoRzbNuwD8ihW4khdZ/cY7/9EUFs0kAAaxfO5KGMkKhV0zVmTAwQ0PdM6MGaVw+aOm3udW4PmSPpW1TUlLYM9AFUaaNpodeipT/lBOpgjCZdbQkGV0ODEnWX1j8VcCKiYeXsnzW+ims3Xl4HOz4FXBOv3Ii1Q3O4lOTa+S+9SuU1bcSsxI6VQP4IgHvHXY9D5wFD1vAVO+4ntq+S4XvBymqU5hunj18ACeSl9NrlORTre wY5tYi45 ge0lIw816xpqCwGFNgc9tVvSNwo2IJ87sT7CuU6+VTsBkbMmO63UAI9p8YT1LB09egMwrVr6UqvoQ+SMZpYjpzjpriyiokB3qH4rDnZUU78NAJ67v5HIsUbKDUuvTJOLycmrpL1LR2OTrM8sHeXUK1XccfTdQlAwOGUpXBODYdcBqLPW5j56YsFL+h4GMHRlChsPAkijxp0qXl55zU4Wptel5UkKoa5XsOesupaQKdppO4AdN+pqCTAZqSceSpwzrTgCy X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Rename PFN_SHIFT_OFFSET to PTE_PFN_SHIFT. Change the calling convention for set_pte() to be the same as other architectures. Add update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). 
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Michal Simek
Acked-by: Mike Rapoport (IBM)
---
 arch/microblaze/include/asm/cacheflush.h |  8 ++++++++
 arch/microblaze/include/asm/pgtable.h    | 15 ++++-----------
 arch/microblaze/include/asm/tlbflush.h   |  4 +++-
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
index 39f8fb6768d8..e6641ff98cb3 100644
--- a/arch/microblaze/include/asm/cacheflush.h
+++ b/arch/microblaze/include/asm/cacheflush.h
@@ -74,6 +74,14 @@ do { \
 	flush_dcache_range((unsigned) (addr), (unsigned) (addr) + PAGE_SIZE); \
 } while (0);
 
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	unsigned long addr = folio_pfn(folio) << PAGE_SHIFT;
+
+	flush_dcache_range(addr, addr + folio_size(folio));
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define flush_cache_page(vma, vmaddr, pfn) \
 	flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);
 
diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index d1b8272abcd9..19fcd7f8517e 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -230,12 +230,12 @@ extern unsigned long empty_zero_page[1024];
 #define pte_page(x)	(mem_map + (unsigned long) \
 				((pte_val(x) - memory_start) >> PAGE_SHIFT))
 
-#define PFN_SHIFT_OFFSET	(PAGE_SHIFT)
+#define PTE_PFN_SHIFT	PAGE_SHIFT
 
-#define pte_pfn(x)		(pte_val(x) >> PFN_SHIFT_OFFSET)
+#define pte_pfn(x)		(pte_val(x) >> PTE_PFN_SHIFT)
 
 #define pfn_pte(pfn, prot) \
-	__pte(((pte_basic_t)(pfn) << PFN_SHIFT_OFFSET) | pgprot_val(prot))
+	__pte(((pte_basic_t)(pfn) << PTE_PFN_SHIFT) | pgprot_val(prot))
 
 #ifndef __ASSEMBLY__
 /*
@@ -330,14 +330,7 @@ static inline unsigned long pte_update(pte_t *p, unsigned long clr,
 /*
  * set_pte stores a linux PTE into the linux page table.
  */
-static inline void set_pte(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
-{
-	*ptep = pte;
-}
-
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	*ptep = pte;
 }
diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h
index 2038168ed128..1b179e5e9062 100644
--- a/arch/microblaze/include/asm/tlbflush.h
+++ b/arch/microblaze/include/asm/tlbflush.h
@@ -33,7 +33,9 @@ static inline void local_flush_tlb_range(struct vm_area_struct *vma,
 
 #define flush_tlb_kernel_range(start, end)	do { } while (0)
 
-#define update_mmu_cache(vma, addr, ptep)	do { } while (0)
+#define update_mmu_cache_range(vma, addr, ptep, nr)	do { } while (0)
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
 
 #define flush_tlb_all local_flush_tlb_all
 #define flush_tlb_mm local_flush_tlb_mm

From patchwork Wed Mar 15 05:14:24 2023
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Thomas Bogendoerfer, linux-mips@vger.kernel.org
Subject: [PATCH v4 16/36] mips: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:24 +0000
Message-Id: <20230315051444.3229621-17-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Rename _PFN_SHIFT to PFN_PTE_SHIFT.  Convert a few places to call
set_pte() instead of set_pte_at().  Add set_ptes(),
update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to
per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Thomas Bogendoerfer
Cc: linux-mips@vger.kernel.org
Acked-by: Mike Rapoport (IBM)
---
 arch/mips/bcm47xx/prom.c             |  2 +-
 arch/mips/include/asm/cacheflush.h   | 32 ++++++++++------
 arch/mips/include/asm/pgtable-32.h   | 10 ++---
 arch/mips/include/asm/pgtable-64.h   |  6 +--
 arch/mips/include/asm/pgtable-bits.h |  6 +--
 arch/mips/include/asm/pgtable.h      | 44 +++++++++++++---------
 arch/mips/mm/c-r4k.c                 |  5 ++-
 arch/mips/mm/cache.c                 | 56 ++++++++++++++--------------
 arch/mips/mm/init.c                  | 21 +++++++----
 arch/mips/mm/pgtable-32.c            |  2 +-
 arch/mips/mm/pgtable-64.c            |  2 +-
 arch/mips/mm/tlbex.c                 |  2 +-
 12 files changed, 107 insertions(+), 81 deletions(-)

diff --git a/arch/mips/bcm47xx/prom.c b/arch/mips/bcm47xx/prom.c
index a9bea411d928..99a1ba5394e0 100644
--- a/arch/mips/bcm47xx/prom.c
+++ b/arch/mips/bcm47xx/prom.c
@@ -116,7 +116,7 @@ void __init prom_init(void)
 #if defined(CONFIG_BCM47XX_BCMA) && defined(CONFIG_HIGHMEM)
 
 #define EXTVBASE	0xc0000000
-#define ENTRYLO(x)	((pte_val(pfn_pte((x) >> _PFN_SHIFT, PAGE_KERNEL_UNCACHED)) >> 6) | 1)
+#define ENTRYLO(x)	((pte_val(pfn_pte((x) >> PFN_PTE_SHIFT, PAGE_KERNEL_UNCACHED)) >> 6) | 1)
 
 #include
diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index b3dc9c589442..2683cade42ef 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -36,12 +36,12 @@
  */
 #define PG_dcache_dirty			PG_arch_1
 
-#define Page_dcache_dirty(page)		\
-	test_bit(PG_dcache_dirty, &(page)->flags)
-#define SetPageDcacheDirty(page)	\
-	set_bit(PG_dcache_dirty, &(page)->flags)
-#define ClearPageDcacheDirty(page)	\
-	clear_bit(PG_dcache_dirty, &(page)->flags)
+#define folio_test_dcache_dirty(folio)		\
+	test_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_set_dcache_dirty(folio)	\
+	set_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_clear_dcache_dirty(folio)	\
+	clear_bit(PG_dcache_dirty, &(folio)->flags)
 
 extern void (*flush_cache_all)(void);
 extern void (*__flush_cache_all)(void);
@@ -50,15 +50,24 @@ extern void (*flush_cache_mm)(struct mm_struct *mm);
 extern void (*flush_cache_range)(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end);
 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
-extern void __flush_dcache_page(struct page *page);
+extern void __flush_dcache_pages(struct page *page, unsigned int nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (cpu_has_dc_aliases)
+		__flush_dcache_pages(&folio->page, folio_nr_pages(folio));
+	else if (!cpu_has_ic_fills_f_dc)
+		folio_set_dcache_dirty(folio);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 static inline void flush_dcache_page(struct page *page)
 {
 	if (cpu_has_dc_aliases)
-		__flush_dcache_page(page);
+		__flush_dcache_pages(page, 1);
 	else if (!cpu_has_ic_fills_f_dc)
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
@@ -73,10 +82,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }
 
-static inline void flush_icache_page(struct vm_area_struct *vma,
-	struct page *page)
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+	struct page *page, unsigned int nr)
 {
 }
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
 
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);
diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index ba0016709a1a..0e196650f4f4 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -153,7 +153,7 @@ static inline void pmd_clear(pmd_t *pmdp)
 #if defined(CONFIG_XPA)
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 40
-#define pte_pfn(x)		(((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
+#define pte_pfn(x)		(((unsigned long)((x).pte_high >> PFN_PTE_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
 static inline pte_t
 pfn_pte(unsigned long pfn, pgprot_t prot)
 {
@@ -161,7 +161,7 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
 	pte.pte_low = (pfn >> _PAGE_PRESENT_SHIFT) |
 				(pgprot_val(prot) & ~_PFNX_MASK);
-	pte.pte_high = (pfn << _PFN_SHIFT) |
+	pte.pte_high = (pfn << PFN_PTE_SHIFT) |
 				(pgprot_val(prot) & ~_PFN_MASK);
 	return pte;
 }
@@ -184,9 +184,9 @@ static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
 #else
 
 #define MAX_POSSIBLE_PHYSMEM_BITS 32
-#define pte_pfn(x)		((unsigned long)((x).pte >> _PFN_SHIFT))
-#define pfn_pte(pfn, prot)	__pte(((unsigned long long)(pfn) << _PFN_SHIFT) | pgprot_val(prot))
-#define pfn_pmd(pfn, prot)	__pmd(((unsigned long long)(pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pte_pfn(x)		((unsigned long)((x).pte >> PFN_PTE_SHIFT))
+#define pfn_pte(pfn, prot)	__pte(((unsigned long long)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot)	__pmd(((unsigned long long)(pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
 #endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
 
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 98e24e3e7f2b..20ca48c1b606 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -298,9 +298,9 @@ static inline void pud_clear(pud_t *pudp)
 
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
 
-#define pte_pfn(x)		((unsigned long)((x).pte >> _PFN_SHIFT))
-#define pfn_pte(pfn, prot)	__pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
-#define pfn_pmd(pfn, prot)	__pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pte_pfn(x)		((unsigned long)((x).pte >> PFN_PTE_SHIFT))
+#define pfn_pte(pfn, prot)	__pte(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot)	__pmd(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
 
 #ifndef __PAGETABLE_PMD_FOLDED
 static inline pmd_t *pud_pgtable(pud_t pud)
diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 2362842ee2b5..744abba9111f 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -182,10 +182,10 @@ enum pgtable_bits {
 #if defined(CONFIG_CPU_R3K_TLB)
 # define _CACHE_UNCACHED	(1 << _CACHE_UNCACHED_SHIFT)
 # define _CACHE_MASK		_CACHE_UNCACHED
-# define _PFN_SHIFT		PAGE_SHIFT
+# define PFN_PTE_SHIFT		PAGE_SHIFT
 #else
 # define _CACHE_MASK		(7 << _CACHE_SHIFT)
-# define _PFN_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+# define PFN_PTE_SHIFT		(PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
 #endif
 
 #ifndef _PAGE_NO_EXEC
@@ -195,7 +195,7 @@ enum pgtable_bits {
 #define _PAGE_SILENT_READ	_PAGE_VALID
 #define _PAGE_SILENT_WRITE	_PAGE_DIRTY
 
-#define _PFN_MASK		(~((1 << (_PFN_SHIFT)) - 1))
+#define _PFN_MASK		(~((1 << (PFN_PTE_SHIFT)) - 1))
 
 /*
  * The final layouts of the PTE bits are:
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 574fa14ac8b2..cfcd6a8ba8ef 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -66,7 +66,7 @@ extern void paging_init(void);
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
-	return pmd_val(pmd) >> _PFN_SHIFT;
+	return pmd_val(pmd) >> PFN_PTE_SHIFT;
 }
 
 #ifndef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -105,9 +105,6 @@ do {									\
 	}								\
 } while(0)
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval);
-
 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
 
 #ifdef CONFIG_XPA
@@ -157,7 +154,7 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 		null.pte_low = null.pte_high = _PAGE_GLOBAL;
 	}
 
-	set_pte_at(mm, addr, ptep, null);
+	set_pte(ptep, null);
 	htw_start();
 }
 #else
@@ -196,28 +193,41 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 #if !defined(CONFIG_CPU_R3K_TLB)
 	/* Preserve global status for the pair */
 	if (pte_val(*ptep_buddy(ptep)) & _PAGE_GLOBAL)
-		set_pte_at(mm, addr, ptep, __pte(_PAGE_GLOBAL));
+		set_pte(ptep, __pte(_PAGE_GLOBAL));
 	else
 #endif
-		set_pte_at(mm, addr, ptep, __pte(0));
+		set_pte(ptep, __pte(0));
 	htw_start();
 }
 #endif
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+	bool do_sync = false;
 
-	if (!pte_present(pteval))
-		goto cache_sync_done;
+	for (i = 0; i < nr; i++) {
+		if (!pte_present(pte))
+			continue;
+		if (pte_present(ptep[i]) &&
+		    (pte_pfn(ptep[i]) == pte_pfn(pte)))
+			continue;
+		do_sync = true;
+	}
 
-	if (pte_present(*ptep) && (pte_pfn(*ptep) == pte_pfn(pteval)))
-		goto cache_sync_done;
+	if (do_sync)
+		__update_cache(addr, pte);
 
-	__update_cache(addr, pteval);
-cache_sync_done:
-	set_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << PFN_PTE_SHIFT;
+	}
 }
+#define set_ptes set_ptes
 
 /*
  * (pmds are folded into puds so this doesn't get actually called,
@@ -486,7 +496,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 					pte_t entry, int dirty)
 {
 	if (!pte_same(*ptep, entry))
-		set_pte_at(vma->vm_mm, address, ptep, entry);
+		set_pte(ptep, entry);
 	/*
 	 * update_mmu_cache will unconditionally execute, handling both
 	 * the case that the PTE changed and the spurious fault case.
diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index a549fa98c2f4..7d2a42f0cffd 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -679,13 +679,14 @@ static inline void local_r4k_flush_cache_page(void *args)
 	if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID))
 		vaddr = NULL;
 	else {
+		struct folio *folio = page_folio(page);
 		/*
 		 * Use kmap_coherent or kmap_atomic to do flushes for
 		 * another ASID than the current one.
 		 */
 		map_coherent = (cpu_has_dc_aliases &&
-				page_mapcount(page) &&
-				!Page_dcache_dirty(page));
+				folio_mapped(folio) &&
+				!folio_test_dcache_dirty(folio));
 		if (map_coherent)
 			vaddr = kmap_coherent(page, addr);
 		else
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 11b3e7ddafd5..0668435521fc 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -82,13 +82,15 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
 	return 0;
 }
 
-void __flush_dcache_page(struct page *page)
+void __flush_dcache_pages(struct page *page, unsigned int nr)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct folio *folio = page_folio(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 	unsigned long addr;
+	unsigned int i;
 
 	if (mapping && !mapping_mapped(mapping)) {
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(folio);
 		return;
 	}
 
@@ -97,25 +99,21 @@ void __flush_dcache_page(struct page *page)
 	 * case is for exec env/arg pages and those are %99 certainly going to
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
-	if (PageHighMem(page))
-		addr = (unsigned long)kmap_atomic(page);
-	else
-		addr = (unsigned long)page_address(page);
-
-	flush_data_cache_page(addr);
-
-	if (PageHighMem(page))
-		kunmap_atomic((void *)addr);
+	for (i = 0; i < nr; i++) {
+		addr = (unsigned long)kmap_local_page(page + i);
+		flush_data_cache_page(addr);
+		kunmap_local((void *)addr);
+	}
 }
-
-EXPORT_SYMBOL(__flush_dcache_page);
+EXPORT_SYMBOL(__flush_dcache_pages);
 
 void __flush_anon_page(struct page *page, unsigned long vmaddr)
 {
 	unsigned long addr = (unsigned long) page_address(page);
+	struct folio *folio = page_folio(page);
 
 	if (pages_do_alias(addr, vmaddr)) {
-		if (page_mapcount(page) && !Page_dcache_dirty(page)) {
+		if (folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 			void *kaddr;
 
 			kaddr = kmap_coherent(page, vmaddr);
@@ -130,27 +128,29 @@ EXPORT_SYMBOL(__flush_anon_page);
 
 void __update_cache(unsigned long address, pte_t pte)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned long pfn, addr;
 	int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc;
+	unsigned int i;
 
 	pfn = pte_pfn(pte);
 	if (unlikely(!pfn_valid(pfn)))
 		return;
-	page = pfn_to_page(pfn);
-	if (Page_dcache_dirty(page)) {
-		if (PageHighMem(page))
-			addr = (unsigned long)kmap_atomic(page);
-		else
-			addr = (unsigned long)page_address(page);
-
-		if (exec || pages_do_alias(addr, address & PAGE_MASK))
-			flush_data_cache_page(addr);
-		if (PageHighMem(page))
-			kunmap_atomic((void *)addr);
+
+	folio = page_folio(pfn_to_page(pfn));
+	address &= PAGE_MASK;
+	address -= offset_in_folio(folio, pfn << PAGE_SHIFT);
+
+	if (folio_test_dcache_dirty(folio)) {
+		for (i = 0; i < folio_nr_pages(folio); i++) {
+			addr = (unsigned long)kmap_local_folio(folio, i);
-			ClearPageDcacheDirty(page);
+			if (exec || pages_do_alias(addr, address))
+				flush_data_cache_page(addr);
+			kunmap_local((void *)addr);
+			address += PAGE_SIZE;
+		}
+		folio_clear_dcache_dirty(folio);
 	}
 }
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 5a8002839550..5dcb525a8995 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -88,7 +88,7 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
 	pte_t pte;
 	int tlbidx;
 
-	BUG_ON(Page_dcache_dirty(page));
+	BUG_ON(folio_test_dcache_dirty(page_folio(page)));
 
 	preempt_disable();
 	pagefault_disable();
@@ -169,11 +169,12 @@ void kunmap_coherent(void)
 void copy_user_highpage(struct page *to, struct page *from,
 	unsigned long vaddr, struct vm_area_struct *vma)
 {
+	struct folio *src = page_folio(from);
 	void *vfrom, *vto;
 
 	vto = kmap_atomic(to);
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(from) && !Page_dcache_dirty(from)) {
+	    folio_mapped(src) && !folio_test_dcache_dirty(src)) {
 		vfrom = kmap_coherent(from, vaddr);
 		copy_page(vto, vfrom);
 		kunmap_coherent();
@@ -194,15 +195,17 @@ void copy_to_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(vto, src, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 	if (vma->vm_flags & VM_EXEC)
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
@@ -212,15 +215,17 @@ void copy_from_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(dst, vfrom, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(copy_from_user_page);
@@ -448,10 +453,10 @@ static inline void __init mem_init_free_highmem(void)
 void __init mem_init(void)
 {
 	/*
-	 * When _PFN_SHIFT is greater than PAGE_SHIFT we won't have enough PTE
+	 * When PFN_PTE_SHIFT is greater than PAGE_SHIFT we won't have enough PTE
 	 * bits to hold a full 32b physical address on MIPS32 systems.
 	 */
-	BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (_PFN_SHIFT > PAGE_SHIFT));
+	BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (PFN_PTE_SHIFT > PAGE_SHIFT));
 
 #ifdef CONFIG_HIGHMEM
 	max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
diff --git a/arch/mips/mm/pgtable-32.c b/arch/mips/mm/pgtable-32.c
index f57fb69472f8..84dd5136d53a 100644
--- a/arch/mips/mm/pgtable-32.c
+++ b/arch/mips/mm/pgtable-32.c
@@ -35,7 +35,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
 {
 	pmd_t pmd;
 
-	pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot);
+	pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot);
 
 	return pmd;
 }
diff --git a/arch/mips/mm/pgtable-64.c b/arch/mips/mm/pgtable-64.c
index b4386a0e2ef8..c76d21f7dffb 100644
--- a/arch/mips/mm/pgtable-64.c
+++ b/arch/mips/mm/pgtable-64.c
@@ -93,7 +93,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
 {
 	pmd_t pmd;
 
-	pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot);
+	pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot);
 
 	return pmd;
 }
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 80e05ee98d62..1393a11af539 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -253,7 +253,7 @@ static void output_pgtable_bits_defines(void)
 	pr_define("_PAGE_GLOBAL_SHIFT %d\n", _PAGE_GLOBAL_SHIFT);
 	pr_define("_PAGE_VALID_SHIFT %d\n", _PAGE_VALID_SHIFT);
 	pr_define("_PAGE_DIRTY_SHIFT %d\n", _PAGE_DIRTY_SHIFT);
-	pr_define("_PFN_SHIFT %d\n", _PFN_SHIFT);
+	pr_define("PFN_PTE_SHIFT %d\n", PFN_PTE_SHIFT);
 	pr_debug("\n");
 }

From patchwork Wed Mar 15 05:14:25 2023
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Dinh Nguyen
Subject: [PATCH v4 17/36] nios2: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:25 +0000
Message-Id: <20230315051444.3229621-18-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Dinh Nguyen
Acked-by: Mike Rapoport (IBM)
---
 arch/nios2/include/asm/cacheflush.h |  6 ++-
 arch/nios2/include/asm/pgtable.h    | 28 ++++++-----
 arch/nios2/mm/cacheflush.c          | 61 ++++++++++++++++-------------
 3 files changed, 58 insertions(+), 37 deletions(-)

diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index d0b71dd71287..8624ca83cffe 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -29,9 +29,13 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 	unsigned long pfn);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio

 extern void flush_icache_range(unsigned long start, unsigned long end);
-extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr);
+#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)

 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)
diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 0f5c2564e9f5..4bb5f4dfff82 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -178,14 +178,21 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	*ptep = pteval;
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	unsigned long paddr = (unsigned long)page_to_virt(pte_page(pteval));
-
-	flush_dcache_range(paddr, paddr + PAGE_SIZE);
-	set_pte(ptep, pteval);
+	unsigned long paddr = (unsigned long)page_to_virt(pte_page(pte));
+
+	flush_dcache_range(paddr, paddr + nr * PAGE_SIZE);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1;
+	}
 }
+#define set_ptes set_ptes

 static inline int pmd_none(pmd_t pmd)
 {
@@ -202,7 +209,7 @@ static inline void pte_clear(struct mm_struct *mm,

 	pte_val(null) = (addr >> PAGE_SHIFT) & 0xf;

-	set_pte_at(mm, addr, ptep, null);
+	set_pte(ptep, null);
 }

 /*
@@ -273,7 +280,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 extern void __init paging_init(void);
 extern void __init mmu_init(void);

-extern void update_mmu_cache(struct vm_area_struct *vma,
-			     unsigned long address, pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr);
+
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)

 #endif /* _ASM_NIOS2_PGTABLE_H */
diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 6aa9257c3ede..471485a84b2c 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -138,10 +138,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 	__flush_icache(start, end);
 }

-void flush_icache_page(struct vm_area_struct *vma, struct page *page)
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr)
 {
 	unsigned long start = (unsigned long) page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + nr * PAGE_SIZE;

 	__flush_dcache(start, end);
 	__flush_icache(start, end);
@@ -158,19 +159,19 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 	__flush_icache(start, end);
 }

-void __flush_dcache_page(struct address_space *mapping, struct page *page)
+void __flush_dcache_folio(struct address_space *mapping, struct folio *folio)
 {
 	/*
 	 * Writeback any data associated with the kernel mapping of this
 	 * page.  This ensures that data in the physical page is mutually
 	 * coherent with the kernels mapping.
 	 */
-	unsigned long start = (unsigned long)page_address(page);
+	unsigned long start = (unsigned long)folio_address(folio);

-	__flush_dcache(start, start + PAGE_SIZE);
+	__flush_dcache(start, start + folio_size(folio));
 }

-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;

@@ -178,32 +179,38 @@ void flush_dcache_page(struct page *page)
 	 * The zero page is never written to, so never has any dirty
 	 * cache lines, and therefore never needs to be flushed.
 	 */
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;

-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);

 	/* Flush this page if there are aliases. */
 	if (mapping && !mapping_mapped(mapping)) {
-		clear_bit(PG_dcache_clean, &page->flags);
+		clear_bit(PG_dcache_clean, &folio->flags);
 	} else {
-		__flush_dcache_page(mapping, page);
+		__flush_dcache_folio(mapping, folio);
 		if (mapping) {
-			unsigned long start = (unsigned long)page_address(page);
-			flush_aliases(mapping, page);
-			flush_icache_range(start, start + PAGE_SIZE);
+			unsigned long start = (unsigned long)folio_address(folio);
+			flush_aliases(mapping, folio);
+			flush_icache_range(start, start + folio_size(folio));
 		}
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
+EXPORT_SYMBOL(flush_dcache_page);

-void update_mmu_cache(struct vm_area_struct *vma,
-		      unsigned long address, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	pte_t pte = *ptep;
 	unsigned long pfn = pte_pfn(pte);
-	struct page *page;
+	struct folio *folio;
 	struct address_space *mapping;

 	reload_tlb_page(vma, address, pte);
@@ -215,19 +222,19 @@ void update_mmu_cache(struct vm_area_struct *vma,
 	 * The zero page is never written to, so never has any dirty
 	 * cache lines, and therefore never needs to be flushed.
 	 */
-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;

-	mapping = page_mapping_file(page);
-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		__flush_dcache_page(mapping, page);
+	folio = page_folio(pfn_to_page(pfn));
+	mapping = folio_flush_mapping(folio);
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
+		__flush_dcache_folio(mapping, folio);

-	if(mapping)
-	{
-		flush_aliases(mapping, page);
+	if (mapping) {
+		flush_aliases(mapping, folio);
 		if (vma->vm_flags & VM_EXEC)
-			flush_icache_page(vma, page);
+			flush_icache_pages(vma, &folio->page,
+					folio_nr_pages(folio));
 	}
 }

From patchwork Wed Mar 15 05:14:26 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175312
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Jonas Bonn, Stefan Kristiansson, Stafford Horne,
	linux-openrisc@vger.kernel.org
Subject: [PATCH v4 18/36] openrisc: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:26 +0000
Message-Id: <20230315051444.3229621-19-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to
per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Jonas Bonn
Cc: Stefan Kristiansson
Cc: Stafford Horne
Cc: linux-openrisc@vger.kernel.org
Acked-by: Mike Rapoport (IBM)
---
 arch/openrisc/include/asm/cacheflush.h |  8 +++++++-
 arch/openrisc/include/asm/pgtable.h    | 14 +++++++++-----
 arch/openrisc/mm/cache.c               | 12 ++++++++----
 3 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/arch/openrisc/include/asm/cacheflush.h b/arch/openrisc/include/asm/cacheflush.h
index eeac40d4a854..984c331ff5f4 100644
--- a/arch/openrisc/include/asm/cacheflush.h
+++ b/arch/openrisc/include/asm/cacheflush.h
@@ -56,10 +56,16 @@ static inline void sync_icache_dcache(struct page *page)
  */
 #define PG_dc_clean PG_arch_1

+static inline void flush_dcache_folio(struct folio *folio)
+{
+	clear_bit(PG_dc_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	clear_bit(PG_dc_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }

 #define flush_icache_user_page(vma, page, addr, len)	\
diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
index 3eb9b9555d0d..2f42a12c40ab 100644
--- a/arch/openrisc/include/asm/pgtable.h
+++ b/arch/openrisc/include/asm/pgtable.h
@@ -46,7 +46,7 @@ extern void paging_init(void);
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+
 /*
  * (pmds are folded into pgds so this doesn't get actually called,
  * but the define is needed for a generic inline function.)
@@ -357,6 +357,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 #define __pmd_offset(address) \
 	(((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))

+#define PFN_PTE_SHIFT		PAGE_SHIFT
 #define pte_pfn(x)		((unsigned long)(((x).pte)) >> PAGE_SHIFT)
 #define pfn_pte(pfn, prot)  __pte((((pfn) << PAGE_SHIFT)) | pgprot_val(prot))

@@ -379,13 +380,16 @@ static inline void update_tlb(struct vm_area_struct *vma,
 extern void update_cache(struct vm_area_struct *vma,
 	unsigned long address, pte_t *pte);

-static inline void update_mmu_cache(struct vm_area_struct *vma,
-	unsigned long address, pte_t *pte)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
-	update_tlb(vma, address, pte);
-	update_cache(vma, address, pte);
+	update_tlb(vma, address, ptep);
+	update_cache(vma, address, ptep);
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 /* __PHX__ FIXME, SWAP, this probably doesn't work */

diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c
index 534a52ec5e66..eb43b73f3855 100644
--- a/arch/openrisc/mm/cache.c
+++ b/arch/openrisc/mm/cache.c
@@ -43,15 +43,19 @@ void update_cache(struct vm_area_struct *vma, unsigned long address,
 	pte_t *pte)
 {
 	unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT;
-	struct page *page = pfn_to_page(pfn);
-	int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);
+	struct folio *folio = page_folio(pfn_to_page(pfn));
+	int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);

 	/*
 	 * Since icaches do not snoop for updated data on OpenRISC, we
 	 * must write back and invalidate any dirty pages manually. We
 	 * can skip data pages, since they will not end up in icaches.
 	 */
-	if ((vma->vm_flags & VM_EXEC) && dirty)
-		sync_icache_dcache(page);
+	if ((vma->vm_flags & VM_EXEC) && dirty) {
+		unsigned int nr = folio_nr_pages(folio);
+
+		while (nr--)
+			sync_icache_dcache(folio_page(folio, nr));
+	}
 }

From patchwork Wed Mar 15 05:14:27 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175317
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"James E.J. Bottomley", Helge Deller, linux-parisc@vger.kernel.org
Subject: [PATCH v4 19/36] parisc: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:27 +0000
Message-Id: <20230315051444.3229621-20-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().  Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Cc: linux-parisc@vger.kernel.org
Acked-by: Mike Rapoport (IBM)
---
 arch/parisc/include/asm/cacheflush.h |  14 ++--
 arch/parisc/include/asm/pgtable.h    |  37 ++++++----
 arch/parisc/kernel/cache.c           | 101 +++++++++++++++++++--------
 3 files changed, 103 insertions(+), 49 deletions(-)

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 0bdee6724132..2cdc0ea562d6 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -43,16 +43,20 @@ void invalidate_kernel_vmap_range(void *vaddr, int size);
 #define flush_cache_vmap(start, end)		flush_cache_all()
 #define flush_cache_vunmap(start, end)		flush_cache_all()

+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *page);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}

 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)

-#define flush_icache_page(vma,page)	do {		\
-	flush_kernel_dcache_page_addr(page_address(page)); \
-	flush_kernel_icache_page(page_address(page)); 	\
-} while (0)
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr);
+#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)

 #define flush_icache_range(s,e)		do { 		\
 	flush_kernel_dcache_range_asm(s,e); 		\
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index e2950f5db7c9..ca6afe1980a5 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -73,15 +73,6 @@ extern void __update_cache(pte_t pte);
 		mb();				\
 	} while(0)

-#define set_pte_at(mm, addr, pteptr, pteval)	\
-	do {					\
-		if (pte_present(pteval) &&	\
-		    pte_user(pteval))		\
-			__update_cache(pteval);	\
-		*(pteptr) = (pteval);		\
-		purge_tlb_entries(mm, addr);	\
-	} while (0)
-
 #endif /* !__ASSEMBLY__ */

 #define pte_ERROR(e) \
@@ -285,7 +276,7 @@ extern unsigned long *empty_zero_page;
 #define pte_none(x)     (pte_val(x) == 0)
 #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
 #define pte_user(x)	(pte_val(x) & _PAGE_USER)
-#define pte_clear(mm, addr, xp)  set_pte_at(mm, addr, xp, __pte(0))
+#define pte_clear(mm, addr, xp)  set_pte(xp, __pte(0))

 #define pmd_flag(x)	(pmd_val(x) & PxD_FLAG_MASK)
 #define pmd_address(x)	((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT)
@@ -391,11 +382,29 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)

 extern void paging_init (void);

+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	if (pte_present(pte) && pte_user(pte))
+		__update_cache(pte);
+	for (;;) {
+		*ptep = pte;
+		purge_tlb_entries(mm, addr);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << PFN_PTE_SHIFT;
+		addr += PAGE_SIZE;
+	}
+}
+#define set_ptes set_ptes
+
 /* Used for deferring calls to flush_dcache_page() */

 #define PG_dcache_dirty         PG_arch_1

-#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep)
+#define update_mmu_cache_range(vma, addr, ptep, nr) __update_cache(*ptep)
+#define update_mmu_cache(vma, addr, ptep) __update_cache(*ptep)

 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
@@ -450,7 +459,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned
 	if (!pte_young(pte)) {
 		return 0;
 	}
-	set_pte_at(vma->vm_mm, addr, ptep, pte_mkold(pte));
+	set_pte(ptep, pte_mkold(pte));
 	return 1;
 }

@@ -460,14 +469,14 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 	pte_t old_pte;

 	old_pte = *ptep;
-	set_pte_at(mm, addr, ptep, __pte(0));
+	set_pte(ptep, __pte(0));

 	return old_pte;
 }

 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
-	set_pte_at(mm, addr, ptep, pte_wrprotect(*ptep));
+	set_pte(ptep, pte_wrprotect(*ptep));
 }

 #define pte_same(A,B)	(pte_val(A) == pte_val(B))
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 1d3b8bc8a623..ceaa268fc1a6 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -92,11 +92,11 @@ static inline void flush_data_cache(void)
 /* Kernel virtual address of pfn.  */
 #define pfn_va(pfn)	__va(PFN_PHYS(pfn))

-void
-__update_cache(pte_t pte)
+void __update_cache(pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
-	struct page *page;
+	struct folio *folio;
+	unsigned int nr;

 	/* We don't have pte special.  As a result, we can be called with
 	   an invalid pfn and we don't need to flush the kernel dcache page.
@@ -104,13 +104,17 @@ __update_cache(pte_t pte)
 	if (!pfn_valid(pfn))
 		return;

-	page = pfn_to_page(pfn);
-	if (page_mapping_file(page) &&
-	    test_bit(PG_dcache_dirty, &page->flags)) {
-		flush_kernel_dcache_page_addr(pfn_va(pfn));
-		clear_bit(PG_dcache_dirty, &page->flags);
+	folio = page_folio(pfn_to_page(pfn));
+	pfn = folio_pfn(folio);
+	nr = folio_nr_pages(folio);
+	if (folio_flush_mapping(folio) &&
+	    test_bit(PG_dcache_dirty, &folio->flags)) {
+		while (nr--)
+			flush_kernel_dcache_page_addr(pfn_va(pfn + nr));
+		clear_bit(PG_dcache_dirty, &folio->flags);
 	} else if (parisc_requires_coherency())
-		flush_kernel_dcache_page_addr(pfn_va(pfn));
+		while (nr--)
+			flush_kernel_dcache_page_addr(pfn_va(pfn + nr));
 }

 void
@@ -364,6 +368,20 @@ static void flush_user_cache_page(struct vm_area_struct *vma, unsigned long vmad
 	preempt_enable();
 }

+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr)
+{
+	void *kaddr = page_address(page);
+
+	for (;;) {
+		flush_kernel_dcache_page_addr(kaddr);
+		flush_kernel_icache_page(kaddr);
+		if (--nr == 0)
+			break;
+		kaddr += PAGE_SIZE;
+	}
+}
+
 static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr)
 {
 	pte_t *ptep = NULL;
@@ -392,26 +410,30 @@ static inline bool pte_needs_flush(pte_t pte)
 		== (_PAGE_PRESENT | _PAGE_ACCESSED);
 }

-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
-	struct vm_area_struct *mpnt;
-	unsigned long offset;
+	struct address_space *mapping = folio_flush_mapping(folio);
+	struct vm_area_struct *vma;
 	unsigned long addr, old_addr = 0;
+	void *kaddr;
 	unsigned long count = 0;
+	unsigned long i, nr;
 	pgoff_t pgoff;

 	if (mapping && !mapping_mapped(mapping)) {
-		set_bit(PG_dcache_dirty, &page->flags);
+		set_bit(PG_dcache_dirty, &folio->flags);
 		return;
 	}

-	flush_kernel_dcache_page_addr(page_address(page));
+	nr = folio_nr_pages(folio);
+	kaddr = folio_address(folio);
+	for (i = 0; i < nr; i++)
+		flush_kernel_dcache_page_addr(kaddr + i * PAGE_SIZE);

 	if (!mapping)
 		return;

-	pgoff = page->index;
+	pgoff = folio->index;

 	/*
 	 * We have carefully arranged in arch_get_unmapped_area() that
@@ -421,15 +443,29 @@ void flush_dcache_page(struct page *page)
 	 * on machines that support equivalent aliasing
 	 */
 	flush_dcache_mmap_lock(mapping);
-	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
-		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-		addr = mpnt->vm_start + offset;
-		if (parisc_requires_coherency()) {
-			pte_t *ptep;
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) {
+		unsigned long offset = pgoff - vma->vm_pgoff;
+		unsigned long pfn = folio_pfn(folio);
+
+		addr = vma->vm_start;
+		nr = folio_nr_pages(folio);
+		if (offset > -nr) {
+			pfn -= offset;
+			nr += offset;
+		} else {
+			addr += offset * PAGE_SIZE;
+		}
+		if (addr + nr * PAGE_SIZE > vma->vm_end)
+			nr = (vma->vm_end - addr) / PAGE_SIZE;

-			ptep = get_ptep(mpnt->vm_mm, addr);
-			if (ptep && pte_needs_flush(*ptep))
-				flush_user_cache_page(mpnt, addr);
+		if (parisc_requires_coherency()) {
+			for (i = 0; i < nr; i++) {
+				pte_t *ptep = get_ptep(vma->vm_mm,
+							addr + i * PAGE_SIZE);
+				if (ptep && pte_needs_flush(*ptep))
+					flush_user_cache_page(vma,
+							addr + i * PAGE_SIZE);
+			}
 		} else {
 			/*
 			 * The TLB is the engine of coherence on parisc:
@@ -442,27 +478,32 @@ void flush_dcache_page(struct page *page)
 			 * in (until the user or kernel specifically
 			 * accesses it, of course)
 			 */
-			flush_tlb_page(mpnt, addr);
+			for (i = 0; i < nr; i++)
+				flush_tlb_page(vma, addr + i * PAGE_SIZE);
 			if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1))
 					!= (addr & (SHM_COLOUR - 1))) {
-				__flush_cache_page(mpnt, addr, page_to_phys(page));
+				for (i = 0; i < nr; i++)
+					__flush_cache_page(vma,
+						addr + i * PAGE_SIZE,
+						(pfn + i) * PAGE_SIZE);
 				/*
 				 * Software is allowed to have any number
 				 * of private mappings to a page.
 				 */
-				if (!(mpnt->vm_flags & VM_SHARED))
+				if (!(vma->vm_flags & VM_SHARED))
 					continue;
 				if (old_addr)
 					pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n",
-						old_addr, addr, mpnt->vm_file);
-				old_addr = addr;
+						old_addr, addr, vma->vm_file);
+				if (nr == folio_nr_pages(folio))
+					old_addr = addr;
 			}
 		}
 		WARN_ON(++count == 4096);
 	}
 	flush_dcache_mmap_unlock(mapping);
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);

 /* Defined in arch/parisc/kernel/pacache.S */
 EXPORT_SYMBOL(flush_kernel_dcache_range_asm);

From patchwork Wed Mar 15 05:14:28 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175313
From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org Subject: [PATCH v4 20/36] powerpc: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:28 +0000 Message-Id: <20230315051444.3229621-21-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org> MIME-Version: 1.0
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Cc: Michael Ellerman Cc: Nicholas Piggin Cc: Christophe Leroy Cc: linuxppc-dev@lists.ozlabs.org Acked-by: Mike Rapoport (IBM) --- arch/powerpc/include/asm/book3s/pgtable.h | 10 +---- arch/powerpc/include/asm/cacheflush.h | 14 +++++-- arch/powerpc/include/asm/kvm_ppc.h | 10 ++--- arch/powerpc/include/asm/nohash/pgtable.h | 13 ++---- arch/powerpc/include/asm/pgtable.h | 6 +++ arch/powerpc/mm/book3s64/hash_utils.c | 11 ++--- arch/powerpc/mm/cacheflush.c | 40 ++++++------------ arch/powerpc/mm/nohash/e500_hugetlbpage.c | 3 +- arch/powerpc/mm/pgtable.c | 51 +++++++++++++---------- 9 files changed, 77 insertions(+), 81 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h index d18b748ea3ae..c2ef811505b0 100644 --- a/arch/powerpc/include/asm/book3s/pgtable.h +++ b/arch/powerpc/include/asm/book3s/pgtable.h @@ -9,13 +9,6 @@ #endif #ifndef __ASSEMBLY__ -/* Insert a PTE, top-level function is out of line.
It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - - #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, pte_t *ptep, pte_t entry, int dirty); @@ -36,7 +29,8 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * corresponding HPTE into the hash table ahead of time, instead of * waiting for the inevitable extra hash-table miss exception. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { if (IS_ENABLED(CONFIG_PPC32) && !mmu_has_feature(MMU_FTR_HPTE_TABLE)) return; diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h index 7564dd4fd12b..ef7d2de33b89 100644 --- a/arch/powerpc/include/asm/cacheflush.h +++ b/arch/powerpc/include/asm/cacheflush.h @@ -35,13 +35,19 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end) * It just marks the page as not i-cache clean. We do the i-cache * flush later when the page is given to a user process, if necessary. 
*/ -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) return; /* avoid an atomic op if possible */ - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } void flush_icache_range(unsigned long start, unsigned long stop); @@ -51,7 +57,7 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page, unsigned long addr, int len); #define flush_icache_user_page flush_icache_user_page -void flush_dcache_icache_page(struct page *page); +void flush_dcache_icache_folio(struct folio *folio); /** * flush_dcache_range(): Write any modified data cache blocks out to memory and diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h index 6bef23d6d0e3..e91dd8e88bb7 100644 --- a/arch/powerpc/include/asm/kvm_ppc.h +++ b/arch/powerpc/include/asm/kvm_ppc.h @@ -868,7 +868,7 @@ void kvmppc_init_lpid(unsigned long nr_lpids); static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) { - struct page *page; + struct folio *folio; /* * We can only access pages that the kernel maps * as memory. Bail out for unmapped ones. 
@@ -877,10 +877,10 @@ static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn) return; /* Clear i-cache for new pages */ - page = pfn_to_page(pfn); - if (!test_bit(PG_dcache_clean, &page->flags)) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + folio = page_folio(pfn_to_page(pfn)); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h index a6caaaab6f92..69a7dd47a9f0 100644 --- a/arch/powerpc/include/asm/nohash/pgtable.h +++ b/arch/powerpc/include/asm/nohash/pgtable.h @@ -166,12 +166,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE); } -/* Insert a PTE, top-level function is out of line. It uses an inline - * low level function in the respective pgtable-* files - */ -extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte); - /* This low level function performs the actual PTE insertion * Setting the PTE depends on the MMU type and other factors. It's * an horrible mess that I'm not going to try to clean up now but @@ -282,10 +276,11 @@ static inline int pud_huge(pud_t pud) * for the page which has just been mapped in. 
*/ #if defined(CONFIG_PPC_E500) && defined(CONFIG_HUGETLB_PAGE) -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr); #else -static inline -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {} +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) {} #endif #endif /* __ASSEMBLY__ */ diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h index 9972626ddaf6..656ecf2b10cd 100644 --- a/arch/powerpc/include/asm/pgtable.h +++ b/arch/powerpc/include/asm/pgtable.h @@ -41,6 +41,12 @@ struct mm_struct; #ifndef __ASSEMBLY__ +void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned int nr); +#define set_ptes set_ptes +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + #ifndef MAX_PTRS_PER_PGD #define MAX_PTRS_PER_PGD PTRS_PER_PGD #endif diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c index fedffe3ae136..ad2afa08e62e 100644 --- a/arch/powerpc/mm/book3s64/hash_utils.c +++ b/arch/powerpc/mm/book3s64/hash_utils.c @@ -1307,18 +1307,19 @@ void hash__early_init_mmu_secondary(void) */ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap) { - struct page *page; + struct folio *folio; if (!pfn_valid(pte_pfn(pte))) return pp; - page = pte_page(pte); + folio = page_folio(pte_page(pte)); /* page is dirty */ - if (!test_bit(PG_dcache_clean, &page->flags) && !PageReserved(page)) { + if (!test_bit(PG_dcache_clean, &folio->flags) && + !folio_test_reserved(folio)) { if (trap == INTERRUPT_INST_STORAGE) { - flush_dcache_icache_page(page); - set_bit(PG_dcache_clean, &page->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } else pp |= HPTE_R_N; } diff
--git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c index 0e9b4879c0f9..8760d2223abe 100644 --- a/arch/powerpc/mm/cacheflush.c +++ b/arch/powerpc/mm/cacheflush.c @@ -148,44 +148,30 @@ static void __flush_dcache_icache(void *p) invalidate_icache_range(addr, addr + PAGE_SIZE); } -static void flush_dcache_icache_hugepage(struct page *page) +void flush_dcache_icache_folio(struct folio *folio) { - int i; - int nr = compound_nr(page); + unsigned int i, nr = folio_nr_pages(folio); - if (!PageHighMem(page)) { + if (flush_coherent_icache()) + return; + + if (!folio_test_highmem(folio)) { + void *addr = folio_address(folio); for (i = 0; i < nr; i++) - __flush_dcache_icache(lowmem_page_address(page + i)); - } else { + __flush_dcache_icache(addr + i * PAGE_SIZE); + } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { for (i = 0; i < nr; i++) { - void *start = kmap_local_page(page + i); + void *start = kmap_local_folio(folio, i * PAGE_SIZE); __flush_dcache_icache(start); kunmap_local(start); } - } -} - -void flush_dcache_icache_page(struct page *page) -{ - if (flush_coherent_icache()) - return; - - if (PageCompound(page)) - return flush_dcache_icache_hugepage(page); - - if (!PageHighMem(page)) { - __flush_dcache_icache(lowmem_page_address(page)); - } else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) { - void *start = kmap_local_page(page); - - __flush_dcache_icache(start); - kunmap_local(start); } else { - flush_dcache_icache_phys(page_to_phys(page)); + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + flush_dcache_icache_phys((pfn + i) * PAGE_SIZE); } } -EXPORT_SYMBOL(flush_dcache_icache_page); void clear_user_page(void *page, unsigned long vaddr, struct page *pg) { diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c index 58c8d9849cb1..f3cb91107a47 100644 --- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c +++ 
b/arch/powerpc/mm/nohash/e500_hugetlbpage.c @@ -178,7 +178,8 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte) * * This must always be called with the pte lock held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { if (is_vm_hugetlb_page(vma)) book3e_hugetlb_preload(vma, address, *ptep); diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index cb2dcdb18f8e..b3c7b874a7a2 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -58,7 +58,7 @@ static inline int pte_looks_normal(pte_t pte) return 0; } -static struct page *maybe_pte_to_page(pte_t pte) +static struct folio *maybe_pte_to_folio(pte_t pte) { unsigned long pfn = pte_pfn(pte); struct page *page; @@ -68,7 +68,7 @@ static struct page *maybe_pte_to_page(pte_t pte) page = pfn_to_page(pfn); if (PageReserved(page)) return NULL; - return page; + return page_folio(page); } #ifdef CONFIG_PPC_BOOK3S @@ -84,12 +84,12 @@ static pte_t set_pte_filter_hash(pte_t pte) pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS); if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) || cpu_has_feature(CPU_FTR_NOEXECUTE))) { - struct page *pg = maybe_pte_to_page(pte); - if (!pg) + struct folio *folio = maybe_pte_to_folio(pte); + if (!folio) return pte; - if (!test_bit(PG_dcache_clean, &pg->flags)) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); } } return pte; @@ -107,7 +107,7 @@ static pte_t set_pte_filter_hash(pte_t pte) { return pte; } */ static inline pte_t set_pte_filter(pte_t pte) { - struct page *pg; + struct folio *folio; if (radix_enabled()) return pte; @@ -120,18 +120,18 @@ static inline pte_t set_pte_filter(pte_t pte) return pte; /* If you set
_PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = maybe_pte_to_folio(pte); + if (unlikely(!folio)) return pte; /* If the page clean, we move on */ - if (test_bit(PG_dcache_clean, &pg->flags)) + if (test_bit(PG_dcache_clean, &folio->flags)) return pte; /* If it's an exec fault, we flush the cache and make it clean */ if (is_exec_fault()) { - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); return pte; } @@ -142,7 +142,7 @@ static inline pte_t set_pte_filter(pte_t pte) static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, int dirty) { - struct page *pg; + struct folio *folio; if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) return pte; @@ -168,17 +168,17 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, #endif /* CONFIG_DEBUG_VM */ /* If you set _PAGE_EXEC on weird pages you're on your own */ - pg = maybe_pte_to_page(pte); - if (unlikely(!pg)) + folio = maybe_pte_to_folio(pte); + if (unlikely(!folio)) goto bail; /* If the page is already clean, we move on */ - if (test_bit(PG_dcache_clean, &pg->flags)) + if (test_bit(PG_dcache_clean, &folio->flags)) goto bail; /* Clean the page and set PG_dcache_clean */ - flush_dcache_icache_page(pg); - set_bit(PG_dcache_clean, &pg->flags); + flush_dcache_icache_folio(folio); + set_bit(PG_dcache_clean, &folio->flags); bail: return pte_mkexec(pte); @@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma, /* * set_pte stores a linux PTE into the linux page table. */ -void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, - pte_t pte) +void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep, + pte_t pte, unsigned int nr) { /* * Make sure hardware valid bit is not set. 
We don't do @@ -203,7 +203,14 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte = set_pte_filter(pte); /* Perform the setting of the PTE */ - __set_pte_at(mm, addr, ptep, pte, 0); + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + PAGE_SIZE); + addr += PAGE_SIZE; + } } void unmap_kernel_page(unsigned long va) From patchwork Wed Mar 15 05:14:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13175326
From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Alexandre Ghiti , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [PATCH v4 21/36] riscv: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:29 +0000 Message-Id: <20230315051444.3229621-22-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org> MIME-Version: 1.0
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Alexandre Ghiti Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: linux-riscv@lists.infradead.org Acked-by: Mike Rapoport (IBM) --- arch/riscv/include/asm/cacheflush.h | 19 +++++++++---------- arch/riscv/include/asm/pgtable.h | 26 +++++++++++++++++++------- arch/riscv/mm/cacheflush.c | 11 ++--------- 3 files changed, 30 insertions(+), 26 deletions(-) diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index 03e3b95ae6da..10e5e96f09b5 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -15,20 +15,19 @@ static inline void local_flush_icache_all(void) #define PG_dcache_clean PG_arch_1 -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in flush_icache_pte()).
- */ - if (PageHuge(page)) - page = compound_head(page); - - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); } +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + /* * RISC-V doesn't have an instruction to flush parts of the instruction cache, * so instead we just flush the whole thing. diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index b516f3b59616..b077bc8c498c 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -405,8 +405,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) /* Commit new configuration to MMU hardware */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { /* * The kernel assumes that TLBs don't cache invalid entries, but @@ -415,8 +415,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, * Relying on flush_tlb_fix_spurious_fault would suffice, but * the extra traps reduce performance. So, eagerly SFENCE.VMA. 
*/ - local_flush_tlb_page(address); + while (nr--) + local_flush_tlb_page(address + nr * PAGE_SIZE); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define __HAVE_ARCH_UPDATE_MMU_TLB #define update_mmu_tlb update_mmu_cache @@ -456,12 +459,21 @@ static inline void __set_pte_at(struct mm_struct *mm, set_pte(ptep, pteval); } -static inline void set_pte_at(struct mm_struct *mm, - unsigned long addr, pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { - page_table_check_ptes_set(mm, addr, ptep, pteval, 1); - __set_pte_at(mm, addr, ptep, pteval); + page_table_check_ptes_set(mm, addr, ptep, pteval, nr); + + for (;;) { + __set_pte_at(mm, addr, ptep, pteval); + if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pteval) += 1 << _PAGE_PFN_SHIFT; + } } +#define set_ptes set_ptes static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c index fcd6145fbead..e36a851e5788 100644 --- a/arch/riscv/mm/cacheflush.c +++ b/arch/riscv/mm/cacheflush.c @@ -81,16 +81,9 @@ void flush_icache_mm(struct mm_struct *mm, bool local) #ifdef CONFIG_MMU void flush_icache_pte(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. 
- */ - if (PageHuge(page)) - page = compound_head(page); - - if (!test_bit(PG_dcache_clean, &page->flags)) { + if (!test_bit(PG_dcache_clean, &folio->flags)) { flush_icache_all(); set_bit(PG_dcache_clean, &folio->flags); } } From patchwork Wed Mar 15 05:14:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13175314
From: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , linux-s390@vger.kernel.org Subject: [PATCH v4 22/36] s390: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:30 +0000 Message-Id: <20230315051444.3229621-23-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org> MIME-Version: 1.0
tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add set_ptes() and update_mmu_cache_range(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Gerald Schaefer Cc: Heiko Carstens Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Cc: linux-s390@vger.kernel.org Acked-by: Mike Rapoport (IBM) --- arch/s390/include/asm/pgtable.h | 33 ++++++++++++++++++++++++--------- 1 file changed, 24 insertions(+), 9 deletions(-) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index c1f6b46ec555..fea678c67e51 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -50,6 +50,7 @@ void arch_report_meminfo(struct seq_file *m); * tables contain all the necessary information. */ #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vma, addr, ptep, nr) do { } while (0) #define update_mmu_cache_pmd(vma, address, ptep) do { } while (0) /* @@ -1319,20 +1320,34 @@ pgprot_t pgprot_writecombine(pgprot_t prot); pgprot_t pgprot_writethrough(pgprot_t prot); /* - * Certain architectures need to do special things when PTEs - * within a page table are directly modified. Thus, the following - * hook is made available. + * Set multiple PTEs to consecutive pages with a single call. All PTEs + * are within the same folio, PMD and VMA. 
  */
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t entry)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t entry, unsigned int nr)
 {
 	if (pte_present(entry))
 		entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
-	if (mm_has_pgste(mm))
-		ptep_set_pte_at(mm, addr, ptep, entry);
-	else
-		set_pte(ptep, entry);
+	if (mm_has_pgste(mm)) {
+		for (;;) {
+			ptep_set_pte_at(mm, addr, ptep, entry);
+			if (--nr == 0)
+				break;
+			ptep++;
+			entry = __pte(pte_val(entry) + PAGE_SIZE);
+			addr += PAGE_SIZE;
+		}
+	} else {
+		for (;;) {
+			set_pte(ptep, entry);
+			if (--nr == 0)
+				break;
+			ptep++;
+			entry = __pte(pte_val(entry) + PAGE_SIZE);
+		}
+	}
 }
+#define set_ptes set_ptes
 
 /*
  * Conversion functions: convert a page and protection to a page entry,

From patchwork Wed Mar 15 05:14:31 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175322
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yoshinori Sato,
    Rich Felker, John Paul Adrian Glaubitz, linux-sh@vger.kernel.org
Subject: [PATCH v4 23/36] superh: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:31 +0000
Message-Id: <20230315051444.3229621-24-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().  Change the PG_dcache_clean flag from being
per-page to per-folio.  Flush the entire folio containing the pages in
flush_icache_pages() for ease of implementation.
Signed-off-by: Matthew Wilcox (Oracle) Cc: Yoshinori Sato Cc: Rich Felker Cc: John Paul Adrian Glaubitz Cc: linux-sh@vger.kernel.org Acked-by: Mike Rapoport (IBM) --- arch/sh/include/asm/cacheflush.h | 21 ++++++++----- arch/sh/include/asm/pgtable.h | 6 ++-- arch/sh/include/asm/pgtable_32.h | 5 ++- arch/sh/mm/cache-j2.c | 4 +-- arch/sh/mm/cache-sh4.c | 26 +++++++++++----- arch/sh/mm/cache-sh7705.c | 26 ++++++++++------ arch/sh/mm/cache.c | 52 ++++++++++++++++++-------------- arch/sh/mm/kmap.c | 3 +- 8 files changed, 88 insertions(+), 55 deletions(-) diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h index 481a664287e2..9fceef6f3e00 100644 --- a/arch/sh/include/asm/cacheflush.h +++ b/arch/sh/include/asm/cacheflush.h @@ -13,9 +13,9 @@ * - flush_cache_page(mm, vmaddr, pfn) flushes a single page * - flush_cache_range(vma, start, end) flushes a range of pages * - * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache + * - flush_dcache_folio(folio) flushes(wback&invalidates) a folio for dcache * - flush_icache_range(start, end) flushes(invalidates) a range for icache - * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache + * - flush_icache_pages(vma, pg, nr) flushes(invalidates) pages for icache * - flush_cache_sigtramp(vaddr) flushes the signal trampoline */ extern void (*local_flush_cache_all)(void *args); @@ -23,9 +23,9 @@ extern void (*local_flush_cache_mm)(void *args); extern void (*local_flush_cache_dup_mm)(void *args); extern void (*local_flush_cache_page)(void *args); extern void (*local_flush_cache_range)(void *args); -extern void (*local_flush_dcache_page)(void *args); +extern void (*local_flush_dcache_folio)(void *args); extern void (*local_flush_icache_range)(void *args); -extern void (*local_flush_icache_page)(void *args); +extern void (*local_flush_icache_folio)(void *args); extern void (*local_flush_cache_sigtramp)(void *args); static inline void cache_noop(void *args) { } @@ -42,11 +42,18 @@ 
extern void flush_cache_page(struct vm_area_struct *vma, extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_user_range flush_icache_range -extern void flush_icache_page(struct vm_area_struct *vma, - struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_cache_sigtramp(unsigned long address); struct flusher_data { diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h index 3ce30becf6df..1a8fdc3bc363 100644 --- a/arch/sh/include/asm/pgtable.h +++ b/arch/sh/include/asm/pgtable.h @@ -102,13 +102,15 @@ extern void __update_cache(struct vm_area_struct *vma, extern void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte); -static inline void -update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; __update_cache(vma, address, pte); __update_tlb(vma, address, pte); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern void paging_init(void); diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h index 21952b094650..676f3d4ef6ce 100644 --- a/arch/sh/include/asm/pgtable_32.h +++ b/arch/sh/include/asm/pgtable_32.h @@ -307,14 +307,13 @@ static inline void set_pte(pte_t *ptep, pte_t pte) 
#define set_pte(pteptr, pteval) (*(pteptr) = pteval) #endif -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) - /* * (pmds are folded into pgds so this doesn't get actually called, * but the define is needed for a generic inline function.) */ #define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval) +#define PFN_PTE_SHIFT PAGE_SHIFT #define pfn_pte(pfn, prot) \ __pte(((unsigned long long)(pfn) << PAGE_SHIFT) | pgprot_val(prot)) #define pfn_pmd(pfn, prot) \ @@ -323,7 +322,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define pte_none(x) (!pte_val(x)) #define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE)) -#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0) +#define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0)) #define pmd_none(x) (!pmd_val(x)) #define pmd_present(x) (pmd_val(x)) diff --git a/arch/sh/mm/cache-j2.c b/arch/sh/mm/cache-j2.c index f277862a11f5..9ac960214380 100644 --- a/arch/sh/mm/cache-j2.c +++ b/arch/sh/mm/cache-j2.c @@ -55,9 +55,9 @@ void __init j2_cache_init(void) local_flush_cache_dup_mm = j2_flush_both; local_flush_cache_page = j2_flush_both; local_flush_cache_range = j2_flush_both; - local_flush_dcache_page = j2_flush_dcache; + local_flush_dcache_folio = j2_flush_dcache; local_flush_icache_range = j2_flush_icache; - local_flush_icache_page = j2_flush_icache; + local_flush_icache_folio = j2_flush_icache; local_flush_cache_sigtramp = j2_flush_icache; pr_info("Initial J2 CCR is %.8x\n", __raw_readl(j2_ccr_base)); diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c index 72c2e1b46c08..862046f26981 100644 --- a/arch/sh/mm/cache-sh4.c +++ b/arch/sh/mm/cache-sh4.c @@ -107,19 +107,29 @@ static inline void flush_cache_one(unsigned long start, unsigned long phys) * Write back & invalidate the D-cache of the page. 
* (To avoid "alias" issues) */ -static void sh4_flush_dcache_page(void *arg) +static void sh4_flush_dcache_folio(void *arg) { - struct page *page = arg; - unsigned long addr = (unsigned long)page_address(page); + struct folio *folio = arg; #ifndef CONFIG_SMP - struct address_space *mapping = page_mapping_file(page); + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); else #endif - flush_cache_one(CACHE_OC_ADDRESS_ARRAY | - (addr & shm_align_mask), page_to_phys(page)); + { + unsigned long pfn = folio_pfn(folio); + unsigned long addr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + flush_cache_one(CACHE_OC_ADDRESS_ARRAY | + (addr & shm_align_mask), + pfn * PAGE_SIZE); + addr += PAGE_SIZE; + pfn++; + } + } wmb(); } @@ -379,7 +389,7 @@ void __init sh4_cache_init(void) __raw_readl(CCN_PRR)); local_flush_icache_range = sh4_flush_icache_range; - local_flush_dcache_page = sh4_flush_dcache_page; + local_flush_dcache_folio = sh4_flush_dcache_folio; local_flush_cache_all = sh4_flush_cache_all; local_flush_cache_mm = sh4_flush_cache_mm; local_flush_cache_dup_mm = sh4_flush_cache_mm; diff --git a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c index 9b63a53a5e46..b509a407588f 100644 --- a/arch/sh/mm/cache-sh7705.c +++ b/arch/sh/mm/cache-sh7705.c @@ -132,15 +132,20 @@ static void __flush_dcache_page(unsigned long phys) * Write back & invalidate the D-cache of the page. 
* (To avoid "alias" issues) */ -static void sh7705_flush_dcache_page(void *arg) +static void sh7705_flush_dcache_folio(void *arg) { - struct page *page = arg; - struct address_space *mapping = page_mapping_file(page); + struct folio *folio = arg; + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); - else - __flush_dcache_page(__pa(page_address(page))); + clear_bit(PG_dcache_clean, &folio->flags); + else { + unsigned long pfn = folio_pfn(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_dcache_page((pfn + i) * PAGE_SIZE); + } } static void sh7705_flush_cache_all(void *args) @@ -176,19 +181,20 @@ static void sh7705_flush_cache_page(void *args) * Not entirely sure why this is necessary on SH3 with 32K cache but * without it we get occasional "Memory fault" when loading a program. */ -static void sh7705_flush_icache_page(void *page) +static void sh7705_flush_icache_folio(void *arg) { - __flush_purge_region(page_address(page), PAGE_SIZE); + struct folio *folio = arg; + __flush_purge_region(folio_address(folio), folio_size(folio)); } void __init sh7705_cache_init(void) { local_flush_icache_range = sh7705_flush_icache_range; - local_flush_dcache_page = sh7705_flush_dcache_page; + local_flush_dcache_folio = sh7705_flush_dcache_folio; local_flush_cache_all = sh7705_flush_cache_all; local_flush_cache_mm = sh7705_flush_cache_all; local_flush_cache_dup_mm = sh7705_flush_cache_all; local_flush_cache_range = sh7705_flush_cache_all; local_flush_cache_page = sh7705_flush_cache_page; - local_flush_icache_page = sh7705_flush_icache_page; + local_flush_icache_folio = sh7705_flush_icache_folio; } diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c index 3aef78ceb820..9bcaa5619eab 100644 --- a/arch/sh/mm/cache.c +++ b/arch/sh/mm/cache.c @@ -20,9 +20,9 @@ void (*local_flush_cache_mm)(void *args) = cache_noop; void (*local_flush_cache_dup_mm)(void 
*args) = cache_noop; void (*local_flush_cache_page)(void *args) = cache_noop; void (*local_flush_cache_range)(void *args) = cache_noop; -void (*local_flush_dcache_page)(void *args) = cache_noop; +void (*local_flush_dcache_folio)(void *args) = cache_noop; void (*local_flush_icache_range)(void *args) = cache_noop; -void (*local_flush_icache_page)(void *args) = cache_noop; +void (*local_flush_icache_folio)(void *args) = cache_noop; void (*local_flush_cache_sigtramp)(void *args) = cache_noop; void (*__flush_wback_region)(void *start, int size); @@ -61,15 +61,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + struct folio *folio = page_folio(page); + + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(vto); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } if (vma->vm_flags & VM_EXEC) @@ -80,27 +82,30 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + test_bit(PG_dcache_clean, &folio->flags)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(vfrom); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } } void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct 
*vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); - if (boot_cpu_data.dcache.n_aliases && page_mapcount(from) && - test_bit(PG_dcache_clean, &from->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(src) && + test_bit(PG_dcache_clean, &src->flags)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(vfrom); @@ -136,27 +141,28 @@ EXPORT_SYMBOL(clear_user_highpage); void __update_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte) { - struct page *page; unsigned long pfn = pte_pfn(pte); if (!boot_cpu_data.dcache.n_aliases) return; - page = pfn_to_page(pfn); if (pfn_valid(pfn)) { - int dirty = !test_and_set_bit(PG_dcache_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags); if (dirty) - __flush_purge_region(page_address(page), PAGE_SIZE); + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } void __flush_anon_page(struct page *page, unsigned long vmaddr) { + struct folio *folio = page_folio(page); unsigned long addr = (unsigned long) page_address(page); if (pages_do_alias(addr, vmaddr)) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *kaddr; kaddr = kmap_coherent(page, vmaddr); @@ -164,7 +170,8 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr) /* __flush_purge_region((void *)kaddr, PAGE_SIZE); */ kunmap_coherent(kaddr); } else - __flush_purge_region((void *)addr, PAGE_SIZE); + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } @@ -215,11 +222,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, } EXPORT_SYMBOL(flush_cache_range); -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - 
cacheop_on_each_cpu(local_flush_dcache_page, page, 1); + cacheop_on_each_cpu(local_flush_dcache_folio, folio, 1); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void flush_icache_range(unsigned long start, unsigned long end) { @@ -233,10 +240,11 @@ void flush_icache_range(unsigned long start, unsigned long end) } EXPORT_SYMBOL(flush_icache_range); -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { - /* Nothing uses the VMA, so just pass the struct page along */ - cacheop_on_each_cpu(local_flush_icache_page, page, 1); + /* Nothing uses the VMA, so just pass the folio along */ + cacheop_on_each_cpu(local_flush_icache_folio, page_folio(page), 1); } void flush_cache_sigtramp(unsigned long address) diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c index 73fd7cc99430..fa50e8f6e7a9 100644 --- a/arch/sh/mm/kmap.c +++ b/arch/sh/mm/kmap.c @@ -27,10 +27,11 @@ void __init kmap_coherent_init(void) void *kmap_coherent(struct page *page, unsigned long addr) { + struct folio *folio = page_folio(page); enum fixed_addresses idx; unsigned long vaddr; - BUG_ON(!test_bit(PG_dcache_clean, &page->flags)); + BUG_ON(!test_bit(PG_dcache_clean, &folio->flags)); preempt_disable(); pagefault_disable(); From patchwork Wed Mar 15 05:14:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13175315 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 561DAC761A6 for ; Wed, 15 Mar 2023 05:15:10 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D9AF58E0005; Wed, 15 Mar 2023 01:14:54 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BB9218E000E; Wed, 15 Mar 2023 
01:14:54 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 80F518E000C; Wed, 15 Mar 2023 01:14:54 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 1F4298E0005 for ; Wed, 15 Mar 2023 01:14:54 -0400 (EDT) Received: from smtpin17.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id E1E4A140906 for ; Wed, 15 Mar 2023 05:14:53 +0000 (UTC) X-FDA: 80569968066.17.36BEAAD Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf08.hostedemail.com (Postfix) with ESMTP id 4564316000A for ; Wed, 15 Mar 2023 05:14:51 +0000 (UTC) Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Bn827xvC; spf=none (imf08.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1678857292; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=E+qaM1m3GAxfI6UYUGryaj1jY7UeUF5LhRTvOfinifs=; b=6tra7b7Q+rDMdY9xgPY5xjtOcsCMWNwO+i7Pe4GZjTx8BXgX524qYuYuZUBaWBRcEpOStc LrxsLeGuOoJtJ5ug67fx+OMqvyuyWMYufLIt5fxk/cSZqwCkrGn/nnSMRkOu9bIuFGqYr0 Ppw2hQOLv4myiLbdkP8F2eerJrMJs7E= ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Bn827xvC; spf=none (imf08.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; 
t=1678857292; a=rsa-sha256; cv=none; b=V5yGO1NfKyU1mZs5RUPS6eWObduYq8FfdCaLeoNV98v3H8N5DMmQ7uhTEcda5NoPAoUzwe StcqU6STpwMbEd/1F/ZCS/nqsXujuuhf5XxO+b5tuxq6neRpnyJvKkIuTWBNTS6M0Mx697 BGHL+E1RHvJNjci3kjdFQh3+wiMIg5w= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=E+qaM1m3GAxfI6UYUGryaj1jY7UeUF5LhRTvOfinifs=; b=Bn827xvCQmVylfF2Yl+v7IvyIz y1/dym7q2L3h/f9lYFKGMaGvaxIFAyijlm9E8NLKKFp4y9n76kvpnmYfdj/qTXqHr/NSHDY4CZErW XWJxOXB2/nFn36gbTXBsQPx7qkDSzPUBhFwlmbppNv2DRmI5xeO18IrtOqlZC0rfahPdkijc4g5ci ZGmGa7UwXAhe6g4xAbCFvmQoLuXqrw3twueIKBdwpPbQBoZYsjuwIzoQy/t2MR0av0AAHwAmlQxaO MVlMO2gELeNyNUFhpsY+tEHO7N9vNE3Cy2NIuVpxWEnKEGuFbPzzavp/7ZbIATHXRttwfYiG1CmC5 czXDl38g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pcJTN-00DYCi-7A; Wed, 15 Mar 2023 05:14:49 +0000 From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, "David S. 
Miller" , sparclinux@vger.kernel.org Subject: [PATCH v4 24/36] sparc32: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:32 +0000 Message-Id: <20230315051444.3229621-25-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 4564316000A X-Stat-Signature: mo9bctinggg8omj8n6s6ynmp9kwyw4jy X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1678857291-22450 X-HE-Meta: U2FsdGVkX1+d76wj4ND6aUbqX31jJBbI4XZjfbI636HFolcJYxB25HwKll7ZJkh0FUhxtYKL0IxPZ3F2ooWkSclkcmnkoenrYikgILBjyiFLEvlALNE4PE9Ds78kNesuD+LoByNpzMdBXeMGiFbHJf6PklVJE+jPeUuHi8mToB136YURYSsPhQp1d0t20D8yroMedMnoVgJKsUp7dlIIctt5eACMSq4oVFUKweVYbWdzIUxpAaL0Xis0QjFkGhhuznZOWyyzWXnDc2PBBxOBhq9zZ/hlA8xvXXG0+T7s6DufeCemW0pkByg1DNZILbN/LZRMHHk0ADgnN+NcPxwqTpYZd1iZTEE46JUPibnErLlFEmr3DyoDfm7s5BfE04hTXtymAtTIhBjnmt4nUgJjuBYW/uuPzji+YgngdW6ecHFM5SYB9RVsD1q9jQ2u+xBYBv+5XYXMJs2SH7k9CcylaD9Yy8F9OHV/qKa4X6rk2JH2OIoiMkHgtWcZVl3IN0vbJa/JbqW+trvkExUP63tSOQPrSJoT0jMym928gnnkLE3C9AdbMUddQ92UvhbHhqgvuMeJ7hslhS02RzAVfnR2aGe33AzShkDMu2eTOkUXBJU2C5Cb7LWq9QLrTd4gAOIzspocGLm4SH9oHNA0/icKmSzjq+vVcyjiHUlqYgam8clmpDY4DDRzV3OCGvxz4hTeonjQ91FhO4PRMjbxDxUknj+77nqECd2BhdaxAb1useIfsT2Aq2ND06IHq6yKMKi67ganunpJFpuFmCbaDSmiltyhYMH7GrNRal0y2iCulBPo0S1lynEEnUCgKujNB5Hre0/9POqX059mjQhnG1cJwploDA2K/oTRYWII3dmq02ShW1qygIotFg9zRT2PLiLrn51hK2dMp1DE4ktb+ZxEtB7Bf0NQW6cpIsQYaULcb3RlHjWISjSFB7CpShTVItJHqXmdAHEZXFpPLjkvnBW bLSrn0aU dhRCjMThUB1j0ZxSUQt4F7YXsQJe4IccT9/sGOrCYBTjHo+TTB4v+o+rdIh392PSH9pA+IfBTtRcXAmEnGxk5d0VzaiWCiEipaijmmGjgkLOVHcF2XhLMZIkER/uIB1HtPLIgTbdCUUBFJG3qcUWRLtmd2ixsCAtEmmHaO2mN9pwWY4/f8cTy0U7IVrO1PA6oyGBPiDc99kagYRq/A6kdTK2ZFXV/uo5q+52R2x8OPFkUOlznlk1jAYif4Tw3C0Q4OQ/s1+u5OJBUFuapote565xDXtosg7Tu0pNETnsK5iKuoBXr2r9htOKktW8CVFLpqN9X X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: 
owner-majordomo@kvack.org List-ID: Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Signed-off-by: Matthew Wilcox (Oracle) Cc: "David S. Miller" Cc: sparclinux@vger.kernel.org Acked-by: Mike Rapoport (IBM) --- arch/sparc/include/asm/cacheflush_32.h | 9 +++++++-- arch/sparc/include/asm/pgtable_32.h | 8 ++++---- arch/sparc/mm/init_32.c | 13 +++++++++++-- 3 files changed, 22 insertions(+), 8 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h index adb6991d0455..8dba35d63328 100644 --- a/arch/sparc/include/asm/cacheflush_32.h +++ b/arch/sparc/include/asm/cacheflush_32.h @@ -16,6 +16,7 @@ sparc32_cachetlb_ops->cache_page(vma, addr) #define flush_icache_range(start, end) do { } while (0) #define flush_icache_page(vma, pg) do { } while (0) +#define flush_icache_pages(vma, pg, nr) do { } while (0) #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ @@ -35,11 +36,15 @@ #define flush_page_for_dma(addr) \ sparc32_cachetlb_ops->page_for_dma(addr) -struct page; void sparc_flush_page_to_ram(struct page *page); +void sparc_flush_folio_to_ram(struct folio *folio); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) sparc_flush_page_to_ram(page) +#define flush_dcache_folio(folio) sparc_flush_folio_to_ram(folio) +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index d4330e3c57a6..7514611d14d3 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -101,8 +101,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) srmmu_swap((unsigned long *)ptep, pte_val(pteval)); } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) - static inline int 
srmmu_device_memory(unsigned long x) { return ((x & 0xF0000000) != 0); @@ -256,6 +254,7 @@ static inline pte_t pte_mkyoung(pte_t pte) return __pte(pte_val(pte) | SRMMU_REF); } +#define PFN_PTE_SHIFT (PAGE_SHIFT - 4) #define pfn_pte(pfn, prot) mk_pte(pfn_to_page(pfn), prot) static inline unsigned long pte_pfn(pte_t pte) @@ -268,7 +267,7 @@ static inline unsigned long pte_pfn(pte_t pte) */ return ~0UL; } - return (pte_val(pte) & SRMMU_PTE_PMASK) >> (PAGE_SHIFT-4); + return (pte_val(pte) & SRMMU_PTE_PMASK) >> PFN_PTE_SHIFT; } #define pte_page(pte) pfn_to_page(pte_pfn(pte)) @@ -318,6 +317,7 @@ void mmu_info(struct seq_file *m); #define FAULT_CODE_USER 0x4 #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0) void srmmu_mapiorange(unsigned int bus, unsigned long xpa, unsigned long xva, unsigned int len); @@ -422,7 +422,7 @@ static inline int io_remap_pfn_range(struct vm_area_struct *vma, ({ \ int __changed = !pte_same(*(__ptep), __entry); \ if (__changed) { \ - set_pte_at((__vma)->vm_mm, (__address), __ptep, __entry); \ + set_pte(__ptep, __entry); \ flush_tlb_page(__vma, __address); \ } \ __changed; \ diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index 9c0ea457bdf0..d96a14ffceeb 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -297,11 +297,20 @@ void sparc_flush_page_to_ram(struct page *page) { unsigned long vaddr = (unsigned long)page_address(page); - if (vaddr) - __flush_page_to_ram(vaddr); + __flush_page_to_ram(vaddr); } EXPORT_SYMBOL(sparc_flush_page_to_ram); +void sparc_flush_folio_to_ram(struct folio *folio) +{ + unsigned long vaddr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_page_to_ram(vaddr + i * PAGE_SIZE); +} +EXPORT_SYMBOL(sparc_flush_folio_to_ram); + static const pgprot_t protection_map[16] = { [VM_NONE] = PAGE_NONE, [VM_READ] = PAGE_READONLY, From patchwork Wed 
Mar 15 05:14:33 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13175332
From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, "David S.
Miller" , sparclinux@vger.kernel.org Subject: [PATCH v4 25/36] sparc64: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:33 +0000 Message-Id: <20230315051444.3229621-26-willy@infradead.org> In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org>
owner-majordomo@kvack.org List-ID: Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Convert the PG_dcache_dirty flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Cc: "David S. Miller" Cc: sparclinux@vger.kernel.org Acked-by: Mike Rapoport (IBM) --- arch/sparc/include/asm/cacheflush_64.h | 18 ++++-- arch/sparc/include/asm/pgtable_64.h | 24 ++++++-- arch/sparc/kernel/smp_64.c | 56 +++++++++++------- arch/sparc/mm/init_64.c | 78 +++++++++++++++----------- arch/sparc/mm/tlb.c | 5 +- 5 files changed, 116 insertions(+), 65 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h index b9341836597e..a9a719f04d06 100644 --- a/arch/sparc/include/asm/cacheflush_64.h +++ b/arch/sparc/include/asm/cacheflush_64.h @@ -35,20 +35,26 @@ void flush_icache_range(unsigned long start, unsigned long end); void __flush_icache_page(unsigned long); void __flush_dcache_page(void *addr, int flush_icache); -void flush_dcache_page_impl(struct page *page); +void flush_dcache_folio_impl(struct folio *folio); #ifdef CONFIG_SMP -void smp_flush_dcache_page_impl(struct page *page, int cpu); -void flush_dcache_page_all(struct mm_struct *mm, struct page *page); +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu); +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio); #else -#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) -#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) +#define smp_flush_dcache_folio_impl(folio, cpu) flush_dcache_folio_impl(folio) +#define flush_dcache_folio_all(mm, folio) flush_dcache_folio_impl(folio) #endif void __flush_dcache_range(unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void 
flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_icache_page(vma, pg) do { } while(0) +#define flush_icache_pages(vma, pg, nr) do { } while(0) void flush_ptrace_access(struct vm_area_struct *, struct page *, unsigned long uaddr, void *kaddr, diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 2dc8d4641734..49c37000e1b1 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -911,8 +911,19 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm, PAGE_SHIFT); } -#define set_pte_at(mm,addr,ptep,pte) \ - __set_pte_at((mm), (addr), (ptep), (pte), 0) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + addr += PAGE_SIZE; + } +} +#define set_ptes set_ptes #define pte_clear(mm,addr,ptep) \ set_pte_at((mm), (addr), (ptep), __pte(0UL)) @@ -931,8 +942,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, \ if (pfn_valid(this_pfn) && \ (((old_addr) ^ (new_addr)) & (1 << 13))) \ - flush_dcache_page_all(current->mm, \ - pfn_to_page(this_pfn)); \ + flush_dcache_folio_all(current->mm, \ + page_folio(pfn_to_page(this_pfn))); \ } \ newpte; \ }) @@ -947,7 +958,10 @@ struct seq_file; void mmu_info(struct seq_file *); struct vm_area_struct; -void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); +void update_mmu_cache_range(struct vm_area_struct *, unsigned long addr, + pte_t *ptep, unsigned int nr); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #ifdef CONFIG_TRANSPARENT_HUGEPAGE void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd); diff --git a/arch/sparc/kernel/smp_64.c 
b/arch/sparc/kernel/smp_64.c index a55295d1b924..90ef8677ac89 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -921,20 +921,26 @@ extern unsigned long xcall_flush_dcache_page_cheetah; #endif extern unsigned long xcall_flush_dcache_page_spitfire; -static inline void __local_flush_dcache_page(struct page *page) +static inline void __local_flush_dcache_folio(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio) + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } -void smp_flush_dcache_page_impl(struct page *page, int cpu) +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu) { int this_cpu; @@ -948,14 +954,14 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) this_cpu = get_cpu(); if (cpu == this_cpu) { - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); } else if (cpu_online(cpu)) { - void *pg_addr = page_address(page); + void *pg_addr = folio_address(folio); u64 data0 = 0; if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -963,18 +969,23 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpumask_of(cpu)); + unsigned int i, nr = folio_nr_pages(folio); 
+ + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpumask_of(cpu)); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } } put_cpu(); } -void flush_dcache_page_all(struct mm_struct *mm, struct page *page) +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio) { void *pg_addr; u64 data0; @@ -988,10 +999,10 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) atomic_inc(&dcpage_flushes); #endif data0 = 0; - pg_addr = page_address(page); + pg_addr = folio_address(folio); if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -999,13 +1010,18 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpu_online_mask); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpu_online_mask); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); preempt_enable(); } diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 04f9db0c3111..ab9aacbaf43c 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -195,21 +195,26 @@ atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0); #endif #endif -inline void flush_dcache_page_impl(struct page *page) +inline void flush_dcache_folio_impl(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + BUG_ON(tlb_type == hypervisor); #ifdef CONFIG_DEBUG_DCFLUSH atomic_inc(&dcpage_flushes); #endif #ifdef 
DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), - ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, + ((tlb_type == spitfire) && + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } @@ -218,10 +223,10 @@ inline void flush_dcache_page_impl(struct page *page) #define PG_dcache_cpu_mask \ ((1UL<flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) +#define dcache_dirty_cpu(folio) \ + (((folio)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) -static inline void set_dcache_dirty(struct page *page, int this_cpu) +static inline void set_dcache_dirty(struct folio *folio, int this_cpu) { unsigned long mask = this_cpu; unsigned long non_cpu_bits; @@ -238,11 +243,11 @@ static inline void set_dcache_dirty(struct page *page, int this_cpu) "bne,pn %%xcc, 1b\n\t" " nop" : /* no outputs */ - : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags) + : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags) : "g1", "g7"); } -static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) +static inline void clear_dcache_dirty_cpu(struct folio *folio, unsigned long cpu) { unsigned long mask = (1UL << PG_dcache_dirty); @@ -260,7 +265,7 @@ static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) " nop\n" "2:" : /* no outputs */ - : "r" (cpu), "r" (mask), "r" (&page->flags), + : "r" (cpu), "r" (mask), "r" (&folio->flags), "i" (PG_dcache_cpu_mask), "i" (PG_dcache_cpu_shift) : "g1", "g7"); @@ -284,9 +289,10 @@ static void flush_dcache(unsigned long pfn) page = pfn_to_page(pfn); if (page) { + struct folio *folio = page_folio(page); unsigned long pg_flags; - pg_flags = 
page->flags; + pg_flags = folio->flags; if (pg_flags & (1UL << PG_dcache_dirty)) { int cpu = ((pg_flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask); @@ -296,11 +302,11 @@ static void flush_dcache(unsigned long pfn) * in the SMP case. */ if (cpu == this_cpu) - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); else - smp_flush_dcache_page_impl(page, cpu); + smp_flush_dcache_folio_impl(folio, cpu); - clear_dcache_dirty_cpu(page, cpu); + clear_dcache_dirty_cpu(folio, cpu); put_cpu(); } @@ -388,12 +394,14 @@ bool __init arch_hugetlb_valid_size(unsigned long size) } #endif /* CONFIG_HUGETLB_PAGE */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { struct mm_struct *mm; unsigned long flags; bool is_huge_tsb; pte_t pte = *ptep; + unsigned int i; if (tlb_type != hypervisor) { unsigned long pfn = pte_pfn(pte); @@ -440,15 +448,21 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * } } #endif - if (!is_huge_tsb) - __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, - address, pte_val(pte)); + if (!is_huge_tsb) { + for (i = 0; i < nr; i++) { + __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, + address, pte_val(pte)); + address += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } + } spin_unlock_irqrestore(&mm->context.lock, flags); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { + unsigned long pfn = folio_pfn(folio); struct address_space *mapping; int this_cpu; @@ -459,35 +473,35 @@ void flush_dcache_page(struct page *page) * is merely the zero page. The 'bigcore' testcase in GDB * causes this case to run millions of times. 
*/ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; this_cpu = get_cpu(); - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) { - int dirty = test_bit(PG_dcache_dirty, &page->flags); + bool dirty = test_bit(PG_dcache_dirty, &folio->flags); if (dirty) { - int dirty_cpu = dcache_dirty_cpu(page); + int dirty_cpu = dcache_dirty_cpu(folio); if (dirty_cpu == this_cpu) goto out; - smp_flush_dcache_page_impl(page, dirty_cpu); + smp_flush_dcache_folio_impl(folio, dirty_cpu); } - set_dcache_dirty(page, this_cpu); + set_dcache_dirty(folio, this_cpu); } else { /* We could delay the flush for the !page_mapping * case too. But that case is for exec env/arg * pages and those are %99 certainly going to get * faulted into the tlb (and thus flushed) anyways. */ - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); } out: put_cpu(); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void __kprobes flush_icache_range(unsigned long start, unsigned long end) { @@ -2280,10 +2294,10 @@ void __init paging_init(void) setup_page_offset(); /* These build time checkes make sure that the dcache_dirty_cpu() - * page->flags usage will work. + * folio->flags usage will work. * * When a page gets marked as dcache-dirty, we store the - * cpu number starting at bit 32 in the page->flags. Also, + * cpu number starting at bit 32 in the folio->flags. Also, * functions like clear_dcache_dirty_cpu use the cpu mask * in 13-bit signed-immediate instruction fields. 
*/ diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c index 9a725547578e..3fa6a070912d 100644 --- a/arch/sparc/mm/tlb.c +++ b/arch/sparc/mm/tlb.c @@ -118,6 +118,7 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, unsigned long paddr, pfn = pte_pfn(orig); struct address_space *mapping; struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) goto no_cache_flush; @@ -127,13 +128,13 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, goto no_cache_flush; /* A real file page? */ - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!mapping) goto no_cache_flush; paddr = (unsigned long) page_address(page); if ((paddr ^ vaddr) & (1 << 13)) - flush_dcache_page_all(mm, page); + flush_dcache_folio_all(mm, folio); } no_cache_flush: From patchwork Wed Mar 15 05:14:34 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13175318
From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Richard Weinberger , Anton Ivanov , Johannes Berg , linux-um@lists.infradead.org Subject: [PATCH v4 26/36] um: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:34 +0000 Message-Id: <20230315051444.3229621-27-willy@infradead.org> In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org>
Add PFN_PTE_SHIFT and update_mmu_cache_range().
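For context: once an architecture defines PFN_PTE_SHIFT (as this um patch does), the generic set_ptes() fallback in the series can construct each successive PTE by stepping the encoded frame number. The following is an editorial sketch of that loop with simplified stand-in types — pte_t, set_ptes(), and the constants here are illustrative, not the kernel's real definitions:

```c
#include <stdint.h>
#include <assert.h>

#define PAGE_SHIFT    12
#define PFN_PTE_SHIFT PAGE_SHIFT        /* provided by the architecture */

typedef struct { uint64_t val; } pte_t; /* stand-in for the kernel's pte_t */

/* Set 'nr' consecutive PTEs; each entry maps the next physical frame. */
static void set_ptes(pte_t *ptep, pte_t pte, unsigned int nr)
{
    for (;;) {
        *ptep = pte;                    /* stands in for set_pte(ptep, pte) */
        if (--nr == 0)
            break;
        ptep++;
        /* advance the pfn field encoded in the PTE value */
        pte.val += 1ULL << PFN_PTE_SHIFT;
    }
}
```

The low permission bits stay untouched because the increment only changes bits at or above PFN_PTE_SHIFT, which is why a single shift constant is all the generic code needs from each architecture.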
Signed-off-by: Matthew Wilcox (Oracle) Cc: Richard Weinberger Cc: Anton Ivanov Cc: Johannes Berg Cc: linux-um@lists.infradead.org Acked-by: Mike Rapoport (IBM) --- arch/um/include/asm/pgtable.h | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h index a70d1618eb35..ea5f8122f128 100644 --- a/arch/um/include/asm/pgtable.h +++ b/arch/um/include/asm/pgtable.h @@ -242,11 +242,7 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval) if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *pteptr, pte_t pteval) -{ - set_pte(pteptr, pteval); -} +#define PFN_PTE_SHIFT PAGE_SHIFT #define __HAVE_ARCH_PTE_SAME static inline int pte_same(pte_t pte_a, pte_t pte_b) @@ -290,6 +286,7 @@ struct mm_struct; extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr); #define update_mmu_cache(vma,address,ptep) do {} while (0) +#define update_mmu_cache_range(vma, address, ptep, nr) do {} while (0) /* * Encode/decode swap entries and swap PTEs. 
Swap PTEs are all PTEs that From patchwork Wed Mar 15 05:14:35 2023 X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13175321
From: "Matthew Wilcox (Oracle)" To: linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H.
Peter Anvin" Subject: [PATCH v4 27/36] x86: Implement the new page table range API Date: Wed, 15 Mar 2023 05:14:35 +0000 Message-Id: <20230315051444.3229621-28-willy@infradead.org> In-Reply-To: <20230315051444.3229621-1-willy@infradead.org> References: <20230315051444.3229621-1-willy@infradead.org>
Add PFN_PTE_SHIFT and a noop update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Cc: "H. Peter Anvin"
Acked-by: Mike Rapoport (IBM)
---
 arch/x86/include/asm/pgtable.h | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 1031025730d0..b237878061c4 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -184,6 +184,8 @@ static inline int pte_special(pte_t pte)
 
 static inline u64 protnone_mask(u64 val);
 
+#define PFN_PTE_SHIFT	PAGE_SHIFT
+
 static inline unsigned long pte_pfn(pte_t pte)
 {
 	phys_addr_t pfn = pte_val(pte);
@@ -1019,13 +1021,6 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 	return res;
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pte)
-{
-	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
-	set_pte(ptep, pte);
-}
-
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 			      pmd_t *pmdp, pmd_t pmd)
 {
@@ -1291,6 +1286,10 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *ptep)
 {
 }
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
+{
+}
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 		unsigned long addr, pmd_t *pmd)
 {

From patchwork Wed Mar 15 05:14:36 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175327
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org, Max Filippov, linux-xtensa@linux-xtensa.org
Subject: [PATCH v4 28/36] xtensa: Implement the new page table range API
Date: Wed, 15 Mar 2023 05:14:36 +0000
Message-Id: <20230315051444.3229621-29-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Max Filippov
Cc: linux-xtensa@linux-xtensa.org
Acked-by: Mike Rapoport (IBM)
---
 arch/xtensa/include/asm/cacheflush.h |  9 ++-
 arch/xtensa/include/asm/pgtable.h    | 17 +++---
 arch/xtensa/mm/cache.c               | 83 ++++++++++++++++------------
 3 files changed, 62 insertions(+), 47 deletions(-)

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 7b4359312c25..35153f6725e4 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -119,8 +119,14 @@ void flush_cache_page(struct vm_area_struct*,
 #define flush_cache_vmap(start,end)	flush_cache_all()
 #define flush_cache_vunmap(start,end)	flush_cache_all()
 
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 void local_flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
@@ -156,6 +162,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 
 /* This is not required, see Documentation/core-api/cachetlb.rst */
 #define	flush_icache_page(vma,page)		do { } while (0)
+#define	flush_icache_pages(vma, page, nr)	do { } while (0)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)

diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index fc7a14884c6c..80bc70251aad 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -274,6 +274,7 @@ static inline pte_t pte_mkwrite(pte_t pte)
  * and a page entry and page directory to the page they refer to.
  */
 
+#define PFN_PTE_SHIFT		PAGE_SHIFT
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
 #define pte_same(a,b)		(pte_val(a) == pte_val(b))
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
@@ -301,15 +302,9 @@ static inline void update_pte(pte_t *ptep, pte_t pteval)
 
 struct mm_struct;
 
-static inline void
-set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pteval)
-{
-	update_pte(ptep, pteval);
-}
-
-static inline void set_pte(pte_t *ptep, pte_t pteval)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	update_pte(ptep, pteval);
+	update_pte(ptep, pte);
 }
 
 static inline void
@@ -407,8 +402,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 
 #else
 
-extern void update_mmu_cache(struct vm_area_struct * vma,
-		unsigned long address, pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr);
+#define update_mmu_cache(vma, address, ptep) \
+	update_mmu_cache_range(vma, address, ptep, 1)
 
 typedef pte_t *pte_addr_t;

diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 19e5a478a7e8..27bd798e4d89 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -121,9 +121,9 @@ EXPORT_SYMBOL(copy_user_highpage);
  *
  */
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	/*
 	 * If we have a mapping but the page is not mapped to user-space
@@ -132,14 +132,14 @@ void flush_dcache_page(struct page *page)
 	 */
 
 	if (mapping && !mapping_mapped(mapping)) {
-		if (!test_bit(PG_arch_1, &page->flags))
-			set_bit(PG_arch_1, &page->flags);
+		if (!test_bit(PG_arch_1, &folio->flags))
+			set_bit(PG_arch_1, &folio->flags);
 		return;
 
 	} else {
-
-		unsigned long phys = page_to_phys(page);
-		unsigned long temp = page->index << PAGE_SHIFT;
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
+		unsigned long temp = folio_pos(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
 		unsigned long alias = !(DCACHE_ALIAS_EQ(temp, phys));
 		unsigned long virt;
 
@@ -154,22 +154,26 @@ void flush_dcache_page(struct page *page)
 			return;
 
 		preempt_disable();
-		virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(virt, phys);
+		for (i = 0; i < nr; i++) {
+			virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(virt, phys);
 
-		virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
+			virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
 
-		if (alias)
-			__flush_invalidate_dcache_page_alias(virt, phys);
+			if (alias)
+				__flush_invalidate_dcache_page_alias(virt, phys);
 
-		if (mapping)
-			__invalidate_icache_page_alias(virt, phys);
+			if (mapping)
+				__invalidate_icache_page_alias(virt, phys);
+			phys += PAGE_SIZE;
+			temp += PAGE_SIZE;
+		}
 		preempt_enable();
 	}
 
 	/* There shouldn't be an entry in the cache for this page anymore. */
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 /*
  * For now, flush the whole cache. FIXME??
@@ -207,45 +211,52 @@ EXPORT_SYMBOL(local_flush_cache_page);
 
 #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
 
-void
-update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
+	unsigned int i;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
+	folio = page_folio(pfn_to_page(pfn));
 
-	/* Invalidate old entry in TLBs */
-
-	flush_tlb_page(vma, addr);
+	/* Invalidate old entries in TLBs */
+	for (i = 0; i < nr; i++)
+		flush_tlb_page(vma, addr + i * PAGE_SIZE);
+	nr = folio_nr_pages(folio);
 
 #if (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-	if (!PageReserved(page) && test_bit(PG_arch_1, &page->flags)) {
-		unsigned long phys = page_to_phys(page);
+	if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags)) {
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
 		unsigned long tmp;
 
 		preempt_disable();
-		tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		__invalidate_icache_page_alias(tmp, phys);
+		for (i = 0; i < nr; i++) {
+			tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			__invalidate_icache_page_alias(tmp, phys);
+			phys += PAGE_SIZE;
+		}
 		preempt_enable();
 
-		clear_bit(PG_arch_1, &page->flags);
+		clear_bit(PG_arch_1, &folio->flags);
 	}
 #else
-	if (!PageReserved(page) && !test_bit(PG_arch_1, &page->flags)
+	if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags)
 	    && (vma->vm_flags & VM_EXEC) != 0) {
-		unsigned long paddr = (unsigned long)kmap_atomic(page);
-		__flush_dcache_page(paddr);
-		__invalidate_icache_page(paddr);
-		set_bit(PG_arch_1, &page->flags);
-		kunmap_atomic((void *)paddr);
+		for (i = 0; i < nr; i++) {
+			void *paddr = kmap_local_folio(folio, i * PAGE_SIZE);
+			__flush_dcache_page((unsigned long)paddr);
+			__invalidate_icache_page((unsigned long)paddr);
+			kunmap_local(paddr);
+		}
+		set_bit(PG_arch_1, &folio->flags);
 	}
 #endif
 }

From patchwork Wed Mar 15 05:14:37 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175325
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 29/36] mm: Remove page_mapping_file()
Date: Wed, 15 Mar 2023 05:14:37 +0000
Message-Id: <20230315051444.3229621-30-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
This function has no more users.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Anshuman Khandual
---
 include/linux/pagemap.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e56c2023aa0e..a87113055b9c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -394,14 +394,6 @@ static inline struct address_space *page_file_mapping(struct page *page)
 	return folio_file_mapping(page_folio(page));
 }
 
-/*
- * For file cache pages, return the address_space, otherwise return NULL
- */
-static inline struct address_space *page_mapping_file(struct page *page)
-{
-	return folio_flush_mapping(page_folio(page));
-}
-
 /**
  * folio_inode - Get the host inode for this folio.
  * @folio: The folio.

From patchwork Wed Mar 15 05:14:38 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175330
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 30/36] mm: Rationalise flush_icache_pages() and flush_icache_page()
Date: Wed, 15 Mar 2023 05:14:38 +0000
Message-Id: <20230315051444.3229621-31-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>

Move the default (no-op) implementation of flush_icache_pages() to
<linux/cacheflush.h> from <asm-generic/cacheflush.h>.  Remove the
flush_icache_page() wrapper from each architecture into <linux/cacheflush.h>.
Signed-off-by: Matthew Wilcox (Oracle) --- arch/alpha/include/asm/cacheflush.h | 5 +---- arch/arc/include/asm/cacheflush.h | 9 --------- arch/arm/include/asm/cacheflush.h | 7 ------- arch/csky/abiv1/inc/abi/cacheflush.h | 1 - arch/csky/abiv2/inc/abi/cacheflush.h | 1 - arch/hexagon/include/asm/cacheflush.h | 2 +- arch/loongarch/include/asm/cacheflush.h | 2 -- arch/m68k/include/asm/cacheflush_mm.h | 1 - arch/mips/include/asm/cacheflush.h | 6 ------ arch/nios2/include/asm/cacheflush.h | 2 +- arch/nios2/mm/cacheflush.c | 1 + arch/parisc/include/asm/cacheflush.h | 2 +- arch/sh/include/asm/cacheflush.h | 2 +- arch/sparc/include/asm/cacheflush_32.h | 2 -- arch/sparc/include/asm/cacheflush_64.h | 3 --- arch/xtensa/include/asm/cacheflush.h | 4 ---- include/asm-generic/cacheflush.h | 12 ------------ include/linux/cacheflush.h | 9 +++++++++ 18 files changed, 15 insertions(+), 56 deletions(-) diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h index 3956460e69e2..36a7e924c3b9 100644 --- a/arch/alpha/include/asm/cacheflush.h +++ b/arch/alpha/include/asm/cacheflush.h @@ -53,10 +53,6 @@ extern void flush_icache_user_page(struct vm_area_struct *vma, #define flush_icache_user_page flush_icache_user_page #endif /* CONFIG_SMP */ -/* This is used only in __do_fault and do_swap_page. */ -#define flush_icache_page(vma, page) \ - flush_icache_user_page((vma), (page), 0, 0) - /* * Both implementations of flush_icache_user_page flush the entire * address space, so one call, no matter how many pages. 
@@ -66,6 +62,7 @@ static inline void flush_icache_pages(struct vm_area_struct *vma, { flush_icache_user_page(vma, page, 0, 0); } +#define flush_icache_pages flush_icache_pages #include diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h index 04f65f588510..bd5b1a9a0544 100644 --- a/arch/arc/include/asm/cacheflush.h +++ b/arch/arc/include/asm/cacheflush.h @@ -18,15 +18,6 @@ #include #include -/* - * Semantically we need this because icache doesn't snoop dcache/dma. - * However ARC Cache flush requires paddr as well as vaddr, latter not available - * in the flush_icache_page() API. So we no-op it but do the equivalent work - * in update_mmu_cache() - */ -#define flush_icache_page(vma, page) -#define flush_icache_pages(vma, page, nr) - void flush_cache_all(void); void flush_icache_range(unsigned long kstart, unsigned long kend); diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index 841e268d2374..f6181f69577f 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -321,13 +321,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma, #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages) #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages) -/* - * We don't appear to need to do anything here. In fact, if we did, we'd - * duplicate cache flushing elsewhere performed by flush_dcache_page(). - */ -#define flush_icache_page(vma,page) do { } while (0) -#define flush_icache_pages(vma, page, nr) do { } while (0) - /* * flush_cache_vmap() is used when creating mappings (eg, via vmap, * vmalloc, ioremap etc) in kernel space for pages. 
On non-VIPT diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h index 0d6cb65624c4..908d8b0bc4fd 100644 --- a/arch/csky/abiv1/inc/abi/cacheflush.h +++ b/arch/csky/abiv1/inc/abi/cacheflush.h @@ -45,7 +45,6 @@ extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, u #define flush_cache_vmap(start, end) cache_wbinv_all() #define flush_cache_vunmap(start, end) cache_wbinv_all() -#define flush_icache_page(vma, page) do {} while (0); #define flush_icache_range(start, end) cache_wbinv_range(start, end) #define flush_icache_mm_range(mm, start, end) cache_wbinv_range(start, end) #define flush_icache_deferred(mm) do {} while (0); diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h index 9c728933a776..40be16907267 100644 --- a/arch/csky/abiv2/inc/abi/cacheflush.h +++ b/arch/csky/abiv2/inc/abi/cacheflush.h @@ -33,7 +33,6 @@ static inline void flush_dcache_page(struct page *page) #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) -#define flush_icache_page(vma, page) do { } while (0) #define flush_icache_range(start, end) cache_wbinv_range(start, end) diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h index 63ca314ede89..bdacf72d97e1 100644 --- a/arch/hexagon/include/asm/cacheflush.h +++ b/arch/hexagon/include/asm/cacheflush.h @@ -18,7 +18,7 @@ * - flush_cache_range(vma, start, end) flushes a range of pages * - flush_icache_range(start, end) flush a range of instructions * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache - * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache + * - flush_icache_pages(vma, pg, nr) flushes(invalidates) nr pages for icache * * Need to doublecheck which one is really needed for ptrace stuff to work. 
 */

diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 7907eb42bfbd..326ac6f1b27c 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -46,8 +46,6 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_page(vma, vmaddr, pfn)		do { } while (0)
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
-#define flush_icache_page(vma, page)			do { } while (0)
-#define flush_icache_pages(vma, page)			do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
 #define flush_dcache_folio(folio)			do { } while (0)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 88eb85e81ef6..ed12358c4783 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -261,7 +261,6 @@ static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 #define flush_icache_pages(vma, page, nr)	\
	__flush_pages_to_ram(page_address(page), nr)
-#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)

 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
				    unsigned long addr, int len);

diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index 2683cade42ef..043e50effc62 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -82,12 +82,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
		__flush_anon_page(page, vmaddr);
 }

-static inline void flush_icache_pages(struct vm_area_struct *vma,
-	struct page *page, unsigned int nr)
-{
-}
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
-
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern
 void (*local_flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*__flush_icache_user_range)(unsigned long start,

diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 8624ca83cffe..7c48c5213fb7 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -35,7 +35,7 @@ void flush_dcache_folio(struct folio *folio);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1);
+#define flush_icache_pages flush_icache_pages

 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)

diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 471485a84b2c..2565767b98a3 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -147,6 +147,7 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
	__flush_dcache(start, end);
	__flush_icache(start, end);
 }
+#define flush_icache_pages flush_icache_pages

 void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
		unsigned long pfn)

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 2cdc0ea562d6..cd0bfbd244db 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -56,7 +56,7 @@ static inline void flush_dcache_page(struct page *page)

 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages

 #define flush_icache_range(s,e)		do { \
	flush_kernel_dcache_range_asm(s,e); \

diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 9fceef6f3e00..878b6b551bd2 100644
---
 a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -53,7 +53,7 @@ extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_user_range flush_icache_range
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages
 extern void flush_cache_sigtramp(unsigned long address);

 struct flusher_data {

diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index 8dba35d63328..21f6c918238b 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -15,8 +15,6 @@
 #define flush_cache_page(vma,addr,pfn) \
	sparc32_cachetlb_ops->cache_page(vma, addr)
 #define flush_icache_range(start, end)		do { } while (0)
-#define flush_icache_page(vma, pg)		do { } while (0)
-#define flush_icache_pages(vma, pg, nr)	do { } while (0)

 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
	do { \

diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
index a9a719f04d06..0e879004efff 100644
--- a/arch/sparc/include/asm/cacheflush_64.h
+++ b/arch/sparc/include/asm/cacheflush_64.h
@@ -53,9 +53,6 @@ static inline void flush_dcache_page(struct page *page)
	flush_dcache_folio(page_folio(page));
 }

-#define flush_icache_page(vma, pg)	do { } while(0)
-#define flush_icache_pages(vma, pg, nr)	do { } while(0)
-
 void flush_ptrace_access(struct vm_area_struct *, struct page *,
			 unsigned long uaddr, void *kaddr,
			 unsigned long len, int write);

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 35153f6725e4..785a00ce83c1 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -160,10 +160,6 @@ void local_flush_cache_page(struct vm_area_struct *vma,
		__invalidate_icache_range(start,(end) - (start)); \
	} while (0)

-/*
 This is not required, see Documentation/core-api/cachetlb.rst */
-#define flush_icache_page(vma,page)		do { } while (0)
-#define flush_icache_pages(vma, page, nr)	do { } while (0)
-
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 09d51a680765..84ec53ccc450 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -77,18 +77,6 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #define flush_icache_user_range flush_icache_range
 #endif

-#ifndef flush_icache_page
-static inline void flush_icache_pages(struct vm_area_struct *vma,
-				      struct page *page, unsigned int nr)
-{
-}
-
-static inline void flush_icache_page(struct vm_area_struct *vma,
-				     struct page *page)
-{
-}
-#endif
-
 #ifndef flush_icache_user_page
 static inline void flush_icache_user_page(struct vm_area_struct *vma,
					   struct page *page,

diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index 82136f3fcf54..55f297b2c23f 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -17,4 +17,13 @@ static inline void flush_dcache_folio(struct folio *folio)
 #define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */

+#ifndef flush_icache_pages
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+#endif
+
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+
 #endif /* _LINUX_CACHEFLUSH_H */

From patchwork Wed Mar 15 05:14:39 2023
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 31/36] mm: Tidy up set_ptes definition
Date: Wed, 15 Mar 2023 05:14:39 +0000
Message-Id: <20230315051444.3229621-32-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
Now that all architectures are converted, we can remove the PFN_PTE_SHIFT
ifdef and we can define set_pte_at() unconditionally.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Anshuman Khandual
---
 include/linux/pgtable.h | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a755fe94b4b4..a54b9197f2f2 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -173,7 +173,6 @@ static inline int pmd_young(pmd_t pmd)
 #endif

 #ifndef set_ptes
-#ifdef PFN_PTE_SHIFT
 /**
 * set_ptes - Map consecutive pages to a contiguous range of addresses.
 * @mm: Address space to map the pages into.
@@ -201,13 +200,8 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	}
 }
-#ifndef set_pte_at
-#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
-#endif
 #endif
-#else
 #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
-#endif

 #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 extern int ptep_set_access_flags(struct vm_area_struct *vma,

From patchwork Wed Mar 15 05:14:40 2023
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 32/36] mm: Use flush_icache_pages() in do_set_pmd()
Date: Wed, 15 Mar 2023 05:14:40 +0000
Message-Id: <20230315051444.3229621-33-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
Push the iteration over each page down to the architectures (many can
flush the entire THP without iteration).
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Anshuman Khandual
---
 mm/memory.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c5f1bf906d0c..6aa21e8f3753 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4209,7 +4209,6 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
	bool write = vmf->flags & FAULT_FLAG_WRITE;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	pmd_t entry;
-	int i;
	vm_fault_t ret = VM_FAULT_FALLBACK;

	if (!transhuge_vma_suitable(vma, haddr))
@@ -4242,8 +4241,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
	if (unlikely(!pmd_none(*vmf->pmd)))
		goto out;

-	for (i = 0; i < HPAGE_PMD_NR; i++)
-		flush_icache_page(vma, page + i);
+	flush_icache_pages(vma, page, HPAGE_PMD_NR);

	entry = mk_huge_pmd(page, vma->vm_page_prot);
	if (write)

From patchwork Wed Mar 15 05:14:41 2023
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v4 33/36] filemap: Add filemap_map_folio_range()
Date: Wed, 15 Mar 2023 05:14:41 +0000
Message-Id: <20230315051444.3229621-34-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
From: Yin Fengwei

filemap_map_folio_range() maps a partial or full folio. Compared to the
original filemap_map_pages(), it updates the refcount once per folio
instead of once per page, which gives a minor performance improvement for
large folios.

With a will-it-scale.page_fault3-like app (the file write fault testing
changed to read fault testing; trying to upstream it to will-it-scale at
[1]), we got a 2% performance gain on a 48C/96T Cascade Lake test box with
96 processes running against xfs.
[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 98 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 54 insertions(+), 44 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index a34abfe8c654..6e2b0778db45 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2199,16 +2199,6 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
 }
 EXPORT_SYMBOL(filemap_get_folios);

-static inline
-bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max)
-{
-	if (!folio_test_large(folio) || folio_test_hugetlb(folio))
-		return false;
-	if (index >= max)
-		return false;
-	return index < folio->index + folio_nr_pages(folio) - 1;
-}
-
 /**
 * filemap_get_folios_contig - Get a batch of contiguous folios
 * @mapping:	The address_space to search
@@ -3480,6 +3470,53 @@ static inline struct folio *next_map_page(struct address_space *mapping,
					   mapping, xas, end_pgoff);
 }

+/*
+ * Map page range [start_page, start_page + nr_pages) of folio.
+ * start_page is gotten from start by folio_page(folio, start)
+ */
+static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
+			struct folio *folio, unsigned long start,
+			unsigned long addr, unsigned int nr_pages)
+{
+	vm_fault_t ret = 0;
+	struct vm_area_struct *vma = vmf->vma;
+	struct file *file = vma->vm_file;
+	struct page *page = folio_page(folio, start);
+	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
+	unsigned int ref_count = 0, count = 0;
+
+	do {
+		if (PageHWPoison(page))
+			continue;
+
+		if (mmap_miss > 0)
+			mmap_miss--;
+
+		/*
+		 * NOTE: If there're PTE markers, we'll leave them to be
+		 * handled in the specific fault path, and it'll prohibit the
+		 * fault-around logic.
+		 */
+		if (!pte_none(*vmf->pte))
+			continue;
+
+		if (vmf->address == addr)
+			ret = VM_FAULT_NOPAGE;
+
+		ref_count++;
+		do_set_pte(vmf, page, addr);
+		update_mmu_cache(vma, addr, vmf->pte);
+	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+
+	/* Restore the vmf->pte */
+	vmf->pte -= nr_pages;
+
+	folio_ref_add(folio, ref_count);
+	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
+
+	return ret;
+}
+
 vm_fault_t filemap_map_pages(struct vm_fault *vmf,
			      pgoff_t start_pgoff, pgoff_t end_pgoff)
 {
@@ -3490,9 +3527,9 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
	unsigned long addr;
	XA_STATE(xas, &mapping->i_pages, start_pgoff);
	struct folio *folio;
-	struct page *page;
	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
	vm_fault_t ret = 0;
+	int nr_pages = 0;

	rcu_read_lock();
	folio = first_map_page(mapping, &xas, end_pgoff);
@@ -3507,45 +3544,18 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
	addr = vma->vm_start + ((start_pgoff - vma->vm_pgoff) << PAGE_SHIFT);
	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
	do {
-again:
-		page = folio_file_page(folio, xas.xa_index);
-		if (PageHWPoison(page))
-			goto unlock;
-
-		if (mmap_miss > 0)
-			mmap_miss--;
+		unsigned long end;

		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
		vmf->pte += xas.xa_index - last_pgoff;
		last_pgoff = xas.xa_index;
+		end = folio->index + folio_nr_pages(folio) - 1;
+		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;

-		/*
-		 * NOTE: If there're PTE markers, we'll leave them to be
-		 * handled in the specific fault path, and it'll prohibit the
-		 * fault-around logic.
- */ - if (!pte_none(*vmf->pte)) - goto unlock; + ret |= filemap_map_folio_range(vmf, folio, + xas.xa_index - folio->index, addr, nr_pages); + xas.xa_index += nr_pages; - /* We're about to handle the fault */ - if (vmf->address == addr) - ret = VM_FAULT_NOPAGE; - - do_set_pte(vmf, page, addr); - /* no need to invalidate: a not-present page won't be cached */ - update_mmu_cache(vma, addr, vmf->pte); - if (folio_more_pages(folio, xas.xa_index, end_pgoff)) { - xas.xa_index++; - folio_ref_inc(folio); - goto again; - } - folio_unlock(folio); - continue; -unlock: - if (folio_more_pages(folio, xas.xa_index, end_pgoff)) { - xas.xa_index++; - goto again; - } folio_unlock(folio); folio_put(folio); } while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL); From patchwork Wed Mar 15 05:14:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13175358 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6890C6FD1D for ; Wed, 15 Mar 2023 05:32:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 544478E0008; Wed, 15 Mar 2023 01:32:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4F4938E0001; Wed, 15 Mar 2023 01:32:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 394F68E0008; Wed, 15 Mar 2023 01:32:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 2B4E28E0001 for ; Wed, 15 Mar 2023 01:32:41 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 0462D1C695E for ; Wed, 15 Mar 2023 05:32:40 
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v4 34/36] rmap: add folio_add_file_rmap_range()
Date: Wed, 15 Mar 2023 05:14:42 +0000
Message-Id: <20230315051444.3229621-35-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
From: Yin Fengwei

folio_add_file_rmap_range() allows adding a PTE mapping for a specific
range of a file folio.  Compared to page_add_file_rmap(), it batches the
__lruvec_stat updates for large folios.
Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 60 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b87d01660412..a3825ce81102 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
+		struct vm_area_struct *, bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);

diff --git a/mm/rmap.c b/mm/rmap.c
index 4898e10c569a..a91906b28835 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1301,31 +1301,39 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }

 /**
- * page_add_file_rmap - add pte mapping to a file page
- * @page:	the page to add the mapping to
+ * folio_add_file_rmap_range - add pte mapping to page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @nr_pages:	The number of pages which will be mapped
 * @vma:	the vm area in which the mapping is added
 * @compound:	charge the page as compound or small page
 *
+ * The page range of folio is defined by [first_page, first_page + nr_pages)
+ *
 * The caller needs to hold the pte lock.
  */
-void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
-		bool compound)
+void folio_add_file_rmap_range(struct folio *folio, struct page *page,
+			unsigned int nr_pages, struct vm_area_struct *vma,
+			bool compound)
 {
-	struct folio *folio = page_folio(page);
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int nr = 0, nr_pmdmapped = 0;
-	bool first;
+	unsigned int nr_pmdmapped = 0, first;
+	int nr = 0;

-	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
+	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);

 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
-		first = atomic_inc_and_test(&page->_mapcount);
-		nr = first;
-		if (first && folio_test_large(folio)) {
-			nr = atomic_inc_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
+		do {
+			first = atomic_inc_and_test(&page->_mapcount);
+			if (first && folio_test_large(folio)) {
+				first = atomic_inc_return_relaxed(mapped);
+				first = (first < COMPOUND_MAPPED);
+			}
+
+			if (first)
+				nr++;
+		} while (page++, --nr_pages > 0);
 	} else if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */

@@ -1354,6 +1362,30 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }

+/**
+ * page_add_file_rmap - add pte mapping to a file page
+ * @page:	the page to add the mapping to
+ * @vma:	the vm area in which the mapping is added
+ * @compound:	charge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
+		bool compound)
+{
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages;
+
+	VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
+
+	if (likely(!compound))
+		nr_pages = 1;
+	else
+		nr_pages = folio_nr_pages(folio);
+
+	folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page:	page to remove mapping from

From patchwork Wed Mar 15 05:14:43 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175356
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v4 35/36] mm: Convert do_set_pte() to set_pte_range()
Date: Wed, 15 Mar 2023 05:14:43 +0000
Message-Id: <20230315051444.3229621-36-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
From: Yin Fengwei

set_pte_range() allows one to set up page table entries for a specific
range.  It takes advantage of batched rmap updates for large folios, and
it now takes care of calling update_mmu_cache_range() itself.

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 27 +++++++++++++++------------
 4 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 7de7a7272a5e..922886fefb7f 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -663,7 +663,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with page table locked and must
 not block.  If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ee755bb4e1c1..81788c985a8c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1299,7 +1299,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }

 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);

diff --git a/mm/filemap.c b/mm/filemap.c
index 6e2b0778db45..e2317623dcbf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3504,8 +3504,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;

 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);

 	/* Restore the vmf->pte */
diff --git a/mm/memory.c b/mm/memory.c
index 6aa21e8f3753..9a654802f104 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4274,7 +4274,8 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif

-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
@@ -4282,7 +4283,7 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 	bool prefault = vmf->address != addr;
 	pte_t entry;

-	flush_icache_page(vma, page);
+	flush_icache_pages(vma, page, nr);
 	entry = mk_pte(page, vma->vm_page_prot);

 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4296,14 +4297,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vma, addr, vmf->pte, nr);
 }

 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4376,11 +4381,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)

 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);

+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);

From patchwork Wed Mar 15 05:14:44 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13175357
From: "Matthew Wilcox (Oracle)"
To: linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v4 36/36] filemap: Batch PTE mappings
Date: Wed, 15 Mar 2023 05:14:44 +0000
Message-Id: <20230315051444.3229621-37-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-1-willy@infradead.org>
References: <20230315051444.3229621-1-willy@infradead.org>
From: Yin Fengwei

Call set_pte_range() once per contiguous range of the folio instead of
once per page.  This batches the updates to mm counters and the rmap.

With a will-it-scale.page_fault3-like app (the file write fault test
changed to a read fault test; submitted upstream to will-it-scale at
[1]), this gives a 15% performance gain on a 48C/96T Cascade Lake test
box with 96 processes running against xfs.
Perf data collected before/after the change:

  18.73%--page_add_file_rmap
          |
           --11.60%--__mod_lruvec_page_state
                     |
                     |--7.40%--__mod_memcg_lruvec_state
                     |          |
                     |           --5.58%--cgroup_rstat_updated
                     |
                      --2.53%--__mod_lruvec_state
                                |
                                 --1.48%--__mod_node_page_state

  9.93%--page_add_file_rmap_range
         |
          --2.67%--__mod_lruvec_page_state
                    |
                    |--1.95%--__mod_memcg_lruvec_state
                    |          |
                    |           --1.57%--cgroup_rstat_updated
                    |
                     --0.61%--__mod_lruvec_state
                               |
                                --0.54%--__mod_node_page_state

The running time of __mod_lruvec_page_state() is reduced by about 9%.

[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index e2317623dcbf..7a1534460b55 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3483,11 +3483,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct file *file = vma->vm_file;
 	struct page *page = folio_page(folio, start);
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
-	unsigned int ref_count = 0, count = 0;
+	unsigned int count = 0;
+	pte_t *old_ptep = vmf->pte;

 	do {
-		if (PageHWPoison(page))
-			continue;
+		if (PageHWPoison(page + count))
+			goto skip;

 		if (mmap_miss > 0)
 			mmap_miss--;

@@ -3497,20 +3498,33 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		 * handled in the specific fault path, and it'll prohibit the
 		 * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
+		if (!pte_none(vmf->pte[count]))
+			goto skip;

 		if (vmf->address == addr)
 			ret = VM_FAULT_NOPAGE;

-		ref_count++;
-		set_pte_range(vmf, folio, page, 1, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		count++;
+		continue;
+skip:
+		if (count) {
+			set_pte_range(vmf, folio, page, count, addr);
+			folio_ref_add(folio, count);
+		}

-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+		count++;
+		page += count;
+		vmf->pte += count;
+		addr += count * PAGE_SIZE;
+		count = 0;
+	} while (--nr_pages > 0);
+
+	if (count) {
+		set_pte_range(vmf, folio, page, count, addr);
+		folio_ref_add(folio, count);
+	}

-	folio_ref_add(folio, ref_count);
+	vmf->pte = old_ptep;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);

 	return ret;