From patchwork Tue Feb 28 21:37:04 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 01/34] mm: Convert page_table_check_pte_set() to page_table_check_ptes_set()
Date: Tue, 28 Feb 2023 21:37:04 +0000
Message-Id: <20230228213738.272178-2-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Tell the page table check how many PTEs & PFNs we want it to check.
Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/riscv/include/asm/pgtable.h |  2 +-
 arch/x86/include/asm/pgtable.h   |  2 +-
 include/linux/page_table_check.h | 14 +++++++-------
 mm/page_table_check.c            | 14 ++++++++------
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b6ba466e2e8a..69765dc697af 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -358,7 +358,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
	return __set_pte_at(mm, addr, ptep, pte);
 }

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index ab05f892d317..b516f3b59616 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -459,7 +459,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 static inline void set_pte_at(struct mm_struct *mm,
	unsigned long addr, pte_t *ptep, pte_t pteval)
 {
-	page_table_check_pte_set(mm, addr, ptep, pteval);
+	page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
	__set_pte_at(mm, addr, ptep, pteval);
 }

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7425f32e5293..84be3e07b112 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1022,7 +1022,7 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, pte_t pte)
 {
-	page_table_check_pte_set(mm, addr, ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
	set_pte(ptep, pte);
 }

diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
index 01e16c7696ec..ba269c7009e4 100644
--- a/include/linux/page_table_check.h
+++ b/include/linux/page_table_check.h
@@ -20,8 +20,8 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
				  pmd_t pmd);
 void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
				  pud_t pud);
-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte);
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr);
 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
				pmd_t *pmdp, pmd_t pmd);
 void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
@@ -73,14 +73,14 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
	__page_table_check_pud_clear(mm, addr, pud);
 }

-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
	if (static_branch_likely(&page_table_check_disabled))
		return;

-	__page_table_check_pte_set(mm, addr, ptep, pte);
+	__page_table_check_ptes_set(mm, addr, ptep, pte, nr);
 }

 static inline void page_table_check_pmd_set(struct mm_struct *mm,
@@ -138,9 +138,9 @@ static inline void page_table_check_pud_clear(struct mm_struct *mm,
 {
 }

-static inline void page_table_check_pte_set(struct mm_struct *mm,
+static inline void page_table_check_ptes_set(struct mm_struct *mm,
					    unsigned long addr, pte_t *ptep,
-					    pte_t pte)
+					    pte_t pte, unsigned int nr)
 {
 }

diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 25d8610c0042..e6f4d40caaa2 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -184,20 +184,22 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);

-void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,
-				pte_t *ptep, pte_t pte)
+void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+
	if (&init_mm == mm)
		return;

-	__page_table_check_pte_clear(mm, addr, *ptep);
+	for (i = 0; i < nr; i++)
+		__page_table_check_pte_clear(mm, addr, ptep[i]);
	if (pte_user_accessible_page(pte)) {
-		page_table_check_set(mm, addr, pte_pfn(pte),
-				     PAGE_SIZE >> PAGE_SHIFT,
+		page_table_check_set(mm, addr, pte_pfn(pte), nr,
				     pte_write(pte));
	}
 }
-EXPORT_SYMBOL(__page_table_check_pte_set);
+EXPORT_SYMBOL(__page_table_check_ptes_set);

 void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
				pmd_t *pmdp, pmd_t pmd)
From patchwork Tue Feb 28 21:37:05 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/34] mm: Add generic flush_icache_pages() and documentation
Date: Tue, 28 Feb 2023 21:37:05 +0000
Message-Id: <20230228213738.272178-3-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 Documentation/core-api/cachetlb.rst | 35 +++++++++++++++--------------
 include/asm-generic/cacheflush.h    |  5 +++++
 2 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 5c0552e78c58..d4c9e2a28d36 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -88,13 +88,13 @@ changes occur:

	This is used primarily during fault processing.

-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_area_struct *vma,
+   unsigned long address, pte_t *ptep, unsigned int nr)``

-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exists
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
@@ -306,17 +306,18 @@ maps this page at its virtual address.
	private".  The kernel guarantees that, for pagecache pages,
	it will clear this bit when such a page first enters the
	pagecache.

-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page.  See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently.  It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.

-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit.  Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit.  Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.

 .. important::

@@ -369,7 +370,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache. In the future, the hope
+	flush_dcache_page and update_mmu_cache_range. In the future, the hope
	is to remove this interface completely.
 The final category of APIs is for I/O to deliberately aliased address

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index f46258d1a080..09d51a680765 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif

 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
				     struct page *page)
 {
From patchwork Tue Feb 28 21:37:06 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 03/34] mm: Add folio_flush_mapping()
Date: Tue, 28 Feb 2023 21:37:06 +0000
Message-Id: <20230228213738.272178-4-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

This is the folio equivalent of page_mapping_file(), but rename it to
make it clear that it's very different from page_file_mapping().
Theoretically, there's nothing flush-only about it, but there are no
other users today, and I doubt there will be; it's almost always more
useful to know the swapfile's mapping or the swapcache's mapping.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 51b75b89730e..1b1ba3d5100d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -369,6 +369,26 @@ static inline struct address_space *folio_file_mapping(struct folio *folio)
	return folio->mapping;
 }

+/**
+ * folio_flush_mapping - Find the file mapping this folio belongs to.
+ * @folio: The folio.
+ *
+ * For folios which are in the page cache, return the mapping that this
+ * page belongs to.  Anonymous folios return NULL, even if they're in
+ * the swap cache.  Other kinds of folio also return NULL.
+ *
+ * This is ONLY used by architecture cache flushing code.  If you aren't
+ * writing cache flushing code, you want either folio_mapping() or
+ * folio_file_mapping().
+ */
+static inline struct address_space *folio_flush_mapping(struct folio *folio)
+{
+	if (unlikely(folio_test_swapcache(folio)))
+		return NULL;
+
+	return folio_mapping(folio);
+}
+
 static inline struct address_space *page_file_mapping(struct page *page)
 {
	return folio_file_mapping(page_folio(page));
@@ -379,11 +399,7 @@ static inline struct address_space *page_file_mapping(struct page *page)
 */
 static inline struct address_space *page_mapping_file(struct page *page)
 {
-	struct folio *folio = page_folio(page);
-
-	if (unlikely(folio_test_swapcache(folio)))
-		return NULL;
-	return folio_mapping(folio);
+	return folio_flush_mapping(page_folio(page));
 }

 /**
From patchwork Tue Feb 28 21:37:07 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 04/34] mm: Remove ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
Date: Tue, 28 Feb 2023 21:37:07 +0000
Message-Id: <20230228213738.272178-5-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Current best practice is to reuse the name of the function as a define
to indicate that the function is implemented by the architecture.
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/core-api/cachetlb.rst | 24 +++++++++---------------
 include/linux/cacheflush.h          |  4 ++--
 mm/util.c                           |  2 +-
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index d4c9e2a28d36..770008afd409 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -269,7 +269,7 @@ maps this page at its virtual address.
        If D-cache aliasing is not an issue, these two routines may
        simply call memcpy/memset directly and do nothing more.

-  ``void flush_dcache_page(struct page *page)``
+  ``void flush_dcache_folio(struct folio *folio)``

        This routines must be called when:

@@ -277,7 +277,7 @@ maps this page at its virtual address.
             and / or in high memory
          b) the kernel is about to read from a page cache page and user space
             shared/writable mappings of this page potentially exist.  Note
-            that {get,pin}_user_pages{_fast} already call flush_dcache_page
+            that {get,pin}_user_pages{_fast} already call flush_dcache_folio
             on any page found in the user address space and thus driver
             code rarely needs to take this into account.

@@ -291,7 +291,7 @@ maps this page at its virtual address.

        The phrase "kernel writes to a page cache page" means, specifically,
        that the kernel executes store instructions that dirty data in that
-       page at the page->virtual mapping of that page.  It is important to
+       page at the kernel virtual mapping of that page.  It is important to
        flush here to handle D-cache aliasing, to make sure these kernel
        stores are visible to user space mappings of that page.

@@ -302,18 +302,18 @@ maps this page at its virtual address.
        If D-cache aliasing is not an issue, this routine may simply be
        defined as a nop on that architecture.

-       There is a bit set aside in page->flags (PG_arch_1) as "architecture
+       There is a bit set aside in folio->flags (PG_arch_1) as "architecture
        private".  The kernel guarantees that, for pagecache pages, it will
        clear this bit when such a page first enters the pagecache.

        This allows these interfaces to be implemented much more
        efficiently.  It allows one to "defer" (perhaps indefinitely) the
        actual flush if there are currently no user processes mapping this
-       page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+       page.  See sparc64's flush_dcache_folio and update_mmu_cache_range
        implementations for an example of how to go about doing this.

-       The idea is, first at flush_dcache_page() time, if
-       page_file_mapping() returns a mapping, and mapping_mapped on that
+       The idea is, first at flush_dcache_folio() time, if
+       folio_flush_mapping() returns a mapping, and mapping_mapped() on that
        mapping returns %false, just mark the architecture private page
        flag bit.  Later, in update_mmu_cache_range(), a check is made
        of this flag bit, and if set the flush is done and the flag bit
@@ -327,12 +327,6 @@ maps this page at its virtual address.
        dirty.  Again, see sparc64 for examples of how
        to deal with this.

-  ``void flush_dcache_folio(struct folio *folio)``
-       This function is called under the same circumstances as
-       flush_dcache_page().  It allows the architecture to
-       optimise for flushing the entire folio of pages instead
-       of flushing one page at a time.
-
   ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
   ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
@@ -353,7 +347,7 @@ maps this page at its virtual address.

        When the kernel needs to access the contents of an anonymous
        page, it calls this function (currently only
-       get_user_pages()).  Note: flush_dcache_page() deliberately
+       get_user_pages()).  Note: flush_dcache_folio() deliberately
        doesn't work for an anonymous page.  The default
        implementation is a nop (and should remain so for all coherent
        architectures).  For incoherent architectures, it should flush
@@ -370,7 +364,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

        All the functionality of flush_icache_page can be implemented in
-       flush_dcache_page and update_mmu_cache_range. In the future, the hope
+       flush_dcache_folio and update_mmu_cache_range. In the future, the hope
        is to remove this interface completely.

 The final category of APIs is for I/O to deliberately aliased address
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index a6189d21f2ba..82136f3fcf54 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -7,14 +7,14 @@ struct folio;

 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio);
 #endif
 #else
 static inline void flush_dcache_folio(struct folio *folio)
 {
 }
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO 0
+#define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */

 #endif /* _LINUX_CACHEFLUSH_H */
diff --git a/mm/util.c b/mm/util.c
index b8ed9dbc7fd5..f66e0ca82d2d 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1124,7 +1124,7 @@ void page_offline_end(void)
 }
 EXPORT_SYMBOL(page_offline_end);

-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
+#ifndef flush_dcache_folio
 void flush_dcache_folio(struct folio *folio)
 {
        long i, nr = folio_nr_pages(folio);

From patchwork Tue Feb 28 21:37:08 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Richard Henderson, Ivan Kokshaysky, Matt Turner, linux-alpha@vger.kernel.org
Subject: [PATCH v3 05/34] alpha: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:08 +0000
Message-Id: <20230228213738.272178-6-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_icache_pages().
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: linux-alpha@vger.kernel.org
---
 arch/alpha/include/asm/cacheflush.h | 10 ++++++++++
 arch/alpha/include/asm/pgtable.h    | 18 +++++++++++++++++-
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 9945ff483eaf..3956460e69e2 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -57,6 +57,16 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_page(vma, page) \
        flush_icache_user_page((vma), (page), 0, 0)

+/*
+ * Both implementations of flush_icache_user_page flush the entire
+ * address space, so one call, no matter how many pages.
+ */
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+               struct page *page, unsigned int nr)
+{
+       flush_icache_user_page(vma, page, 0, 0);
+}
+
 #include

 #endif /* _ALPHA_CACHEFLUSH_H */
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index ba43cb841d19..1e3354e9731b 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -26,7 +26,18 @@ struct vm_area_struct;
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+               pte_t *ptep, pte_t pte, unsigned int nr)
+{
+       for (;;) {
+               set_pte(ptep, pte);
+               if (--nr == 0)
+                       break;
+               ptep++;
+               pte_val(pte) += 1UL << 32;
+       }
+}
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #define PMD_SHIFT      (PAGE_SHIFT + (PAGE_SHIFT-3))
@@ -303,6 +314,11 @@ extern inline void update_mmu_cache(struct vm_area_struct * vma,
 {
 }

+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+               unsigned long address, pte_t *ptep, unsigned int nr)
+{
+}
+
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
  * are !pte_none() && !pte_present().

From patchwork Tue Feb 28 21:37:09 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH v3 06/34] arc: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:09 +0000
Message-Id: <20230228213738.272178-7-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Change the PG_dc_clean flag from being per-page to per-folio (which
means it cannot always be set as we don't know that all pages in this
folio were cleaned).  Enhance the internal flush routines to take the
number of pages to flush.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Vineet Gupta
Cc: linux-snps-arc@lists.infradead.org
---
 arch/arc/include/asm/cacheflush.h         |  7 ++-
 arch/arc/include/asm/pgtable-bits-arcv2.h | 20 ++++++--
 arch/arc/mm/cache.c                       | 61 ++++++++++++++---------
 arch/arc/mm/tlb.c                         | 18 ++++---
 4 files changed, 68 insertions(+), 38 deletions(-)

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index e201b4b1655a..04f65f588510 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -25,17 +25,20 @@
  * in update_mmu_cache()
  */
 #define flush_icache_page(vma, page)
+#define flush_icache_pages(vma, page, nr)

 void flush_cache_all(void);

 void flush_icache_range(unsigned long kstart, unsigned long kend);
 void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len);
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr);
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr);
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr);

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1

 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio

 void dma_cache_wback_inv(phys_addr_t start, unsigned long sz);
 void dma_cache_inv(phys_addr_t start, unsigned long sz);
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 6e9f8ca6d6a1..4a1b2ce204c6 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -100,14 +100,24 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
        return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-                             pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+               pte_t *ptep, pte_t pte, unsigned int nr)
 {
-       set_pte(ptep, pteval);
+       for (;;) {
+               set_pte(ptep, pte);
+               if (--nr == 0)
+                       break;
+               ptep++;
+               pte_val(pte) += PAGE_SIZE;
+       }
 }
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-                     pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+               pte_t *ptep, unsigned int nr);
+
+#define update_mmu_cache(vma, addr, ptep) \
+       update_mmu_cache_range(vma, addr, ptep, 1)

 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
index 55c6de138eae..3c16ee942a5c 100644
--- a/arch/arc/mm/cache.c
+++ b/arch/arc/mm/cache.c
@@ -752,17 +752,17 @@ static inline void arc_slc_enable(void)
 * There's a corollary case, where kernel READs from a userspace mapped page.
 * If the U-mapping is not congruent to K-mapping, former needs flushing.
 */
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
        struct address_space *mapping;

        if (!cache_is_vipt_aliasing()) {
-               clear_bit(PG_dc_clean, &page->flags);
+               clear_bit(PG_dc_clean, &folio->flags);
                return;
        }

        /* don't handle anon pages here */
-       mapping = page_mapping_file(page);
+       mapping = folio_flush_mapping(folio);
        if (!mapping)
                return;

@@ -771,17 +771,27 @@ void flush_dcache_page(struct page *page)
         * Make a note that K-mapping is dirty
         */
        if (!mapping_mapped(mapping)) {
-               clear_bit(PG_dc_clean, &page->flags);
-       } else if (page_mapcount(page)) {
-
+               clear_bit(PG_dc_clean, &folio->flags);
+       } else if (folio_mapped(folio)) {
                /* kernel reading from page with U-mapping */
-               phys_addr_t paddr = (unsigned long)page_address(page);
-               unsigned long vaddr = page->index << PAGE_SHIFT;
+               phys_addr_t paddr = (unsigned long)folio_address(folio);
+               unsigned long vaddr = folio_pos(folio);

+               /*
+                * vaddr is not actually the virtual address, but is
+                * congruent to every user mapping.
+                */
                if (addr_not_cache_congruent(paddr, vaddr))
-                       __flush_dcache_page(paddr, vaddr);
+                       __flush_dcache_pages(paddr, vaddr,
+                                               folio_nr_pages(folio));
        }
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+       return flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);

 /*
@@ -921,18 +931,18 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len)
 }

 /* wrapper to compile time eliminate alignment checks in flush loop */
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr)
+void __inv_icache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-       __ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE);
+       __ic_line_inv_vaddr(paddr, vaddr, nr * PAGE_SIZE);
 }

 /*
 * wrapper to clearout kernel or userspace mappings of a page
 * For kernel mappings @vaddr == @paddr
 */
-void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr)
+void __flush_dcache_pages(phys_addr_t paddr, unsigned long vaddr, unsigned nr)
 {
-       __dc_line_op(paddr, vaddr & PAGE_MASK, PAGE_SIZE, OP_FLUSH_N_INV);
+       __dc_line_op(paddr, vaddr & PAGE_MASK, nr * PAGE_SIZE, OP_FLUSH_N_INV);
 }

 noinline void flush_cache_all(void)
@@ -962,10 +972,10 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long u_vaddr,

        u_vaddr &= PAGE_MASK;

-       __flush_dcache_page(paddr, u_vaddr);
+       __flush_dcache_pages(paddr, u_vaddr, 1);

        if (vma->vm_flags & VM_EXEC)
-               __inv_icache_page(paddr, u_vaddr);
+               __inv_icache_pages(paddr, u_vaddr, 1);
 }

 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
@@ -978,9 +988,9 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
                     unsigned long u_vaddr)
 {
        /* TBD: do we really need to clear the kernel mapping */
-       __flush_dcache_page((phys_addr_t)page_address(page), u_vaddr);
-       __flush_dcache_page((phys_addr_t)page_address(page),
-                           (phys_addr_t)page_address(page));
+       __flush_dcache_pages((phys_addr_t)page_address(page), u_vaddr, 1);
+       __flush_dcache_pages((phys_addr_t)page_address(page),
+                           (phys_addr_t)page_address(page), 1);
 }

@@ -989,6 +999,8 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
 void copy_user_highpage(struct page *to, struct page *from,
        unsigned long u_vaddr, struct vm_area_struct *vma)
 {
+       struct folio *src = page_folio(from);
+       struct folio *dst = page_folio(to);
        void *kfrom = kmap_atomic(from);
        void *kto = kmap_atomic(to);
        int clean_src_k_mappings = 0;
@@ -1005,7 +1017,7 @@ void copy_user_highpage(struct page *to, struct page *from,
         * addr_not_cache_congruent() is 0
         */
        if (page_mapcount(from) && addr_not_cache_congruent(kfrom, u_vaddr)) {
-               __flush_dcache_page((unsigned long)kfrom, u_vaddr);
+               __flush_dcache_pages((unsigned long)kfrom, u_vaddr, 1);
                clean_src_k_mappings = 1;
        }

@@ -1019,17 +1031,17 @@ void copy_user_highpage(struct page *to, struct page *from,
         * non copied user pages (e.g. read faults which wire in pagecache page
         * directly).
         */
-       clear_bit(PG_dc_clean, &to->flags);
+       clear_bit(PG_dc_clean, &dst->flags);

        /*
         * if SRC was already usermapped and non-congruent to kernel mapping
         * sync the kernel mapping back to physical page
         */
        if (clean_src_k_mappings) {
-               __flush_dcache_page((unsigned long)kfrom, (unsigned long)kfrom);
-               set_bit(PG_dc_clean, &from->flags);
+               __flush_dcache_pages((unsigned long)kfrom,
+                                       (unsigned long)kfrom, 1);
        } else {
-               clear_bit(PG_dc_clean, &from->flags);
+               clear_bit(PG_dc_clean, &src->flags);
        }

        kunmap_atomic(kto);
@@ -1038,8 +1050,9 @@ void copy_user_highpage(struct page *to, struct page *from,

 void clear_user_page(void *to, unsigned long u_vaddr, struct page *page)
 {
+       struct folio *folio = page_folio(page);
        clear_page(to);
-       clear_bit(PG_dc_clean, &page->flags);
+       clear_bit(PG_dc_clean, &folio->flags);
 }
 EXPORT_SYMBOL(clear_user_page);

diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 5f71445f26bd..0a996b65bb4e 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -467,8 +467,8 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)
 * Note that flush (when done) involves both WBACK - so physical page is
 * in sync as well as INV - so any non-congruent aliases don't remain
 */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
-                     pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma,
+               unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr)
 {
        unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
        phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS;
@@ -491,15 +491,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
         */
        if ((vma->vm_flags & VM_EXEC) ||
             addr_not_cache_congruent(paddr, vaddr)) {
-
-               int dirty = !test_and_set_bit(PG_dc_clean, &page->flags);
+               struct folio *folio = page_folio(page);
+               int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags);
                if (dirty) {
+                       unsigned long offset = offset_in_folio(folio, paddr);
+                       nr = folio_nr_pages(folio);
+                       paddr -= offset;
+                       vaddr -= offset;
                        /* wback + inv dcache lines (K-mapping) */
-                       __flush_dcache_page(paddr, paddr);
+                       __flush_dcache_pages(paddr, paddr, nr);

                        /* invalidate any existing icache lines (U-mapping) */
                        if (vma->vm_flags & VM_EXEC)
-                               __inv_icache_page(paddr, vaddr);
+                               __inv_icache_pages(paddr, vaddr, nr);
                }
        }
 }
@@ -531,7 +535,7 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
                                 pmd_t *pmd)
 {
        pte_t pte = __pte(pmd_val(*pmd));
-       update_mmu_cache(vma, addr, &pte);
+       update_mmu_cache_range(vma, addr, &pte, HPAGE_PMD_NR);
 }

 void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,

From patchwork Tue Feb 28 21:37:10 2023
smtp.lore.kernel.org (Postfix) with ESMTP id 0C227C64EC4 for ; Tue, 28 Feb 2023 21:37:59 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A91806B0085; Tue, 28 Feb 2023 16:37:46 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 65C366B0081; Tue, 28 Feb 2023 16:37:46 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1D0C86B0081; Tue, 28 Feb 2023 16:37:46 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 7DF966B0085 for ; Tue, 28 Feb 2023 16:37:45 -0500 (EST) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 074BBC0CC1 for ; Tue, 28 Feb 2023 21:37:45 +0000 (UTC) X-FDA: 80518012890.19.75A494E Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id 2A8931C000C for ; Tue, 28 Feb 2023 21:37:41 +0000 (UTC) Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=pG7+TZ+d; dmarc=none; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1677620262; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=ZpUzSYGDTADlmCBk6vxpalRkkfQBt7PCBFN721ohMyE=; b=dpucglJviP11zCYgK6jSWdyEb/qX+6Mvuhf/Kpu9pAjJNLxgBvVrTwcmNhVzdeQ7j/GnzD qnwn3jdqdmsGj+Y4nAJgc2gzAdSYLpuk1vbtuXXtcCSnsbzc0vvZwcqZUBSCfGr4JeotIz lrO5w4GY5piSPa6lRHKjYUHviEd2aY0= ARC-Authentication-Results: i=1; imf21.hostedemail.com; dkim=pass 
header.d=infradead.org header.s=casper.20170209 header.b=pG7+TZ+d; dmarc=none; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1677620262; a=rsa-sha256; cv=none; b=ImnCUCSyYWZ7K1U3PPdrrkI1K3/a/NQFOih4ijwzu9XyWm+SbvL6TumpflJKr2vMwXfB08 vSsaG2T1MYbuG+ttgRWB2wk2CzK44NFnBLvq6IG7DkzNJrmNMMZo3SZuCNCeG7Oc/YsoMk 0Pq24QW0tsLp2I+ReYD78i6GTieULOI= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ZpUzSYGDTADlmCBk6vxpalRkkfQBt7PCBFN721ohMyE=; b=pG7+TZ+drwivOfE0krvBLZBu/W 660yny4Dcam/Um1wxzlld5G++gN4Xx5PhAO+pUzV25wkiYp/8bwRpfcCNGZQtPP6tZUetjFG/KJAj n6Q5LsTV7GEoXUNbPWWIHaTSSaJV8vq7PhyBIKXKD18d1Bbh38aVASi7z1U7cPez4q8dLpusfRAV+ zFm2OF7GfofICymt4KEkO2WmmXG9Bf2wDg6lwxAtzCxbo6JbGmIdeh2ytkNeXc5rbPtsiwPRUuLTn /xjYFEH1lGvAm8cQELk1PbxG1vShFHP8lp9Rp6BgsXj13WblqeagpYXfPgxWkhF5sc6KzNVeTPqb7 su5foQ+A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pX7fI-0018oo-42; Tue, 28 Feb 2023 21:37:40 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Russell King , linux-arm-kernel@lists.infradead.org Subject: [PATCH v3 07/34] arm: Implement the new page table range API Date: Tue, 28 Feb 2023 21:37:10 +0000 Message-Id: <20230228213738.272178-8-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 2A8931C000C X-Rspamd-Server: rspam09 X-Rspam-User: X-Stat-Signature: 1y11aju4qbstedzd8js551yjd6uaoh7p X-HE-Tag: 1677620261-233681 X-HE-Meta: 
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_dcache_clean flag from being per-page to per-folio, which makes __dma_page_dev_to_cpu() a bit more exciting. Also add flush_cache_pages(), even though this isn't used by generic code (yet?).
Signed-off-by: Matthew Wilcox (Oracle) Cc: Russell King Cc: linux-arm-kernel@lists.infradead.org --- arch/arm/include/asm/cacheflush.h | 24 +++++--- arch/arm/include/asm/pgtable.h | 5 +- arch/arm/include/asm/tlbflush.h | 13 ++-- arch/arm/mm/copypage-v4mc.c | 5 +- arch/arm/mm/copypage-v6.c | 5 +- arch/arm/mm/copypage-xscale.c | 5 +- arch/arm/mm/dma-mapping.c | 24 ++++---- arch/arm/mm/fault-armv.c | 14 ++--- arch/arm/mm/flush.c | 99 +++++++++++++++++++------------ arch/arm/mm/mm.h | 2 +- arch/arm/mm/mmu.c | 14 +++-- 11 files changed, 125 insertions(+), 85 deletions(-) diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h index a094f964c869..841e268d2374 100644 --- a/arch/arm/include/asm/cacheflush.h +++ b/arch/arm/include/asm/cacheflush.h @@ -231,14 +231,15 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned vma->vm_flags); } -static inline void -vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +static inline void vivt_flush_cache_pages(struct vm_area_struct *vma, + unsigned long user_addr, unsigned long pfn, unsigned int nr) { struct mm_struct *mm = vma->vm_mm; if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) { unsigned long addr = user_addr & PAGE_MASK; - __cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags); + __cpuc_flush_user_range(addr, addr + nr * PAGE_SIZE, + vma->vm_flags); } } @@ -247,15 +248,17 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig vivt_flush_cache_mm(mm) #define flush_cache_range(vma,start,end) \ vivt_flush_cache_range(vma,start,end) -#define flush_cache_page(vma,addr,pfn) \ - vivt_flush_cache_page(vma,addr,pfn) +#define flush_cache_pages(vma, addr, pfn, nr) \ + vivt_flush_cache_pages(vma, addr, pfn, nr) #else -extern void flush_cache_mm(struct mm_struct *mm); -extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); -extern void 
flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn); +void flush_cache_mm(struct mm_struct *mm); +void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, + unsigned long pfn, unsigned int nr); #endif #define flush_cache_dup_mm(mm) flush_cache_mm(mm) +#define flush_cache_page(vma, addr, pfn) flush_cache_pages(vma, addr, pfn, 1) /* * flush_icache_user_range is used when we want to ensure that the @@ -289,7 +292,9 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr * See update_mmu_cache for the user space part. */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -extern void flush_dcache_page(struct page *); +void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1 static inline void flush_kernel_vmap_range(void *addr, int size) @@ -321,6 +326,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma, * duplicate cache flushing elsewhere performed by flush_dcache_page(). 
*/ #define flush_icache_page(vma,page) do { } while (0) +#define flush_icache_pages(vma, page, nr) do { } while (0) /* * flush_cache_vmap() is used when creating mappings (eg, via vmap, diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h index a58ccbb406ad..6525ac82bd50 100644 --- a/arch/arm/include/asm/pgtable.h +++ b/arch/arm/include/asm/pgtable.h @@ -207,8 +207,9 @@ static inline void __sync_icache_dcache(pte_t pteval) extern void __sync_icache_dcache(pte_t pteval); #endif -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval); +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr); +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) { diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index 0ccc985b90af..7d792e485f4f 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -619,18 +619,21 @@ extern void flush_bp_all(void); * If PG_dcache_clean is not set for the page, we need to ensure that any * cache entries for the kernels virtual memory range are written * back to the page. On ARMv6 and later, the cache coherency is handled via - * the set_pte_at() function. + * the set_ptes() function. 
*/ #if __LINUX_ARM_ARCH__ < 6 -extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr); #else -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { } #endif +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #endif diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c index f1da3b439b96..7ddd82b9fe8b 100644 --- a/arch/arm/mm/copypage-v4mc.c +++ b/arch/arm/mm/copypage-v4mc.c @@ -64,10 +64,11 @@ static void mc_copy_user_page(void *from, void *to) void v4_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c index d8a115de5507..a1a71f36d850 100644 --- a/arch/arm/mm/copypage-v6.c +++ b/arch/arm/mm/copypage-v6.c @@ -69,11 +69,12 @@ static void discard_old_kernel_data(void *kto) static void v6_copy_user_highpage_aliasing(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); unsigned int offset = CACHE_COLOUR(vaddr); unsigned long kfrom, kto; - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + 
__flush_dcache_folio(folio_flush_mapping(src), src); /* FIXME: not highmem safe */ discard_old_kernel_data(page_address(to)); diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c index bcb485620a05..f1e29d3e8193 100644 --- a/arch/arm/mm/copypage-xscale.c +++ b/arch/arm/mm/copypage-xscale.c @@ -84,10 +84,11 @@ static void mc_copy_user_page(void *from, void *to) void xscale_mc_copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *kto = kmap_atomic(to); - if (!test_and_set_bit(PG_dcache_clean, &from->flags)) - __flush_dcache_page(page_mapping_file(from), from); + if (!test_and_set_bit(PG_dcache_clean, &src->flags)) + __flush_dcache_folio(folio_flush_mapping(src), src); raw_spin_lock(&minicache_lock); diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 8bc01071474a..5ecfde41d70a 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -693,6 +693,7 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off, static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, size_t size, enum dma_data_direction dir) { + struct folio *folio = page_folio(page); phys_addr_t paddr = page_to_phys(page) + off; /* FIXME: non-speculating: not required */ @@ -707,19 +708,18 @@ static void __dma_page_dev_to_cpu(struct page *page, unsigned long off, * Mark the D-cache clean for these pages to avoid extra flushing. 
*/ if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) { - unsigned long pfn; - size_t left = size; - - pfn = page_to_pfn(page) + off / PAGE_SIZE; - off %= PAGE_SIZE; - if (off) { - pfn++; - left -= PAGE_SIZE - off; + ssize_t left = size; + size_t offset = offset_in_folio(folio, paddr); + + if (offset) { + left -= folio_size(folio) - offset; + folio = folio_next(folio); } - while (left >= PAGE_SIZE) { - page = pfn_to_page(pfn++); - set_bit(PG_dcache_clean, &page->flags); - left -= PAGE_SIZE; + + while (left >= (ssize_t)folio_size(folio)) { + set_bit(PG_dcache_clean, &folio->flags); + left -= folio_size(folio); + folio = folio_next(folio); } } } diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c index 0e49154454a6..e2c869b8f012 100644 --- a/arch/arm/mm/fault-armv.c +++ b/arch/arm/mm/fault-armv.c @@ -178,8 +178,8 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma, * * Note that the pte lock will be held. */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, - pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr, + pte_t *ptep, unsigned int nr) { unsigned long pfn = pte_pfn(*ptep); struct address_space *mapping; @@ -192,13 +192,13 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. 
*/ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + mapping = folio_flush_mapping(folio); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (mapping) { if (cache_is_vivt()) make_coherent(mapping, vma, addr, ptep, pfn); diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c index 7ff9feea13a6..07ea0ab51099 100644 --- a/arch/arm/mm/flush.c +++ b/arch/arm/mm/flush.c @@ -95,10 +95,10 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned __flush_icache_all(); } -void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) +void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr) { if (cache_is_vivt()) { - vivt_flush_cache_page(vma, user_addr, pfn); + vivt_flush_cache_pages(vma, user_addr, pfn, nr); return; } @@ -196,29 +196,31 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, #endif } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping.
*/ - if (!PageHighMem(page)) { - __cpuc_flush_dcache_area(page_address(page), page_size(page)); + if (!folio_test_highmem(folio)) { + __cpuc_flush_dcache_area(folio_address(folio), + folio_size(folio)); } else { unsigned long i; if (cache_is_vipt_nonaliasing()) { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_atomic(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_local_folio(folio, + i * PAGE_SIZE); __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_atomic(addr); + kunmap_local(addr); } } else { - for (i = 0; i < compound_nr(page); i++) { - void *addr = kmap_high_get(page + i); + for (i = 0; i < folio_nr_pages(folio); i++) { + void *addr = kmap_high_get(folio_page(folio, i)); if (addr) { __cpuc_flush_dcache_area(addr, PAGE_SIZE); - kunmap_high(page + i); + kunmap_high(folio_page(folio, i)); } } } @@ -230,15 +232,14 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page) * userspace colour, which is congruent with page->index. */ if (mapping && cache_is_vipt_aliasing()) - flush_pfn_alias(page_to_pfn(page), - page->index << PAGE_SHIFT); + flush_pfn_alias(folio_pfn(folio), folio_pos(folio)); } -static void __flush_dcache_aliases(struct address_space *mapping, struct page *page) +static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio) { struct mm_struct *mm = current->active_mm; - struct vm_area_struct *mpnt; - pgoff_t pgoff; + struct vm_area_struct *vma; + pgoff_t pgoff, pgoff_end; /* * There are possible user space mappings of this page: @@ -246,21 +247,36 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p * data in the current VM view associated with this page. * - aliasing VIPT: we only need to find one mapping of this page. 
*/ - pgoff = page->index; + pgoff = folio->index; + pgoff_end = pgoff + folio_nr_pages(folio) - 1; flush_dcache_mmap_lock(mapping); - vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { - unsigned long offset; + vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff_end) { + unsigned long start, offset, pfn; + unsigned int nr; /* * If this VMA is not in our MM, we can ignore it. */ - if (mpnt->vm_mm != mm) + if (vma->vm_mm != mm) continue; - if (!(mpnt->vm_flags & VM_MAYSHARE)) + if (!(vma->vm_flags & VM_MAYSHARE)) continue; - offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; - flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page)); + + start = vma->vm_start; + pfn = folio_pfn(folio); + nr = folio_nr_pages(folio); + offset = pgoff - vma->vm_pgoff; + if (offset > -nr) { + pfn -= offset; + nr += offset; + } else { + start += offset * PAGE_SIZE; + } + if (start + nr * PAGE_SIZE > vma->vm_end) + nr = (vma->vm_end - start) / PAGE_SIZE; + + flush_cache_pages(vma, start, pfn, nr); } flush_dcache_mmap_unlock(mapping); } @@ -269,7 +285,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p void __sync_icache_dcache(pte_t pteval) { unsigned long pfn; - struct page *page; + struct folio *folio; struct address_space *mapping; if (cache_is_vipt_nonaliasing() && !pte_exec(pteval)) @@ -279,14 +295,14 @@ void __sync_icache_dcache(pte_t pteval) if (!pfn_valid(pfn)) return; - page = pfn_to_page(pfn); + folio = page_folio(pfn_to_page(pfn)); if (cache_is_vipt_aliasing()) - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); else mapping = NULL; - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); if (pte_exec(pteval)) __flush_icache_all(); @@ -312,7 +328,7 @@ void __sync_icache_dcache(pte_t pteval) * Note that we disable the lazy flush for SMP configurations where 
* the cache maintenance operations are not automatically broadcasted. */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -320,31 +336,36 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; if (!cache_ops_need_broadcast() && cache_is_vipt_nonaliasing()) { - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); return; } - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (!cache_ops_need_broadcast() && - mapping && !page_mapcount(page)) - clear_bit(PG_dcache_clean, &page->flags); + mapping && !folio_mapped(folio)) + clear_bit(PG_dcache_clean, &folio->flags); else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(mapping, folio); if (mapping && cache_is_vivt()) - __flush_dcache_aliases(mapping, page); + __flush_dcache_aliases(mapping, folio); else if (mapping) __flush_icache_all(); - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} +EXPORT_SYMBOL(flush_dcache_page); /* * Flush an anonymous page so that users of get_user_pages() * can safely access the data. 
The expected sequence is: diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h index d7ffccb7fea7..419316316711 100644 --- a/arch/arm/mm/mm.h +++ b/arch/arm/mm/mm.h @@ -45,7 +45,7 @@ struct mem_type { const struct mem_type *get_mem_type(unsigned int type); -extern void __flush_dcache_page(struct address_space *mapping, struct page *page); +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio); /* * ARM specific vm_struct->flags bits. diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 463fc2a8448f..9947bbc32b04 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1788,7 +1788,7 @@ void __init paging_init(const struct machine_desc *mdesc) bootmem_init(); empty_zero_page = virt_to_page(zero_page); - __flush_dcache_page(NULL, empty_zero_page); + __flush_dcache_folio(NULL, page_folio(empty_zero_page)); } void __init early_mm_init(const struct machine_desc *mdesc) @@ -1797,8 +1797,8 @@ void __init early_mm_init(const struct machine_desc *mdesc) early_paging_init(mdesc); } -void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval, unsigned int nr) { unsigned long ext = 0; @@ -1808,5 +1808,11 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, ext |= PTE_EXT_NG; } - set_pte_ext(ptep, pteval, ext); + for (;;) { + set_pte_ext(ptep, pteval, ext); + if (--nr == 0) + break; + ptep++; + pte_val(pteval) += PAGE_SIZE; + } }

From patchwork Tue Feb 28 21:37:11 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155230
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Catalin Marinas, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 08/34] arm64: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:11 +0000
Message-Id: <20230228213738.272178-9-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Catalin Marinas Cc: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/cacheflush.h | 4 +++- arch/arm64/include/asm/pgtable.h | 25 ++++++++++++++------ arch/arm64/mm/flush.c | 36 +++++++++++------------------ 3 files changed, 35 insertions(+), 30 deletions(-) diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h index 37185e978aeb..d115451ed263 100644 --- a/arch/arm64/include/asm/cacheflush.h +++ b/arch/arm64/include/asm/cacheflush.h @@ -114,7 +114,7 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, #define copy_to_user_page copy_to_user_page /* - * flush_dcache_page is used when the kernel has written to the page + * flush_dcache_folio is used when the kernel has written to the page * cache page at virtual address page->virtual. * * If this page isn't mapped (ie, page_mapping == NULL), or it might @@ -127,6 +127,8 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *, */ #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 extern void flush_dcache_page(struct page *); +void flush_dcache_folio(struct folio *); +#define flush_dcache_folio flush_dcache_folio static __always_inline void icache_inval_all_pou(void) { diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 69765dc697af..4d1b79dbff16 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -355,12 +355,21 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, set_pte(ptep, pte); } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pte) -{ - page_table_check_ptes_set(mm, addr, ptep, pte, 1); - return __set_pte_at(mm, addr, ptep, pte); +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + page_table_check_ptes_set(mm, addr, ptep, pte, nr); + + for (;;) { + __set_pte_at(mm, addr, ptep, pte); 
+ if (--nr == 0) + break; + ptep++; + addr += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * Huge pte definitions. @@ -1059,8 +1068,8 @@ static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) /* * On AArch64, the cache coherency is handled via the set_pte_at() function. */ -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long addr, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep, unsigned int nr) { /* * We don't do anything here, so there's a very small chance of @@ -1069,6 +1078,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, */ } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0) #ifdef CONFIG_ARM64_PA_BITS_52 diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c index 5f9379b3c8c8..deb781af0a3a 100644 --- a/arch/arm64/mm/flush.c +++ b/arch/arm64/mm/flush.c @@ -50,20 +50,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, void __sync_icache_dcache(pte_t pte) { - struct page *page = pte_page(pte); + struct folio *folio = page_folio(pte_page(pte)); - /* - * HugeTLB pages are always fully mapped, so only setting head page's - * PG_dcache_clean flag is enough. 
- */ - if (PageHuge(page)) - page = compound_head(page); - - if (!test_bit(PG_dcache_clean, &page->flags)) { - sync_icache_aliases((unsigned long)page_address(page), - (unsigned long)page_address(page) + - page_size(page)); - set_bit(PG_dcache_clean, &page->flags); + if (!test_bit(PG_dcache_clean, &folio->flags)) { + sync_icache_aliases((unsigned long)folio_address(folio), + (unsigned long)folio_address(folio) + + folio_size(folio)); + set_bit(PG_dcache_clean, &folio->flags); } } EXPORT_SYMBOL_GPL(__sync_icache_dcache); @@ -73,17 +66,16 @@ EXPORT_SYMBOL_GPL(__sync_icache_dcache); * it as dirty for later flushing when mapped in user space (if executable, * see __sync_icache_dcache). */ -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in __sync_icache_dcache()). - */ - if (PageHuge(page)) - page = compound_head(page); + if (test_bit(PG_dcache_clean, &folio->flags)) + clear_bit(PG_dcache_clean, &folio->flags); +} +EXPORT_SYMBOL(flush_dcache_folio); - if (test_bit(PG_dcache_clean, &page->flags)) - clear_bit(PG_dcache_clean, &page->flags); +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); } EXPORT_SYMBOL(flush_dcache_page); From patchwork Tue Feb 28 21:37:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13155237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A6788C7EE31 for ; Tue, 28 Feb 2023 21:37:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6D4646B0088; Tue, 28 Feb 2023 16:37:46 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 4AC376B007E; Tue, 28 Feb 2023 16:37:46 
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Guo Ren, linux-csky@vger.kernel.org
Subject: [PATCH v3 09/34] csky: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:12 +0000
Message-Id: <20230228213738.272178-10-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_dcache_clean flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Guo Ren
Cc: linux-csky@vger.kernel.org
---
 arch/csky/abiv1/cacheflush.c         | 32 +++++++++++++++++-----------
 arch/csky/abiv1/inc/abi/cacheflush.h |  2 ++
 arch/csky/abiv2/cacheflush.c         | 30 +++++++++++++-------------
 arch/csky/abiv2/inc/abi/cacheflush.h | 10 +++++++--
 arch/csky/include/asm/pgtable.h      | 21 +++++++++++++++---
 5 files changed, 62 insertions(+), 33 deletions(-)

diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index fb91b069dc69..ba43f6c26b4f 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -14,43 +14,49 @@
 #define PG_dcache_clean	PG_arch_1

-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping;

-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(folio_pfn(folio)))
 		return;

-	mapping = page_mapping_file(page);
+	mapping = folio_flush_mapping(folio);

-	if (mapping && !page_mapcount(page))
-		clear_bit(PG_dcache_clean, &page->flags);
+	if (mapping && !folio_mapped(folio))
+		clear_bit(PG_dcache_clean, &folio->flags);
 	else {
 		dcache_wbinv_all();
 		if (mapping)
 			icache_inv_all();
-		set_bit(PG_dcache_clean, &page->flags);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
+EXPORT_SYMBOL(flush_dcache_folio);
+
+void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 EXPORT_SYMBOL(flush_dcache_page);

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
-	pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+	pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;

 	if (!pfn_valid(pfn))
 		return;

-	page = pfn_to_page(pfn);
-	if (page == ZERO_PAGE(0))
+	if (is_zero_pfn(pfn))
 		return;

-	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+	folio = page_folio(pfn_to_page(pfn));
+	if (!test_and_set_bit(PG_dcache_clean, &folio->flags))
 		dcache_wbinv_all();

-	if (page_mapping_file(page)) {
+	if (folio_flush_mapping(folio)) {
 		if (vma->vm_flags & VM_EXEC)
 			icache_inv_all();
 	}

diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..0d6cb65624c4 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,8 @@
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *);
+#define flush_dcache_folio flush_dcache_folio

 #define flush_cache_mm(mm)	dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()

diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 39c51399dd81..c1cf0d55a2a1 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -6,30 +6,30 @@
 #include
 #include

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-	pte_t *pte)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+	pte_t *pte, unsigned int nr)
 {
-	unsigned long addr;
+	unsigned long pfn = pte_pfn(*pte);
-	struct page *page;
+	struct folio *folio;
+	unsigned int i;

-	if (!pfn_valid(pte_pfn(*pte)))
+	if (!pfn_valid(pfn) || is_zero_pfn(pfn))
 		return;

-	page = pfn_to_page(pte_pfn(*pte));
-	if (page == ZERO_PAGE(0))
-		return;
+	folio = page_folio(pfn_to_page(pfn));

-	if (test_and_set_bit(PG_dcache_clean, &page->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
 		return;

-	addr = (unsigned long) kmap_atomic(page);
-
-	dcache_wb_range(addr, addr + PAGE_SIZE);
+	for (i = 0; i < folio_nr_pages(folio); i++) {
+		unsigned long addr = (unsigned long) kmap_local_folio(folio,
+						i * PAGE_SIZE);

-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
-	kunmap_atomic((void *) addr);
+		dcache_wb_range(addr, addr + PAGE_SIZE);
+		if (vma->vm_flags & VM_EXEC)
+			icache_inv_range(addr, addr + PAGE_SIZE);
+		kunmap_local((void *) addr);
+	}
 }

 void flush_icache_deferred(struct mm_struct *mm)

diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..9c728933a776 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -18,11 +18,17 @@
 #define PG_dcache_clean	PG_arch_1

+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 static inline void flush_dcache_page(struct page *page)
 {
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	flush_dcache_folio(page_folio(page));
 }

 #define flush_dcache_mmap_lock(mapping)		do { } while (0)

diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index d4042495febc..a30ae048233e 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -90,7 +90,20 @@ static inline void set_pte(pte_t *p, pte_t pte)
 	/* prevent out of order excution */
 	smp_mb();
 }
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
+
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
@@ -263,8 +276,10 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);

-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
-	pte_t *pte);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+	pte_t *pte, unsigned int nr);
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)

 #define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
	remap_pfn_range(vma, vaddr, pfn, size, prot)

From patchwork Tue Feb 28 21:37:13 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Brian Cain
Subject: [PATCH v3 10/34] hexagon: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:13 +0000
Message-Id: <20230228213738.272178-11-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Brian Cain
---
 arch/hexagon/include/asm/cacheflush.h |  7 +++++--
 arch/hexagon/include/asm/pgtable.h    | 16 ++++++++++++++--
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 6eff0730e6ef..63ca314ede89 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -58,12 +58,15 @@ extern void flush_cache_all_hexagon(void);
  * clean the cache when the PTE is set.
  *
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	/* generic_ptrace_pokedata doesn't wind up here, does it? */
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page

diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 59393613d086..f58f1d920769 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -346,12 +346,24 @@ static inline int pte_exec(pte_t pte)
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))

 /*
- * set_pte_at - update page table and do whatever magic may be
+ * set_ptes - update page table and do whatever magic may be
  * necessary to make the underlying hardware/firmware take note.
  *
  * VM may require a virtual instruction to alert the MMU.
  */
-#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {

From patchwork Tue Feb 28 21:37:14 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org
Subject: [PATCH v3 11/34] ia64: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:14 +0000
Message-Id: <20230228213738.272178-12-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
per-folio, which makes arch_dma_mark_clean() and mark_clean() a little
more exciting.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/hp/common/sba_iommu.c    | 26 +++++++++++++-----------
 arch/ia64/include/asm/cacheflush.h | 14 ++++++++++----
 arch/ia64/include/asm/pgtable.h    | 14 +++++++++++++-
 arch/ia64/mm/init.c                | 29 +++++++++++++++++----------
 4 files changed, 57 insertions(+), 26 deletions(-)

diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 8ad6946521d8..48d475f10003 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba)
 #endif

 #ifdef ENABLE_MARK_CLEAN
-/**
+/*
  * Since DMA is i-cache coherent, any (complete) pages that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
-static void
-mark_clean (void *addr, size_t size)
+static void mark_clean(void *addr, size_t size)
 {
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page((void *)pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
+	struct folio *folio = virt_to_folio(addr);
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, addr);
+
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
 	}
 }
 #endif

diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h
index 708c0fa5d975..eac493fa9e0d 100644
--- a/arch/ia64/include/asm/cacheflush.h
+++ b/arch/ia64/include/asm/cacheflush.h
@@ -13,10 +13,16 @@
 #include

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)			\
-do {						\
-	clear_bit(PG_arch_1, &(page)->flags);	\
-} while (0)
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	clear_bit(PG_arch_1, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}

 extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_range flush_icache_range

diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 21c97e31a28a..0c2be4ea664b 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -303,7 +303,18 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	*ptep = pteval;
 }

-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 /*
  * Make page protection values cacheable, uncacheable, or write-
@@ -396,6 +407,7 @@ pte_same (pte_t a, pte_t b)
 	return pte_val(a) == pte_val(b);
 }

+#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0)
 #define update_mmu_cache(vma, address, ptep) do { } while (0)

 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];

diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 7f5353e28516..12aef25944aa 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -50,30 +50,39 @@ void
 __ia64_sync_icache_dcache (pte_t pte)
 {
 	unsigned long addr;
-	struct page *page;
+	struct folio *folio;

-	page = pte_page(pte);
-	addr = (unsigned long) page_address(page);
+	folio = page_folio(pte_page(pte));
+	addr = (unsigned long)folio_address(folio);

-	if (test_bit(PG_arch_1, &page->flags))
+	if (test_bit(PG_arch_1, &folio->flags))
 		return;	/* i-cache is already coherent with d-cache */

-	flush_icache_range(addr, addr + page_size(page));
-	set_bit(PG_arch_1, &page->flags);	/* mark page as clean */
+	flush_icache_range(addr, addr + folio_size(folio));
+	set_bit(PG_arch_1, &folio->flags);	/* mark page as clean */
 }

 /*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * Since DMA is i-cache coherent, any (complete) folios that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
 void arch_dma_mark_clean(phys_addr_t paddr, size_t size)
 {
-	unsigned long pfn = PHYS_PFN(paddr);
+	struct folio *folio = page_folio(phys_to_page(paddr));
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, paddr);

-	do {
-		set_bit(PG_arch_1, &pfn_to_page(pfn)->flags);
-	} while (++pfn <= PHYS_PFN(paddr + size - 1));
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= (ssize_t)folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
+	}
 }

 inline void

From patchwork Tue Feb 28 21:37:15 2023
N3GIZC4rqgn3P//+KVhXPknGJ/J+PLr4uzkkj/ZAmK8Kp1dglWSVVGN+44eBlFHrtp7r1RIDhlb+9 5+/0JPsQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pX7fI-0018pA-LT; Tue, 28 Feb 2023 21:37:40 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Huacai Chen , WANG Xuerui , loongarch@lists.linux.dev Subject: [PATCH v3 12/34] loongarch: Implement the new page table range API Date: Tue, 28 Feb 2023 21:37:15 +0000 Message-Id: <20230228213738.272178-13-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 7A9ECA000E X-Stat-Signature: wptofu9ar3eqsnuy8e53qhm4roe3sgqn X-Rspam-User: X-HE-Tag: 1677620263-652599 X-HE-Meta: U2FsdGVkX18C4UcxGnoTI3SKaCDvD3U4fLmnYF6shaE5EBGD6eF2czqX5NyOUPx5d4MpwqbB850JwhjZc7KHi8MGSggwuMjbP+4I4iPvnvcSgQwcff3TudQZqKT9IfH+ccMKoDLSpjoTcFjNR0duaSByqa1nwJT+2mH8Yk1raL17FR/uCNvusJ48TOFc5fKcb1Gw70584pdWJ8gIP6hZWqLiKkl8f89of9pdVuU/fg+JShTiI18Gk//28jFMf188HPdc5pxdR6MthhQBnBbooWCdN7Nn+RMf0K0Yf/O+7UCN4Ko7JYRgVe6oMJ06RvdPQyxBLa2yO+Q5ue6nuvLA3G5kWFH4LH/mb9458ZVJf7yHyc2aW7mNeS6PGm95fC9q1gkGFAf+shf8Uf0WBkFz98Rvvwq1t85LVEzfFgHl08Rq8cg0JC98EHJiFyM2zUkHmypEpJ88rJ54BfDInN4qmaoIO51hnlas1zBninRqR+6cWq4idTzqDH+JaEG0IkeX0FKZM+YfXEysTByPU54cFG0vUyIaReVL8OAmAAZJlU/kv/cgNq4kmIlacNN1d5SYTQ1PZ0vURgZi5PhGZ7J15ltvBLQknkTStGoAy4zA9504iAracZ3szfSmTLr5M5EeKI+l/U6IWOb7uGbgurp5+EX0tvm2ZZEFkWydB+NQ/hsPFa3wTPNOeHipJFU2x5Oml0+vC0EKlA4U2XFuTRETawJgp+S3Mx7krSpMTLb2cv0B6JJ8Nhm/p6VrUXynJnuJOVBei+S4J+AycydUOJU00TO9v2pM4zmzxG6bVWhHMGC/2WTaXW5J1vnnNjRG7eDfnF5grA0Fp7QpnfFp7aexEraoFODJoqRv66pEHCQgYRXSrmPaCUkawKpnqDZM5uv8IlLsoLMKL77uIg224Szm8nWwtH12SM8NgNlOq9wbhKKHGuNx5V0CR8IDekayrhPY90ZteF98T4WyMe7rA10 f/ye8Nm/ 
Add set_ptes() and update_mmu_cache_range().  It would probably be more
efficient to implement __update_tlb() by flushing the entire folio
instead of calling __update_tlb() N times, but I'll leave that for
someone who understands the architecture better.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Huacai Chen
Cc: WANG Xuerui
Cc: loongarch@lists.linux.dev
Signed-off-by: WANG Xuerui
---
 arch/loongarch/include/asm/cacheflush.h |  2 ++
 arch/loongarch/include/asm/pgtable.h    | 30 +++++++++++++++++++------
 2 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 0681788eb474..7907eb42bfbd 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -47,8 +47,10 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
 #define flush_icache_page(vma, page)			do { } while (0)
+#define flush_icache_pages(vma, page, nr)		do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
+#define flush_dcache_folio(folio)			do { } while (0)
 #define flush_dcache_mmap_lock(mapping)			do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)		do { } while (0)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index d28fb9dbec59..9154d317ffb4 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -334,12 +334,20 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	}
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pteval)
-{
-	set_pte(ptep, pteval);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << _PFN_SHIFT;
+	}
 }

+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	/* Preserve global status for the pair */
@@ -445,11 +453,19 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern void __update_tlb(struct vm_area_struct *vma,
 			unsigned long address, pte_t *ptep);

-static inline void update_mmu_cache(struct vm_area_struct *vma,
-		unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
-	__update_tlb(vma, address, ptep);
+	for (;;) {
+		__update_tlb(vma, address, ptep);
+		if (--nr == 0)
+			break;
+		address += PAGE_SIZE;
+		ptep++;
+	}
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache

From patchwork Tue Feb 28 21:37:16 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155247
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
    Geert Uytterhoeven, linux-m68k@lists.linux-m68k.org
Subject: [PATCH v3 13/34] m68k: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:16 +0000
Message-Id: <20230228213738.272178-14-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Geert Uytterhoeven
Cc: linux-m68k@lists.linux-m68k.org
---
 arch/m68k/include/asm/cacheflush_mm.h | 26 +++++++++++++++++---------
 arch/m68k/include/asm/pgtable_mm.h    | 21 ++++++++++++++++++---
 arch/m68k/mm/motorola.c               |  2 +-
 3 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 1ac55e7b47f0..d43c8bce149b 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -220,24 +220,28 @@ static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vm
 /* Push the page at kernel virtual address and clear the icache */
 /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
-static inline void __flush_page_to_ram(void *vaddr)
+static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 {
 	if (CPU_IS_COLDFIRE) {
 		unsigned long addr, start, end;
 		addr = ((unsigned long) vaddr) & ~(PAGE_SIZE - 1);
 		start = addr & ICACHE_SET_MASK;
-		end = (addr + PAGE_SIZE - 1) & ICACHE_SET_MASK;
+		end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK;
 		if (start > end) {
 			flush_cf_bcache(0, end);
 			end = ICACHE_MAX_ADDR;
 		}
 		flush_cf_bcache(start, end);
 	} else if (CPU_IS_040_OR_060) {
-		__asm__ __volatile__("nop\n\t"
-				     ".chip 68040\n\t"
-				     "cpushp %%bc,(%0)\n\t"
-				     ".chip 68k"
-				     : : "a" (__pa(vaddr)));
+		unsigned long paddr = __pa(vaddr);
+
+		while (nr--) {
+			__asm__ __volatile__("nop\n\t"
+					     ".chip 68040\n\t"
+					     "cpushp %%bc,(%0)\n\t"
+					     ".chip 68k"
+					     : : "a" (paddr + nr * PAGE_SIZE));
+		}
 	} else {
 		unsigned long _tmp;
 		__asm__ __volatile__("movec %%cacr,%0\n\t"
@@ -249,10 +253,14 @@ static inline void __flush_page_to_ram(void *vaddr)
 }

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)		__flush_page_to_ram(page_address(page))
+#define flush_dcache_page(page)	__flush_pages_to_ram(page_address(page), 1)
+#define flush_dcache_folio(folio)		\
+	__flush_pages_to_ram(folio_address(folio), folio_nr_pages(folio))
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)	__flush_page_to_ram(page_address(page))
+#define flush_icache_pages(vma, page, nr)	\
+	__flush_pages_to_ram(page_address(page), nr)
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)

 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 				   unsigned long addr, int len);

diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h
index b93c41fe2067..400206c17c97 100644
--- a/arch/m68k/include/asm/pgtable_mm.h
+++ b/arch/m68k/include/asm/pgtable_mm.h
@@ -31,8 +31,20 @@ do{							\
 	*(pteptr) = (pteval);				\
 } while(0)

-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #if CONFIG_PGTABLE_LEVELS == 3
@@ -138,11 +150,14 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
  * tables contain all the necessary information.  The Sun3 does, but
  * they are updated on demand.
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-		unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 #endif /* !__ASSEMBLY__ */

 /* MMU-specific headers */

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 2a375637e007..7784d0fcdf6e 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -81,7 +81,7 @@ static inline void cache_page(void *vaddr)
 void mmu_page_ctor(void *page)
 {
-	__flush_page_to_ram(page);
+	__flush_pages_to_ram(page, 1);
 	flush_tlb_kernel_page(page);
 	nocache_page(page);
 }

From patchwork Tue Feb 28 21:37:17 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155229
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Michal Simek
Subject: [PATCH v3 14/34] microblaze: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:17 +0000
Message-Id: <20230228213738.272178-15-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  Also change the calling convention for set_pte()
to be the same as other architectures.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Michal Simek
---
 arch/microblaze/include/asm/cacheflush.h |  8 ++++++++
 arch/microblaze/include/asm/pgtable.h    | 17 ++++++++++++-----
 arch/microblaze/include/asm/tlbflush.h   |  4 +++-
 3 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
index 39f8fb6768d8..e6641ff98cb3 100644
--- a/arch/microblaze/include/asm/cacheflush.h
+++ b/arch/microblaze/include/asm/cacheflush.h
@@ -74,6 +74,14 @@ do { \
 	flush_dcache_range((unsigned) (addr), (unsigned) (addr) + PAGE_SIZE); \
 } while (0);

+static void flush_dcache_folio(struct folio *folio)
+{
+	unsigned long addr = folio_pfn(folio) << PAGE_SHIFT;
+
+	flush_dcache_range(addr, addr + folio_size(folio));
+}
+#define flush_dcache_folio flush_dcache_folio
+
 #define flush_cache_page(vma, vmaddr, pfn) \
 	flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);

diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index d1b8272abcd9..a01e1369b486 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -330,18 +330,25 @@ static inline unsigned long pte_update(pte_t *p, unsigned long clr,
 /*
  * set_pte stores a linux PTE into the linux page table.
  */
-static inline void set_pte(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	*ptep = pte;
 }

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	*ptep = pte;
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << PFN_SHIFT_OFFSET;
+	}
 }

+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 		unsigned long address, pte_t *ptep)

diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h
index 2038168ed128..1b179e5e9062 100644
--- a/arch/microblaze/include/asm/tlbflush.h
+++ b/arch/microblaze/include/asm/tlbflush.h
@@ -33,7 +33,9 @@ static inline void local_flush_tlb_range(struct vm_area_struct *vma,
 #define flush_tlb_kernel_range(start, end)	do { } while (0)

-#define update_mmu_cache(vma, addr, ptep)	do { } while (0)
+#define update_mmu_cache_range(vma, addr, ptep, nr)	do { } while (0)
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)

 #define flush_tlb_all local_flush_tlb_all
 #define flush_tlb_mm local_flush_tlb_mm

From patchwork Tue Feb 28 21:37:18 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155233
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
    Thomas Bogendoerfer, linux-mips@vger.kernel.org
Subject: [PATCH v3 15/34] mips: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:18 +0000
Message-Id: <20230228213738.272178-16-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  Change the PG_arch_1 (aka PG_dcache_dirty) flag
from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Thomas Bogendoerfer
Cc: linux-mips@vger.kernel.org
---
 arch/mips/include/asm/cacheflush.h | 32 +++++++++++------
 arch/mips/include/asm/pgtable.h    | 36 +++++++++++++------
 arch/mips/mm/c-r4k.c               |  5 +--
 arch/mips/mm/cache.c               | 56 +++++++++++++++---------------
 arch/mips/mm/init.c                | 17 +++++----
 5 files changed, 88 insertions(+), 58 deletions(-)

diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index b3dc9c589442..2683cade42ef 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -36,12 +36,12 @@
  */
 #define PG_dcache_dirty			PG_arch_1

-#define Page_dcache_dirty(page)		\
-	test_bit(PG_dcache_dirty, &(page)->flags)
-#define SetPageDcacheDirty(page)	\
-	set_bit(PG_dcache_dirty, &(page)->flags)
-#define ClearPageDcacheDirty(page)	\
-	clear_bit(PG_dcache_dirty, &(page)->flags)
+#define folio_test_dcache_dirty(folio)		\
+	test_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_set_dcache_dirty(folio)	\
+	set_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_clear_dcache_dirty(folio)	\
+	clear_bit(PG_dcache_dirty, &(folio)->flags)

 extern void (*flush_cache_all)(void);
 extern void (*__flush_cache_all)(void);
@@ -50,15 +50,24 @@ extern void (*flush_cache_mm)(struct mm_struct *mm);
 extern void (*flush_cache_range)(struct vm_area_struct *vma,
	unsigned long start, unsigned long end);
 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
-extern void __flush_dcache_page(struct page *page);
+extern void __flush_dcache_pages(struct page *page, unsigned int nr);

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (cpu_has_dc_aliases)
+		__flush_dcache_pages(&folio->page, folio_nr_pages(folio));
+	else if (!cpu_has_ic_fills_f_dc)
+		folio_set_dcache_dirty(folio);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 static inline void flush_dcache_page(struct page *page)
 {
 	if (cpu_has_dc_aliases)
-		__flush_dcache_page(page);
+		__flush_dcache_pages(page, 1);
 	else if (!cpu_has_ic_fills_f_dc)
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(page_folio(page));
 }

 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
@@ -73,10 +82,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }

-static inline void flush_icache_page(struct vm_area_struct *vma,
-	struct page *page)
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+	struct page *page, unsigned int nr)
 {
 }
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)

 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);

diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 791389bf3c12..0cf0455e6ae8 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -105,8 +105,10 @@ do {									\
 	}								\
 } while(0)

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval);
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr);
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 #if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)

@@ -204,19 +206,31 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 }
 #endif

-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-	pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
+	unsigned int i;
+	bool do_sync = false;

-	if (!pte_present(pteval))
-		goto cache_sync_done;
+	for (i = 0; i < nr; i++) {
+		if (!pte_present(pte))
+			continue;
+		if (pte_present(ptep[i]) &&
+		    (pte_pfn(ptep[i]) == pte_pfn(pte)))
+			continue;
+		do_sync = true;
+	}

-	if (pte_present(*ptep) && (pte_pfn(*ptep) == pte_pfn(pteval)))
-		goto cache_sync_done;
+	if (do_sync)
+		__update_cache(addr, pte);

-	__update_cache(addr, pteval);
-cache_sync_done:
-	set_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << _PFN_SHIFT;
+	}
 }

 /*

diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index a549fa98c2f4..7d2a42f0cffd 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -679,13 +679,14 @@ static inline void local_r4k_flush_cache_page(void *args)
 	if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID))
 		vaddr = NULL;
 	else {
+		struct folio *folio = page_folio(page);
 		/*
 		 * Use kmap_coherent or kmap_atomic to do flushes for
 		 * another ASID than the current one.
 		 */
 		map_coherent = (cpu_has_dc_aliases &&
-				page_mapcount(page) &&
-				!Page_dcache_dirty(page));
+				folio_mapped(folio) &&
+				!folio_test_dcache_dirty(folio));
 		if (map_coherent)
 			vaddr = kmap_coherent(page, addr);
 		else

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 11b3e7ddafd5..0668435521fc 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -82,13 +82,15 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
 	return 0;
 }

-void __flush_dcache_page(struct page *page)
+void __flush_dcache_pages(struct page *page, unsigned int nr)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct folio *folio = page_folio(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 	unsigned long addr;
+	unsigned int i;

 	if (mapping && !mapping_mapped(mapping)) {
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(folio);
 		return;
 	}

@@ -97,25 +99,21 @@ void __flush_dcache_page(struct page *page)
 	 * case is for exec env/arg pages and those are %99 certainly going to
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
-	if (PageHighMem(page))
-		addr = (unsigned long)kmap_atomic(page);
-	else
-		addr = (unsigned long)page_address(page);
-
-	flush_data_cache_page(addr);
-
-	if (PageHighMem(page))
-		kunmap_atomic((void *)addr);
+	for (i = 0; i < nr; i++) {
+		addr = (unsigned long)kmap_local_page(page + i);
+		flush_data_cache_page(addr);
+		kunmap_local((void *)addr);
+	}
 }
-
-EXPORT_SYMBOL(__flush_dcache_page);
+EXPORT_SYMBOL(__flush_dcache_pages);

 void __flush_anon_page(struct page *page, unsigned long vmaddr)
 {
 	unsigned long addr = (unsigned long) page_address(page);
+	struct folio *folio = page_folio(page);

 	if (pages_do_alias(addr, vmaddr)) {
-		if (page_mapcount(page) && !Page_dcache_dirty(page)) {
+		if (folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 			void *kaddr;

 			kaddr = kmap_coherent(page, vmaddr);
@@ -130,27 +128,29 @@ EXPORT_SYMBOL(__flush_anon_page);

 void __update_cache(unsigned long address, pte_t pte)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned long pfn, addr;
 	int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc;
+	unsigned int i;

 	pfn = pte_pfn(pte);
 	if (unlikely(!pfn_valid(pfn)))
 		return;

-	page = pfn_to_page(pfn);
-	if (Page_dcache_dirty(page)) {
-		if (PageHighMem(page))
-			addr = (unsigned long)kmap_atomic(page);
-		else
-			addr = (unsigned long)page_address(page);
-
-		if (exec || pages_do_alias(addr, address & PAGE_MASK))
-			flush_data_cache_page(addr);
-		if (PageHighMem(page))
-			kunmap_atomic((void *)addr);
+	folio = page_folio(pfn_to_page(pfn));
+	address &= PAGE_MASK;
+	address -= offset_in_folio(folio, pfn << PAGE_SHIFT);
+
+	if (folio_test_dcache_dirty(folio)) {
+		for (i = 0; i < folio_nr_pages(folio); i++) {
+			addr = (unsigned long)kmap_local_folio(folio, i);

-		ClearPageDcacheDirty(page);
+			if (exec || pages_do_alias(addr, address))
+				flush_data_cache_page(addr);
+			kunmap_local((void *)addr);
+			address += PAGE_SIZE;
+		}
+		folio_clear_dcache_dirty(folio);
 	}
 }

diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 
5a8002839550..19d4ca3b3fbd 100644 --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -88,7 +88,7 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot) pte_t pte; int tlbidx; - BUG_ON(Page_dcache_dirty(page)); + BUG_ON(folio_test_dcache_dirty(page_folio(page))); preempt_disable(); pagefault_disable(); @@ -169,11 +169,12 @@ void kunmap_coherent(void) void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct *vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); if (cpu_has_dc_aliases && - page_mapcount(from) && !Page_dcache_dirty(from)) { + folio_mapped(src) && !folio_test_dcache_dirty(src)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(); @@ -194,15 +195,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (cpu_has_dc_aliases && - page_mapcount(page) && !Page_dcache_dirty(page)) { + folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(); } else { memcpy(dst, src, len); if (cpu_has_dc_aliases) - SetPageDcacheDirty(page); + folio_set_dcache_dirty(folio); } if (vma->vm_flags & VM_EXEC) flush_cache_page(vma, vaddr, page_to_pfn(page)); @@ -212,15 +215,17 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (cpu_has_dc_aliases && - page_mapcount(page) && !Page_dcache_dirty(page)) { + folio_mapped(folio) && !folio_test_dcache_dirty(folio)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(); } else { memcpy(dst, src, len); if (cpu_has_dc_aliases) - SetPageDcacheDirty(page); + 
folio_set_dcache_dirty(folio); } } EXPORT_SYMBOL_GPL(copy_from_user_page);

From patchwork Tue Feb 28 21:37:19 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Dinh Nguyen
Subject: [PATCH v3 16/34] nios2: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:19 +0000
Message-Id: <20230228213738.272178-17-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Cc: Dinh Nguyen --- arch/nios2/include/asm/cacheflush.h | 6 ++- arch/nios2/include/asm/pgtable.h | 27 +++++++++---- arch/nios2/mm/cacheflush.c | 61 ++++++++++++++++------------- 3 files changed, 58 insertions(+), 36 deletions(-) diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h index d0b71dd71287..8624ca83cffe 100644 --- a/arch/nios2/include/asm/cacheflush.h +++ b/arch/nios2/include/asm/cacheflush.h @@ -29,9 +29,13 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio extern void flush_icache_range(unsigned long start, unsigned long end); -extern void flush_icache_page(struct vm_area_struct *vma, struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1); #define flush_cache_vmap(start, end) flush_dcache_range(start, end) #define flush_cache_vunmap(start, end) flush_dcache_range(start, end) diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h index 0f5c2564e9f5..8a77821a17a5 100644 --- a/arch/nios2/include/asm/pgtable.h +++ b/arch/nios2/include/asm/pgtable.h @@ -178,15 +178,23 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) *ptep = pteval; } -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) { - unsigned long paddr = (unsigned long)page_to_virt(pte_page(pteval)); - - flush_dcache_range(paddr, paddr + PAGE_SIZE); - set_pte(ptep, pteval); + unsigned long paddr = 
(unsigned long)page_to_virt(pte_page(pte)); + + flush_dcache_range(paddr, paddr + nr * PAGE_SIZE); + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += 1; + } } +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + static inline int pmd_none(pmd_t pmd) { return (pmd_val(pmd) == @@ -273,7 +281,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) extern void __init paging_init(void); extern void __init mmu_init(void); -extern void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *pte); +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr); + +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #endif /* _ASM_NIOS2_PGTABLE_H */ diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c index 6aa9257c3ede..471485a84b2c 100644 --- a/arch/nios2/mm/cacheflush.c +++ b/arch/nios2/mm/cacheflush.c @@ -138,10 +138,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, __flush_icache(start, end); } -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { unsigned long start = (unsigned long) page_address(page); - unsigned long end = start + PAGE_SIZE; + unsigned long end = start + nr * PAGE_SIZE; __flush_dcache(start, end); __flush_icache(start, end); @@ -158,19 +159,19 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, __flush_icache(start, end); } -void __flush_dcache_page(struct address_space *mapping, struct page *page) +void __flush_dcache_folio(struct address_space *mapping, struct folio *folio) { /* * Writeback any data associated with the kernel mapping of this * page. This ensures that data in the physical page is mutually * coherent with the kernels mapping. 
*/ - unsigned long start = (unsigned long)page_address(page); + unsigned long start = (unsigned long)folio_address(folio); - __flush_dcache(start, start + PAGE_SIZE); + __flush_dcache(start, start + folio_size(folio)); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { struct address_space *mapping; @@ -178,32 +179,38 @@ void flush_dcache_page(struct page *page) * The zero page is never written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(folio_pfn(folio))) return; - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); /* Flush this page if there are aliases. */ if (mapping && !mapping_mapped(mapping)) { - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } else { - __flush_dcache_page(mapping, page); + __flush_dcache_folio(mapping, folio); if (mapping) { - unsigned long start = (unsigned long)page_address(page); - flush_aliases(mapping, page); - flush_icache_range(start, start + PAGE_SIZE); + unsigned long start = (unsigned long)folio_address(folio); + flush_aliases(mapping, folio); + flush_icache_range(start, start + folio_size(folio)); } - set_bit(PG_dcache_clean, &page->flags); + set_bit(PG_dcache_clean, &folio->flags); } } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); + +void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} +EXPORT_SYMBOL(flush_dcache_page); -void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; unsigned long pfn = pte_pfn(pte); - struct page *page; + struct folio *folio; struct address_space *mapping; reload_tlb_page(vma, address, pte); @@ -215,19 +222,19 @@ void update_mmu_cache(struct vm_area_struct *vma, * The zero page is never 
written to, so never has any dirty * cache lines, and therefore never needs to be flushed. */ - page = pfn_to_page(pfn); - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; - mapping = page_mapping_file(page); - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) - __flush_dcache_page(mapping, page); + folio = page_folio(pfn_to_page(pfn)); + mapping = folio_flush_mapping(folio); + if (!test_and_set_bit(PG_dcache_clean, &folio->flags)) + __flush_dcache_folio(mapping, folio); - if(mapping) - { - flush_aliases(mapping, page); + if (mapping) { + flush_aliases(mapping, folio); if (vma->vm_flags & VM_EXEC) - flush_icache_page(vma, page); + flush_icache_pages(vma, &folio->page, + folio_nr_pages(folio)); } }

From patchwork Tue Feb 28 21:37:20 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Jonas Bonn, Stefan Kristiansson, Stafford Horne, linux-openrisc@vger.kernel.org
Subject: [PATCH v3 17/34] openrisc: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:20 +0000
Message-Id: <20230228213738.272178-18-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.
Signed-off-by: Matthew Wilcox (Oracle) Cc: Jonas Bonn Cc: Stefan Kristiansson Cc: Stafford Horne Cc: linux-openrisc@vger.kernel.org --- arch/openrisc/include/asm/cacheflush.h | 8 +++++++- arch/openrisc/include/asm/pgtable.h | 27 +++++++++++++++++++++----- arch/openrisc/mm/cache.c | 12 ++++++++---- 3 files changed, 37 insertions(+), 10 deletions(-) diff --git a/arch/openrisc/include/asm/cacheflush.h b/arch/openrisc/include/asm/cacheflush.h index eeac40d4a854..984c331ff5f4 100644 --- a/arch/openrisc/include/asm/cacheflush.h +++ b/arch/openrisc/include/asm/cacheflush.h @@ -56,10 +56,16 @@ static inline void sync_icache_dcache(struct page *page) */ #define PG_dc_clean PG_arch_1 +static inline void flush_dcache_folio(struct folio *folio) +{ + clear_bit(PG_dc_clean, &folio->flags); +} +#define flush_dcache_folio flush_dcache_folio + #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 static inline void flush_dcache_page(struct page *page) { - clear_bit(PG_dc_clean, &page->flags); + flush_dcache_folio(page_folio(page)); } #define flush_icache_user_page(vma, page, addr, len) \ diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h index 3eb9b9555d0d..1a7077150d7b 100644 --- a/arch/openrisc/include/asm/pgtable.h +++ b/arch/openrisc/include/asm/pgtable.h @@ -46,7 +46,21 @@ extern void paging_init(void); * hook is made available. */ #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) -#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval) + +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) + /* * (pmds are folded into pgds so this doesn't get actually called, * but the define is needed for a generic inline function.) 
@@ -379,13 +393,16 @@ static inline void update_tlb(struct vm_area_struct *vma, extern void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte); -static inline void update_mmu_cache(struct vm_area_struct *vma, - unsigned long address, pte_t *pte) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { - update_tlb(vma, address, pte); - update_cache(vma, address, pte); + update_tlb(vma, address, ptep); + update_cache(vma, address, ptep); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) + /* __PHX__ FIXME, SWAP, this probably doesn't work */ /* diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c index 534a52ec5e66..eb43b73f3855 100644 --- a/arch/openrisc/mm/cache.c +++ b/arch/openrisc/mm/cache.c @@ -43,15 +43,19 @@ void update_cache(struct vm_area_struct *vma, unsigned long address, pte_t *pte) { unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT; - struct page *page = pfn_to_page(pfn); - int dirty = !test_and_set_bit(PG_dc_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dc_clean, &folio->flags); /* * Since icaches do not snoop for updated data on OpenRISC, we * must write back and invalidate any dirty pages manually. We * can skip data pages, since they will not end up in icaches. 
*/ - if ((vma->vm_flags & VM_EXEC) && dirty) - sync_icache_dcache(page); + if ((vma->vm_flags & VM_EXEC) && dirty) { + unsigned int nr = folio_nr_pages(folio); + + while (nr--) + sync_icache_dcache(folio_page(folio, nr)); + } } From patchwork Tue Feb 28 21:37:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13155250 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 703EBC64EC7 for ; Tue, 28 Feb 2023 21:38:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 636886B0098; Tue, 28 Feb 2023 16:37:53 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 56FEA6B0099; Tue, 28 Feb 2023 16:37:53 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3EA4A6B009A; Tue, 28 Feb 2023 16:37:53 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 2A1256B0098 for ; Tue, 28 Feb 2023 16:37:53 -0500 (EST) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id EC23B806C7 for ; Tue, 28 Feb 2023 21:37:52 +0000 (UTC) X-FDA: 80518013184.03.57FB3B6 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf14.hostedemail.com (Postfix) with ESMTP id 3448F100012 for ; Tue, 28 Feb 2023 21:37:50 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=vP9MT4CY; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; 
c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1677620270; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=f4iId2jx8WHA9aYmBaR2cIQrZXvmhrhHuZFPGqlcK5c=; b=0jSWGtzDopaZQNX3NiMUvhwI15jm4qEZoyLu0scaFg8lTyCIdqxxzVPP9iRx7woq2X9aAd VRY31Out7oa8LnlTDVgd6M8pE/fAN/TCkAmkdl5zcswWdW9FNEunx3seT9H2vdnVwyVcWS GaPxrV3dcCpwsuR32zjqNVoiI2F0sv0= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=vP9MT4CY; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1677620270; a=rsa-sha256; cv=none; b=wbxtQO3B4oQaLUHC+lSU2j0PQUvHSM3wv1xYlZ9AmUvc5N6Ic8De2SesjYDeBILOy+U+36 ht96IVKC/84vue2/R48A5IsaunFbk90H5CUtEaZbezRgoK1oFilG1PoEAHWGIrjOQV68zI cLHHKck4NqnSWMPfEsZcGXTCBEjaVg8= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=f4iId2jx8WHA9aYmBaR2cIQrZXvmhrhHuZFPGqlcK5c=; b=vP9MT4CYFjnnL/8DNjZmLaLR+B AQFbv3zu/rYwMRDoAYahHuoPHudDkC2YpNuwyb/kFfu7wr4IeIszeNzhekKsYM4iIauiNg4BOAYAZ bjOrgyfv4xnULOaD9ypWLqvJ0WKF1mW9nULCcOpvzV9ZQk3R2tiIMG4eTCbafm3RCl0zLv2LuMp/h S+lWGIlyZhgJ6MZu0byuDa9HsVxgBeNYtgolqafsS8lsLN6JIpK+S1MVwOqGSEZ5qRX8HlSDesTQW pXuwKiQAia0i14XRrzJm7nAUsxld01RDBrw2lKKcw/VtR33Tq5l6NEne4lI1LYIG5hsxyb38Q93RZ J/3yLwGQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pX7fJ-0018pa-8U; Tue, 28 Feb 2023 21:37:41 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox 
(Oracle)" , linux-kernel@vger.kernel.org, "James E.J. Bottomley" , Helge Deller , linux-parisc@vger.kernel.org Subject: [PATCH v3 18/34] parisc: Implement the new page table range API Date: Tue, 28 Feb 2023 21:37:21 +0000 Message-Id: <20230228213738.272178-19-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 3448F100012 X-Stat-Signature: mii11e41w7m47738fizpqo6yrorp8emh X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1677620270-761774 X-HE-Meta: U2FsdGVkX18iYx4BLCDufmJOx2z2ceqPt7OXo5fPJRQtK+PO8kQfQfkBlcnFoQWLm1qvDhApYjlC2KuuFAvAquKy4AkD4aoQYhee0umUBmOX+wktJpwU43/5uCjN13j0FdPC4LgM7VRe2dSho37yFbRwK0c+Q7u/6kVD6uVN1+SqPK9N9ODVkY9vj+mLy/zfcb6Q48GZkg2clFaEHAdCpf5ufD+VPzSplTaFkWKEk//PeMWWzZw9JJumKB/u9Ru2BCRBSSWnYfZZle877sIJiqaE1qH4Eaw3j1bWsQyZcVKEzsY9v/jGa0Sd7d4dJkvCj1c614ExKiVoglGJP9r6SvFBdk7NPqi2Kb4OnH+i5WFMeqFKR8c2kCuWkR63Ysk4w5VJYED8O/vRrCldwYRUfyHv+Fu+yDVmVdtGMXCE9RcOEJTksjg929ihPLBNX+W0VciVE3B28STHSai4ZYgO874TQniZba3DI+hUqGecMFk41Z7EEcfXAJUJWN4iuH4a0QIpgdueqR3pHU+Fbl+kFX+WyU0Qx9JeDUwJRNGIHNLEeg86HWCbQld+U2dqAsVfhC1WGqpLgoPsgRiR0j0NlNPG1ozjbZzLYr0WnwpZmLFo53b3pks0RCEzf+/QiWCIYUoBN54aWcsXcg4Xno+0qsTcKpVLSPwCvOR4E8suT/OtVhIkmRCMHEjlzu+ImRgm2/V3ks/ryggIOM72Fv2zIb2CO/n9P94g56mANvrEz9MFmkSPlMWvS/jrn7FRzsFyyqRhtUqmdCxMRgJJ77+aCZHyQBQwSKAizz//R24LIRzDZwDFAYuOG1vIJPhJwilL2ieSyvw+AqbAXjg1GPZFz8hPp4UBKPq7vxtsm/y+2tUskqodHBciu5TMkM2MADGgReEpAtzFWuJRQRtqa+D09/jV3vsPleo0sTW9E00Dmc08BQfZbkP/tYdoiN4rBBVwKw4oPlMlvp2PGFHnp7V MwVfoSuC 6y9gY6EpmheR17ifRCv++Fx4bEG+xi0nicsrNBZuRmlctrlOZB8QZpb6tuLQw/TPUPcY/4yzMLILPMkI9BNgELLPKs6AQTlqLKqK03Ka64l+6wGw7xJTCSs/JV/XkWlVETYTCsktZCRnxFeFwgk/BMDXKQVQn0+tGtDzpfTURFxI5/EqMUNoEeCUmp5MCNytbKNjB4tf9iYNbFIoHDRwvYW20M7pD7zVkyrIpzLcD7AuwNec4LnsU/lWPF0fdHwRvvafyPE8urp1Fcl/XZ/6grW+2voJoX85+fhwubnMSxJEVxeJoeQK0WKdW/wAz6poEnLm4 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: 
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().

Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to
per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Cc: linux-parisc@vger.kernel.org
---
 arch/parisc/include/asm/cacheflush.h |  14 ++--
 arch/parisc/include/asm/pgtable.h    |  28 +++++---
 arch/parisc/kernel/cache.c           | 101 +++++++++++++++++++--------
 3 files changed, 99 insertions(+), 44 deletions(-)

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index ff07c509e04b..0bf8b69d086b 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -46,16 +46,20 @@ void invalidate_kernel_vmap_range(void *vaddr, int size);
 #define flush_cache_vmap(start, end)		flush_cache_all()
 #define flush_cache_vunmap(start, end)		flush_cache_all()
 
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *page);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
 
-#define flush_icache_page(vma,page)	do {		\
-	flush_kernel_dcache_page_addr(page_address(page)); \
-	flush_kernel_icache_page(page_address(page));	\
-} while (0)
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr);
+#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)
 
 #define flush_icache_range(s,e)		do {		\
 	flush_kernel_dcache_range_asm(s,e);		\
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index e2950f5db7c9..78ee9816f423 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -73,14 +73,7 @@ extern void __update_cache(pte_t pte);
 		mb();				\
 	} while(0)
 
-#define set_pte_at(mm, addr, pteptr, pteval)		\
-	do {						\
-		if (pte_present(pteval) &&		\
-		    pte_user(pteval))			\
-			__update_cache(pteval);		\
-		*(pteptr) = (pteval);			\
-		purge_tlb_entries(mm, addr);		\
-	} while (0)
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 #endif /* !__ASSEMBLY__ */
 
@@ -391,11 +384,28 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 
 extern void paging_init (void);
 
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	if (pte_present(pte) && pte_user(pte))
+		__update_cache(pte);
+	for (;;) {
+		*ptep = pte;
+		purge_tlb_entries(mm, addr);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += 1 << PFN_PTE_SHIFT;
+		addr += PAGE_SIZE;
+	}
+}
+
 /* Used for deferring calls to flush_dcache_page() */
 #define PG_dcache_dirty         PG_arch_1
 
-#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep)
+#define update_mmu_cache_range(vma, addr, ptep, nr) __update_cache(*ptep)
+#define update_mmu_cache(vma, addr, ptep) __update_cache(*ptep)
 
 /*
  * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 984d3a1b3828..16057812103b 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -92,11 +92,11 @@ static inline void flush_data_cache(void)
 /* Kernel virtual address of pfn.  */
 #define pfn_va(pfn)	__va(PFN_PHYS(pfn))
 
-void
-__update_cache(pte_t pte)
+void __update_cache(pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
-	struct page *page;
+	struct folio *folio;
+	unsigned int nr;
 
 	/* We don't have pte special.  As a result, we can be called with
 	   an invalid pfn and we don't need to flush the kernel dcache page.
@@ -104,13 +104,17 @@ __update_cache(pte_t pte)
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
-	if (page_mapping_file(page) &&
-	    test_bit(PG_dcache_dirty, &page->flags)) {
-		flush_kernel_dcache_page_addr(pfn_va(pfn));
-		clear_bit(PG_dcache_dirty, &page->flags);
+	folio = page_folio(pfn_to_page(pfn));
+	pfn = folio_pfn(folio);
+	nr = folio_nr_pages(folio);
+	if (folio_flush_mapping(folio) &&
+	    test_bit(PG_dcache_dirty, &folio->flags)) {
+		while (nr--)
+			flush_kernel_dcache_page_addr(pfn_va(pfn + nr));
+		clear_bit(PG_dcache_dirty, &folio->flags);
 	} else if (parisc_requires_coherency())
-		flush_kernel_dcache_page_addr(pfn_va(pfn));
+		while (nr--)
+			flush_kernel_dcache_page_addr(pfn_va(pfn + nr));
 }
 
 void
@@ -365,6 +369,20 @@ static void flush_user_cache_page(struct vm_area_struct *vma, unsigned long vmad
 	preempt_enable();
 }
 
+void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+		unsigned int nr)
+{
+	void *kaddr = page_address(page);
+
+	for (;;) {
+		flush_kernel_dcache_page_addr(kaddr);
+		flush_kernel_icache_page(kaddr);
+		if (--nr == 0)
+			break;
+		page += PAGE_SIZE;
+	}
+}
+
 static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr)
 {
 	pte_t *ptep = NULL;
@@ -393,26 +411,30 @@ static inline bool pte_needs_flush(pte_t pte)
 		== (_PAGE_PRESENT | _PAGE_ACCESSED);
 }
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
-	struct vm_area_struct *mpnt;
-	unsigned long offset;
+	struct address_space *mapping = folio_flush_mapping(folio);
+	struct vm_area_struct *vma;
 	unsigned long addr, old_addr = 0;
+	void *kaddr;
 	unsigned long count = 0;
+	unsigned long i, nr;
 	pgoff_t pgoff;
 
 	if (mapping && !mapping_mapped(mapping)) {
-		set_bit(PG_dcache_dirty, &page->flags);
+		set_bit(PG_dcache_dirty, &folio->flags);
 		return;
 	}
 
-	flush_kernel_dcache_page_addr(page_address(page));
+	nr = folio_nr_pages(folio);
+	kaddr = folio_address(folio);
+	for (i = 0; i < nr; i++)
+		flush_kernel_dcache_page_addr(kaddr + i * PAGE_SIZE);
 
 	if (!mapping)
 		return;
 
-	pgoff = page->index;
+	pgoff = folio->index;
 
 	/*
	 * We have carefully arranged in arch_get_unmapped_area() that
@@ -422,15 +444,29 @@ void flush_dcache_page(struct page *page)
 	 *   on machines that support equivalent aliasing
 	 */
 	flush_dcache_mmap_lock(mapping);
-	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
-		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-		addr = mpnt->vm_start + offset;
-		if (parisc_requires_coherency()) {
-			pte_t *ptep;
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + nr - 1) {
+		unsigned long offset = pgoff - vma->vm_pgoff;
+		unsigned long pfn = folio_pfn(folio);
+
+		addr = vma->vm_start;
+		nr = folio_nr_pages(folio);
+		if (offset > -nr) {
+			pfn -= offset;
+			nr += offset;
+		} else {
+			addr += offset * PAGE_SIZE;
+		}
+		if (addr + nr * PAGE_SIZE > vma->vm_end)
+			nr = (vma->vm_end - addr) / PAGE_SIZE;
 
-			ptep = get_ptep(mpnt->vm_mm, addr);
-			if (ptep && pte_needs_flush(*ptep))
-				flush_user_cache_page(mpnt, addr);
+		if (parisc_requires_coherency()) {
+			for (i = 0; i < nr; i++) {
+				pte_t *ptep = get_ptep(vma->vm_mm,
+							addr + i * PAGE_SIZE);
+				if (ptep && pte_needs_flush(*ptep))
+					flush_user_cache_page(vma,
+							addr + i * PAGE_SIZE);
+			}
 		} else {
 			/*
 			 * The TLB is the engine of coherence on parisc:
@@ -443,27 +479,32 @@ void flush_dcache_page(struct page *page)
 			 * in (until the user or kernel specifically
 			 * accesses it, of course)
 			 */
-			flush_tlb_page(mpnt, addr);
+			for (i = 0; i < nr; i++)
+				flush_tlb_page(vma, addr + i * PAGE_SIZE);
 			if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1))
 					!= (addr & (SHM_COLOUR - 1))) {
-				__flush_cache_page(mpnt, addr, page_to_phys(page));
+				for (i = 0; i < nr; i++)
+					__flush_cache_page(vma,
+						addr + i * PAGE_SIZE,
+						(pfn + i) * PAGE_SIZE);
 				/*
 				 * Software is allowed to have any number
 				 * of private mappings to a page.
				 */
-				if (!(mpnt->vm_flags & VM_SHARED))
+				if (!(vma->vm_flags & VM_SHARED))
 					continue;
 
 				if (old_addr)
 					pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n",
-						old_addr, addr, mpnt->vm_file);
-				old_addr = addr;
+						old_addr, addr, vma->vm_file);
+				if (nr == folio_nr_pages(folio))
+					old_addr = addr;
 			}
 		}
 		WARN_ON(++count == 4096);
 	}
 	flush_dcache_mmap_unlock(mapping);
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 /* Defined in arch/parisc/kernel/pacache.S */
 EXPORT_SYMBOL(flush_kernel_dcache_range_asm);

From patchwork Tue Feb 28 21:37:22 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155289
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Michael Ellerman , Nicholas Piggin , Christophe Leroy , linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 19/34] powerpc: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:22 +0000
Message-Id: <20230228213738.272178-20-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
MIME-Version: 1.0
Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
per-folio.

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/include/asm/book3s/pgtable.h | 10 +----
 arch/powerpc/include/asm/cacheflush.h     | 14 +++++--
 arch/powerpc/include/asm/kvm_ppc.h        | 10 ++---
 arch/powerpc/include/asm/nohash/pgtable.h | 13 ++----
 arch/powerpc/include/asm/pgtable.h        |  6 +++
 arch/powerpc/mm/book3s64/hash_utils.c     | 11 ++---
 arch/powerpc/mm/cacheflush.c              | 40 ++++++-------------
 arch/powerpc/mm/nohash/e500_hugetlbpage.c |  3 +-
 arch/powerpc/mm/pgtable.c                 | 51 +++++++++++++----------
 9 files changed, 77 insertions(+), 81 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h
index d18b748ea3ae..c2ef811505b0 100644
--- a/arch/powerpc/include/asm/book3s/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/pgtable.h
@@ -9,13 +9,6 @@
 #endif
 
 #ifndef __ASSEMBLY__
-/* Insert a PTE, top-level function is out of line. It uses an inline
- * low level function in the respective pgtable-* files
- */
-extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
-		       pte_t pte);
-
-
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 				 pte_t *ptep, pte_t entry, int dirty);
@@ -36,7 +29,8 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t
  * corresponding HPTE into the hash table ahead of time, instead of
  * waiting for the inevitable extra hash-table miss exception.
 */
-static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	if (IS_ENABLED(CONFIG_PPC32) && !mmu_has_feature(MMU_FTR_HPTE_TABLE))
 		return;
diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h
index 7564dd4fd12b..ef7d2de33b89 100644
--- a/arch/powerpc/include/asm/cacheflush.h
+++ b/arch/powerpc/include/asm/cacheflush.h
@@ -35,13 +35,19 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
  * It just marks the page as not i-cache clean.  We do the i-cache
 * flush later when the page is given to a user process, if necessary.
 */
-static inline void flush_dcache_page(struct page *page)
+static inline void flush_dcache_folio(struct folio *folio)
 {
 	if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
 		return;
 	/* avoid an atomic op if possible */
-	if (test_bit(PG_dcache_clean, &page->flags))
-		clear_bit(PG_dcache_clean, &page->flags);
+	if (test_bit(PG_dcache_clean, &folio->flags))
+		clear_bit(PG_dcache_clean, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
 }
 
 void flush_icache_range(unsigned long start, unsigned long stop);
@@ -51,7 +57,7 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 		unsigned long addr, int len);
 #define flush_icache_user_page flush_icache_user_page
 
-void flush_dcache_icache_page(struct page *page);
+void flush_dcache_icache_folio(struct folio *folio);
 
 /**
  * flush_dcache_range(): Write any modified data cache blocks out to memory and
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 6bef23d6d0e3..e91dd8e88bb7 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -868,7 +868,7 @@ void kvmppc_init_lpid(unsigned long nr_lpids);
 
 static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn)
 {
-	struct page *page;
+	struct folio *folio;
 	/*
 	 * We can only access pages that the kernel maps
 	 * as memory.  Bail out for unmapped ones.
 	 */
@@ -877,10 +877,10 @@ static inline void kvmppc_mmu_flush_icache(kvm_pfn_t pfn)
 		return;
 
 	/* Clear i-cache for new pages */
-	page = pfn_to_page(pfn);
-	if (!test_bit(PG_dcache_clean, &page->flags)) {
-		flush_dcache_icache_page(page);
-		set_bit(PG_dcache_clean, &page->flags);
+	folio = page_folio(pfn_to_page(pfn));
+	if (!test_bit(PG_dcache_clean, &folio->flags)) {
+		flush_dcache_icache_folio(folio);
+		set_bit(PG_dcache_clean, &folio->flags);
 	}
 }
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index a6caaaab6f92..69a7dd47a9f0 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -166,12 +166,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 	return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE);
 }
 
-/* Insert a PTE, top-level function is out of line. It uses an inline
- * low level function in the respective pgtable-* files
- */
-extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
-		       pte_t pte);
-
 /* This low level function performs the actual PTE insertion
 * Setting the PTE depends on the MMU type and other factors. It's
 * an horrible mess that I'm not going to try to clean up now but
@@ -282,10 +276,11 @@ static inline int pud_huge(pud_t pud)
 * for the page which has just been mapped in.
 */
 #if defined(CONFIG_PPC_E500) && defined(CONFIG_HUGETLB_PAGE)
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr);
 #else
-static inline
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {}
+static inline void update_mmu_cache(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr) {}
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9972626ddaf6..bf1263ff7e67 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -41,6 +41,12 @@ struct mm_struct;
 
 #ifndef __ASSEMBLY__
 
+void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+		pte_t pte, unsigned int nr);
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1);
+
 #ifndef MAX_PTRS_PER_PGD
 #define MAX_PTRS_PER_PGD PTRS_PER_PGD
 #endif
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index fedffe3ae136..ad2afa08e62e 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1307,18 +1307,19 @@ void hash__early_init_mmu_secondary(void)
 */
 unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
 {
-	struct page *page;
+	struct folio *folio;
 
 	if (!pfn_valid(pte_pfn(pte)))
 		return pp;
 
-	page = pte_page(pte);
+	folio = page_folio(pte_page(pte));
 
 	/* page is dirty */
-	if (!test_bit(PG_dcache_clean, &page->flags) && !PageReserved(page)) {
+	if (!test_bit(PG_dcache_clean, &folio->flags) &&
+	    !folio_test_reserved(folio)) {
 		if (trap == INTERRUPT_INST_STORAGE) {
-			flush_dcache_icache_page(page);
-			set_bit(PG_dcache_clean, &page->flags);
+			flush_dcache_icache_folio(folio);
+			set_bit(PG_dcache_clean, &folio->flags);
 		} else
 			pp |= HPTE_R_N;
 	}
diff --git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c
index 0e9b4879c0f9..8760d2223abe 100644
--- a/arch/powerpc/mm/cacheflush.c
+++ b/arch/powerpc/mm/cacheflush.c
@@ -148,44 +148,30 @@ static void __flush_dcache_icache(void *p)
 	invalidate_icache_range(addr, addr + PAGE_SIZE);
 }
 
-static void flush_dcache_icache_hugepage(struct page *page)
+void flush_dcache_icache_folio(struct folio *folio)
 {
-	int i;
-	int nr = compound_nr(page);
+	unsigned int i, nr = folio_nr_pages(folio);
 
-	if (!PageHighMem(page)) {
+	if (flush_coherent_icache())
+		return;
+
+	if (!folio_test_highmem(folio)) {
+		void *addr = folio_address(folio);
 		for (i = 0; i < nr; i++)
-			__flush_dcache_icache(lowmem_page_address(page + i));
-	} else {
+			__flush_dcache_icache(addr + i * PAGE_SIZE);
+	} else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) {
 		for (i = 0; i < nr; i++) {
-			void *start = kmap_local_page(page + i);
+			void *start = kmap_local_folio(folio, i * PAGE_SIZE);
 			__flush_dcache_icache(start);
 			kunmap_local(start);
 		}
-	}
-}
-
-void flush_dcache_icache_page(struct page *page)
-{
-	if (flush_coherent_icache())
-		return;
-
-	if (PageCompound(page))
-		return flush_dcache_icache_hugepage(page);
-
-	if (!PageHighMem(page)) {
-		__flush_dcache_icache(lowmem_page_address(page));
-	} else if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) {
-		void *start = kmap_local_page(page);
-
-		__flush_dcache_icache(start);
-		kunmap_local(start);
 	} else {
-		flush_dcache_icache_phys(page_to_phys(page));
+		unsigned long pfn = folio_pfn(folio);
+		for (i = 0; i < nr; i++)
+			flush_dcache_icache_phys((pfn + i) * PAGE_SIZE);
 	}
 }
-EXPORT_SYMBOL(flush_dcache_icache_page);
 
 void clear_user_page(void *page, unsigned long vaddr, struct page *pg)
 {
diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c
index 58c8d9849cb1..f3cb91107a47 100644
--- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c
+++ b/arch/powerpc/mm/nohash/e500_hugetlbpage.c
@@ -178,7 +178,8 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte)
 *
 * This must always be called with the pte lock held.
 */
-void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 	if (is_vm_hugetlb_page(vma))
 		book3e_hugetlb_preload(vma, address, *ptep);
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index cb2dcdb18f8e..b3c7b874a7a2 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -58,7 +58,7 @@ static inline int pte_looks_normal(pte_t pte)
 	return 0;
 }
 
-static struct page *maybe_pte_to_page(pte_t pte)
+static struct folio *maybe_pte_to_folio(pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
 	struct page *page;
@@ -68,7 +68,7 @@ static struct page *maybe_pte_to_page(pte_t pte)
 	page = pfn_to_page(pfn);
 	if (PageReserved(page))
 		return NULL;
-	return page;
+	return page_folio(page);
 }
 
 #ifdef CONFIG_PPC_BOOK3S
@@ -84,12 +84,12 @@ static pte_t set_pte_filter_hash(pte_t pte)
 	pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS);
 	if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) ||
 				       cpu_has_feature(CPU_FTR_NOEXECUTE))) {
-		struct page *pg = maybe_pte_to_page(pte);
-		if (!pg)
+		struct folio *folio = maybe_pte_to_folio(pte);
+		if (!folio)
 			return pte;
-		if (!test_bit(PG_dcache_clean, &pg->flags)) {
-			flush_dcache_icache_page(pg);
-			set_bit(PG_dcache_clean, &pg->flags);
+		if (!test_bit(PG_dcache_clean, &folio->flags)) {
+			flush_dcache_icache_folio(folio);
+			set_bit(PG_dcache_clean, &folio->flags);
 		}
 	}
 	return pte;
@@ -107,7 +107,7 @@ static pte_t set_pte_filter_hash(pte_t pte) { return pte; }
 */
 static inline pte_t set_pte_filter(pte_t pte)
 {
-	struct page *pg;
+	struct folio *folio;
 
 	if (radix_enabled())
 		return pte;
@@ -120,18 +120,18 @@ static inline pte_t set_pte_filter(pte_t pte)
 		return pte;
 
 	/* If you set _PAGE_EXEC on weird pages you're on your own */
-	pg = maybe_pte_to_page(pte);
-	if (unlikely(!pg))
+	folio = maybe_pte_to_folio(pte);
+	if (unlikely(!folio))
 		return pte;
 
 	/* If the page clean, we move on */
-	if (test_bit(PG_dcache_clean, &pg->flags))
+	if (test_bit(PG_dcache_clean, &folio->flags))
 		return pte;
 
 	/* If it's an exec fault, we flush the cache and make it clean */
 	if (is_exec_fault()) {
-		flush_dcache_icache_page(pg);
-		set_bit(PG_dcache_clean, &pg->flags);
+		flush_dcache_icache_folio(folio);
+		set_bit(PG_dcache_clean, &folio->flags);
 		return pte;
 	}
 
@@ -142,7 +142,7 @@ static inline pte_t set_pte_filter(pte_t pte)
 static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 				     int dirty)
 {
-	struct page *pg;
+	struct folio *folio;
 
 	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
 		return pte;
@@ -168,17 +168,17 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 #endif /* CONFIG_DEBUG_VM */
 
 	/* If you set _PAGE_EXEC on weird pages you're on your own */
-	pg = maybe_pte_to_page(pte);
-	if (unlikely(!pg))
+	folio = maybe_pte_to_folio(pte);
+	if (unlikely(!folio))
 		goto bail;
 
 	/* If the page is already clean, we move on */
-	if (test_bit(PG_dcache_clean, &pg->flags))
+	if (test_bit(PG_dcache_clean, &folio->flags))
 		goto bail;
 
 	/* Clean the page and set PG_dcache_clean */
-	flush_dcache_icache_page(pg);
-	set_bit(PG_dcache_clean, &pg->flags);
+	flush_dcache_icache_folio(folio);
+	set_bit(PG_dcache_clean, &folio->flags);
 
  bail:
 	return pte_mkexec(pte);
@@ -187,8 +187,8 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 /*
 * set_pte stores a linux PTE into the linux page table.
 */
-void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
-		pte_t pte)
+void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+		pte_t pte, unsigned int nr)
 {
 	/*
	 * Make sure hardware valid bit is not set. We don't do
@@ -203,7 +203,14 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 	pte = set_pte_filter(pte);
 
 	/* Perform the setting of the PTE */
-	__set_pte_at(mm, addr, ptep, pte, 0);
+	for (;;) {
+		__set_pte_at(mm, addr, ptep, pte, 0);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = __pte(pte_val(pte) + PAGE_SIZE);
+		addr += PAGE_SIZE;
+	}
 }
 
 void unmap_kernel_page(unsigned long va)

From patchwork Tue Feb 28 21:37:23 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155243
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Alexandre Ghiti , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org
Subject: [PATCH v3 20/34] riscv: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:23 +0000
Message-Id: <20230228213738.272178-21-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
MIME-Version: 1.0
kUcyB4Qh9aTu3htMFIL60Dg5juFMMuiizMaR1vQ6Rzx/EOlaGtbwjlQKC38v3lBUJO2gtWhKvMSgLYY2s3c0p9n4ZsGBuN67xZJoYsK0sOw2a4F2DLMH480rfqFFQPnEllEkDTr07iSHb6/y34KsTz7Yv16xriUZkKlIA8TI5vCxRpn2gj9JvMRlWenO0devIdJ/L76Ia2qXJrgchUlsziOS8ZgDx61bRmhLstthFwDHQLFuuvuwadGKryXiv7c1b6+ugOnS7oEp8ol/rrE3uQtMBGnZ1IoGIw5KbiVx2RA2JIr2wapq8WFNAiCImrKfRvkUTKKsADRTdx0XKyBoY90H+5D1NHMktXBMd8i0eWW3DXgV/LuPSA9bJiQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio(). Change the PG_dcache_clean flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Alexandre Ghiti Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: linux-riscv@lists.infradead.org Acked-by: Palmer Dabbelt --- arch/riscv/include/asm/cacheflush.h | 19 +++++++++---------- arch/riscv/include/asm/pgtable.h | 26 +++++++++++++++++++------- arch/riscv/mm/cacheflush.c | 11 ++--------- 3 files changed, 30 insertions(+), 26 deletions(-) diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h index 03e3b95ae6da..10e5e96f09b5 100644 --- a/arch/riscv/include/asm/cacheflush.h +++ b/arch/riscv/include/asm/cacheflush.h @@ -15,20 +15,19 @@ static inline void local_flush_icache_all(void) #define PG_dcache_clean PG_arch_1 -static inline void flush_dcache_page(struct page *page) +static inline void flush_dcache_folio(struct folio *folio) { - /* - * HugeTLB pages are always fully mapped and only head page will be - * set PG_dcache_clean (see comments in flush_icache_pte()). 
-         */
-        if (PageHuge(page))
-                page = compound_head(page);
-
-        if (test_bit(PG_dcache_clean, &page->flags))
-                clear_bit(PG_dcache_clean, &page->flags);
+        if (test_bit(PG_dcache_clean, &folio->flags))
+                clear_bit(PG_dcache_clean, &folio->flags);
 }
+#define flush_dcache_folio flush_dcache_folio
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1

+static inline void flush_dcache_page(struct page *page)
+{
+        flush_dcache_folio(page_folio(page));
+}
+
 /*
  * RISC-V doesn't have an instruction to flush parts of the instruction cache,
  * so instead we just flush the whole thing.

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index b516f3b59616..3a3a776fc047 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -405,8 +405,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)

 /* Commit new configuration to MMU hardware */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-        unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+        unsigned long address, pte_t *ptep, unsigned int nr)
 {
         /*
          * The kernel assumes that TLBs don't cache invalid entries, but
@@ -415,8 +415,11 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
          * Relying on flush_tlb_fix_spurious_fault would suffice, but
          * the extra traps reduce performance. So, eagerly SFENCE.VMA.
          */
-        local_flush_tlb_page(address);
+        while (nr--)
+                local_flush_tlb_page(address + nr * PAGE_SIZE);
 }
+#define update_mmu_cache(vma, addr, ptep) \
+        update_mmu_cache_range(vma, addr, ptep, 1)

 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb update_mmu_cache
@@ -456,12 +459,21 @@ static inline void __set_pte_at(struct mm_struct *mm,
         set_pte(ptep, pteval);
 }

-static inline void set_pte_at(struct mm_struct *mm,
-        unsigned long addr, pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+                pte_t *ptep, pte_t pteval, unsigned int nr)
 {
-        page_table_check_ptes_set(mm, addr, ptep, pteval, 1);
-        __set_pte_at(mm, addr, ptep, pteval);
+        page_table_check_ptes_set(mm, addr, ptep, pteval, nr);
+
+        for (;;) {
+                __set_pte_at(mm, addr, ptep, pteval);
+                if (--nr == 0)
+                        break;
+                ptep++;
+                addr += PAGE_SIZE;
+                pte_val(pteval) += 1 << _PAGE_PFN_SHIFT;
+        }
 }
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 static inline void pte_clear(struct mm_struct *mm,
         unsigned long addr, pte_t *ptep)

diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index fcd6145fbead..e36a851e5788 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -81,16 +81,9 @@ void flush_icache_mm(struct mm_struct *mm, bool local)
 #ifdef CONFIG_MMU
 void flush_icache_pte(pte_t pte)
 {
-        struct page *page = pte_page(pte);
+        struct folio *folio = page_folio(pte_page(pte));

-        /*
-         * HugeTLB pages are always fully mapped, so only setting head page's
-         * PG_dcache_clean flag is enough.
-         */
-        if (PageHuge(page))
-                page = compound_head(page);
-
-        if (!test_bit(PG_dcache_clean, &page->flags)) {
+        if (!test_bit(PG_dcache_clean, &folio->flags)) {
                 flush_icache_all();
-                set_bit(PG_dcache_clean, &page->flags);
+                set_bit(PG_dcache_clean, &folio->flags);
         }

From patchwork Tue Feb 28 21:37:24 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155244
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc:
 "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, linux-s390@vger.kernel.org
Subject: [PATCH v3 21/34] s390: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:24 +0000
Message-Id: <20230228213738.272178-22-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes() and update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Alexander Gordeev
Cc: Christian Borntraeger
Cc: Sven Schnelle
Cc: linux-s390@vger.kernel.org
Reviewed-by: Gerald Schaefer
---
 arch/s390/include/asm/pgtable.h | 34 ++++++++++++++++++++++++---------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 2c70b4d1263d..46bf475116f1 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -50,6 +50,7 @@ void arch_report_meminfo(struct seq_file *m);
  * tables contain all the necessary information.
  */
 #define update_mmu_cache(vma, address, ptep)            do { } while (0)
+#define update_mmu_cache_range(vma, addr, ptep, nr)     do { } while (0)
 #define update_mmu_cache_pmd(vma, address, ptep)        do { } while (0)

 /*
@@ -1317,21 +1318,36 @@ pgprot_t pgprot_writecombine(pgprot_t prot);
 pgprot_t pgprot_writethrough(pgprot_t prot);

 /*
- * Certain architectures need to do special things when PTEs
- * within a page table are directly modified.  Thus, the following
- * hook is made available.
+ * Set multiple PTEs to consecutive pages with a single call.  All PTEs
+ * are within the same folio, PMD and VMA.
  */
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-                              pte_t *ptep, pte_t entry)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+                            pte_t *ptep, pte_t entry, unsigned int nr)
 {
         if (pte_present(entry))
                 entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
-        if (mm_has_pgste(mm))
-                ptep_set_pte_at(mm, addr, ptep, entry);
-        else
-                set_pte(ptep, entry);
+        if (mm_has_pgste(mm)) {
+                for (;;) {
+                        ptep_set_pte_at(mm, addr, ptep, entry);
+                        if (--nr == 0)
+                                break;
+                        ptep++;
+                        entry = __pte(pte_val(entry) + PAGE_SIZE);
+                        addr += PAGE_SIZE;
+                }
+        } else {
+                for (;;) {
+                        set_pte(ptep, entry);
+                        if (--nr == 0)
+                                break;
+                        ptep++;
+                        entry = __pte(pte_val(entry) + PAGE_SIZE);
+                }
+        }
 }
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 /*
  * Conversion functions: convert a page and protection to a page entry,
  * and a page entry and page directory to the page they refer to.

From patchwork Tue Feb 28 21:37:25 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155248
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz, linux-sh@vger.kernel.org
Subject: [PATCH v3 22/34] superh: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:25 +0000
Message-Id: <20230228213738.272178-23-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages(). Change the PG_dcache_clean flag from being
per-page to per-folio. Flush the entire folio containing the pages in
flush_icache_pages() for ease of implementation.
Signed-off-by: Matthew Wilcox (Oracle) Cc: Yoshinori Sato Cc: Rich Felker Cc: John Paul Adrian Glaubitz Cc: linux-sh@vger.kernel.org --- arch/sh/include/asm/cacheflush.h | 21 ++++++++----- arch/sh/include/asm/pgtable.h | 6 ++-- arch/sh/include/asm/pgtable_32.h | 16 ++++++++-- arch/sh/mm/cache-j2.c | 4 +-- arch/sh/mm/cache-sh4.c | 26 ++++++++++----- arch/sh/mm/cache-sh7705.c | 26 +++++++++------ arch/sh/mm/cache.c | 54 ++++++++++++++++++-------------- arch/sh/mm/kmap.c | 3 +- 8 files changed, 101 insertions(+), 55 deletions(-) diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h index 481a664287e2..9fceef6f3e00 100644 --- a/arch/sh/include/asm/cacheflush.h +++ b/arch/sh/include/asm/cacheflush.h @@ -13,9 +13,9 @@ * - flush_cache_page(mm, vmaddr, pfn) flushes a single page * - flush_cache_range(vma, start, end) flushes a range of pages * - * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache + * - flush_dcache_folio(folio) flushes(wback&invalidates) a folio for dcache * - flush_icache_range(start, end) flushes(invalidates) a range for icache - * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache + * - flush_icache_pages(vma, pg, nr) flushes(invalidates) pages for icache * - flush_cache_sigtramp(vaddr) flushes the signal trampoline */ extern void (*local_flush_cache_all)(void *args); @@ -23,9 +23,9 @@ extern void (*local_flush_cache_mm)(void *args); extern void (*local_flush_cache_dup_mm)(void *args); extern void (*local_flush_cache_page)(void *args); extern void (*local_flush_cache_range)(void *args); -extern void (*local_flush_dcache_page)(void *args); +extern void (*local_flush_dcache_folio)(void *args); extern void (*local_flush_icache_range)(void *args); -extern void (*local_flush_icache_page)(void *args); +extern void (*local_flush_icache_folio)(void *args); extern void (*local_flush_cache_sigtramp)(void *args); static inline void cache_noop(void *args) { } @@ -42,11 +42,18 @@ extern void 
flush_cache_page(struct vm_area_struct *vma, extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} + extern void flush_icache_range(unsigned long start, unsigned long end); #define flush_icache_user_range flush_icache_range -extern void flush_icache_page(struct vm_area_struct *vma, - struct page *page); +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr); +#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1) extern void flush_cache_sigtramp(unsigned long address); struct flusher_data { diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h index 3ce30becf6df..1a8fdc3bc363 100644 --- a/arch/sh/include/asm/pgtable.h +++ b/arch/sh/include/asm/pgtable.h @@ -102,13 +102,15 @@ extern void __update_cache(struct vm_area_struct *vma, extern void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte); -static inline void -update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +static inline void update_mmu_cache_range(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int nr) { pte_t pte = *ptep; __update_cache(vma, address, pte); __update_tlb(vma, address, pte); } +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; extern void paging_init(void); diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h index 21952b094650..03ba1834e126 100644 --- a/arch/sh/include/asm/pgtable_32.h +++ b/arch/sh/include/asm/pgtable_32.h @@ -307,7 +307,19 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define 
set_pte(pteptr, pteval) (*(pteptr) = pteval) #endif -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte = __pte(pte_val(pte) + PAGE_SIZE); + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) /* * (pmds are folded into pgds so this doesn't get actually called, @@ -323,7 +335,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte) #define pte_none(x) (!pte_val(x)) #define pte_present(x) ((x).pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE)) -#define pte_clear(mm,addr,xp) do { set_pte_at(mm, addr, xp, __pte(0)); } while (0) +#define pte_clear(mm, addr, ptep) set_pte(ptep, __pte(0)) #define pmd_none(x) (!pmd_val(x)) #define pmd_present(x) (pmd_val(x)) diff --git a/arch/sh/mm/cache-j2.c b/arch/sh/mm/cache-j2.c index f277862a11f5..9ac960214380 100644 --- a/arch/sh/mm/cache-j2.c +++ b/arch/sh/mm/cache-j2.c @@ -55,9 +55,9 @@ void __init j2_cache_init(void) local_flush_cache_dup_mm = j2_flush_both; local_flush_cache_page = j2_flush_both; local_flush_cache_range = j2_flush_both; - local_flush_dcache_page = j2_flush_dcache; + local_flush_dcache_folio = j2_flush_dcache; local_flush_icache_range = j2_flush_icache; - local_flush_icache_page = j2_flush_icache; + local_flush_icache_folio = j2_flush_icache; local_flush_cache_sigtramp = j2_flush_icache; pr_info("Initial J2 CCR is %.8x\n", __raw_readl(j2_ccr_base)); diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c index 72c2e1b46c08..862046f26981 100644 --- a/arch/sh/mm/cache-sh4.c +++ b/arch/sh/mm/cache-sh4.c @@ -107,19 +107,29 @@ static inline void flush_cache_one(unsigned long start, unsigned long phys) * Write back & invalidate the D-cache of the page. 
* (To avoid "alias" issues) */ -static void sh4_flush_dcache_page(void *arg) +static void sh4_flush_dcache_folio(void *arg) { - struct page *page = arg; - unsigned long addr = (unsigned long)page_address(page); + struct folio *folio = arg; #ifndef CONFIG_SMP - struct address_space *mapping = page_mapping_file(page); + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); else #endif - flush_cache_one(CACHE_OC_ADDRESS_ARRAY | - (addr & shm_align_mask), page_to_phys(page)); + { + unsigned long pfn = folio_pfn(folio); + unsigned long addr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + flush_cache_one(CACHE_OC_ADDRESS_ARRAY | + (addr & shm_align_mask), + pfn * PAGE_SIZE); + addr += PAGE_SIZE; + pfn++; + } + } wmb(); } @@ -379,7 +389,7 @@ void __init sh4_cache_init(void) __raw_readl(CCN_PRR)); local_flush_icache_range = sh4_flush_icache_range; - local_flush_dcache_page = sh4_flush_dcache_page; + local_flush_dcache_folio = sh4_flush_dcache_folio; local_flush_cache_all = sh4_flush_cache_all; local_flush_cache_mm = sh4_flush_cache_mm; local_flush_cache_dup_mm = sh4_flush_cache_mm; diff --git a/arch/sh/mm/cache-sh7705.c b/arch/sh/mm/cache-sh7705.c index 9b63a53a5e46..b509a407588f 100644 --- a/arch/sh/mm/cache-sh7705.c +++ b/arch/sh/mm/cache-sh7705.c @@ -132,15 +132,20 @@ static void __flush_dcache_page(unsigned long phys) * Write back & invalidate the D-cache of the page. 
* (To avoid "alias" issues) */ -static void sh7705_flush_dcache_page(void *arg) +static void sh7705_flush_dcache_folio(void *arg) { - struct page *page = arg; - struct address_space *mapping = page_mapping_file(page); + struct folio *folio = arg; + struct address_space *mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) - clear_bit(PG_dcache_clean, &page->flags); - else - __flush_dcache_page(__pa(page_address(page))); + clear_bit(PG_dcache_clean, &folio->flags); + else { + unsigned long pfn = folio_pfn(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_dcache_page((pfn + i) * PAGE_SIZE); + } } static void sh7705_flush_cache_all(void *args) @@ -176,19 +181,20 @@ static void sh7705_flush_cache_page(void *args) * Not entirely sure why this is necessary on SH3 with 32K cache but * without it we get occasional "Memory fault" when loading a program. */ -static void sh7705_flush_icache_page(void *page) +static void sh7705_flush_icache_folio(void *arg) { - __flush_purge_region(page_address(page), PAGE_SIZE); + struct folio *folio = arg; + __flush_purge_region(folio_address(folio), folio_size(folio)); } void __init sh7705_cache_init(void) { local_flush_icache_range = sh7705_flush_icache_range; - local_flush_dcache_page = sh7705_flush_dcache_page; + local_flush_dcache_folio = sh7705_flush_dcache_folio; local_flush_cache_all = sh7705_flush_cache_all; local_flush_cache_mm = sh7705_flush_cache_all; local_flush_cache_dup_mm = sh7705_flush_cache_all; local_flush_cache_range = sh7705_flush_cache_all; local_flush_cache_page = sh7705_flush_cache_page; - local_flush_icache_page = sh7705_flush_icache_page; + local_flush_icache_folio = sh7705_flush_icache_folio; } diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c index 3aef78ceb820..93fc5fb8ec1c 100644 --- a/arch/sh/mm/cache.c +++ b/arch/sh/mm/cache.c @@ -20,9 +20,9 @@ void (*local_flush_cache_mm)(void *args) = cache_noop; void (*local_flush_cache_dup_mm)(void 
*args) = cache_noop; void (*local_flush_cache_page)(void *args) = cache_noop; void (*local_flush_cache_range)(void *args) = cache_noop; -void (*local_flush_dcache_page)(void *args) = cache_noop; +void (*local_flush_dcache_folio)(void *args) = cache_noop; void (*local_flush_icache_range)(void *args) = cache_noop; -void (*local_flush_icache_page)(void *args) = cache_noop; +void (*local_flush_icache_folio)(void *args) = cache_noop; void (*local_flush_cache_sigtramp)(void *args) = cache_noop; void (*__flush_wback_region)(void *start, int size); @@ -61,15 +61,17 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + struct folio *folio = page_folio(page); + + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(vto, src, len); kunmap_coherent(vto); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } if (vma->vm_flags & VM_EXEC) @@ -80,27 +82,30 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page, unsigned long vaddr, void *dst, const void *src, unsigned long len) { + struct folio *folio = page_folio(page); + if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + test_bit(PG_dcache_clean, &folio->flags)) { void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK); memcpy(dst, vfrom, len); kunmap_coherent(vfrom); } else { memcpy(dst, src, len); if (boot_cpu_data.dcache.n_aliases) - clear_bit(PG_dcache_clean, &page->flags); + clear_bit(PG_dcache_clean, &folio->flags); } } void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr, struct vm_area_struct 
*vma) { + struct folio *src = page_folio(from); void *vfrom, *vto; vto = kmap_atomic(to); - if (boot_cpu_data.dcache.n_aliases && page_mapcount(from) && - test_bit(PG_dcache_clean, &from->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(src) && + test_bit(PG_dcache_clean, &src->flags)) { vfrom = kmap_coherent(from, vaddr); copy_page(vto, vfrom); kunmap_coherent(vfrom); @@ -136,35 +141,37 @@ EXPORT_SYMBOL(clear_user_highpage); void __update_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte) { - struct page *page; unsigned long pfn = pte_pfn(pte); if (!boot_cpu_data.dcache.n_aliases) return; - page = pfn_to_page(pfn); if (pfn_valid(pfn)) { - int dirty = !test_and_set_bit(PG_dcache_clean, &page->flags); + struct folio *folio = page_folio(pfn_to_page(pfn)); + int dirty = !test_and_set_bit(PG_dcache_clean, &folio->flags); if (dirty) - __flush_purge_region(page_address(page), PAGE_SIZE); + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } void __flush_anon_page(struct page *page, unsigned long vmaddr) { + struct folio *folio = page_folio(page); unsigned long addr = (unsigned long) page_address(page); if (pages_do_alias(addr, vmaddr)) { - if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) && - test_bit(PG_dcache_clean, &page->flags)) { + if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) && + test_bit(PG_dcache_clean, &folio->flags)) { void *kaddr; kaddr = kmap_coherent(page, vmaddr); /* XXX.. 
For now kunmap_coherent() does a purge */ /* __flush_purge_region((void *)kaddr, PAGE_SIZE); */ kunmap_coherent(kaddr); - } else - __flush_purge_region((void *)addr, PAGE_SIZE); + } else + __flush_purge_region(folio_address(folio), + folio_size(folio)); } } @@ -215,11 +222,11 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, } EXPORT_SYMBOL(flush_cache_range); -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { - cacheop_on_each_cpu(local_flush_dcache_page, page, 1); + cacheop_on_each_cpu(local_flush_dcache_folio, folio, 1); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void flush_icache_range(unsigned long start, unsigned long end) { @@ -233,10 +240,11 @@ void flush_icache_range(unsigned long start, unsigned long end) } EXPORT_SYMBOL(flush_icache_range); -void flush_icache_page(struct vm_area_struct *vma, struct page *page) +void flush_icache_pages(struct vm_area_struct *vma, struct page *page, + unsigned int nr) { - /* Nothing uses the VMA, so just pass the struct page along */ - cacheop_on_each_cpu(local_flush_icache_page, page, 1); + /* Nothing uses the VMA, so just pass the folio along */ + cacheop_on_each_cpu(local_flush_icache_folio, page_folio(page), 1); } void flush_cache_sigtramp(unsigned long address) diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c index 73fd7cc99430..fa50e8f6e7a9 100644 --- a/arch/sh/mm/kmap.c +++ b/arch/sh/mm/kmap.c @@ -27,10 +27,11 @@ void __init kmap_coherent_init(void) void *kmap_coherent(struct page *page, unsigned long addr) { + struct folio *folio = page_folio(page); enum fixed_addresses idx; unsigned long vaddr; - BUG_ON(!test_bit(PG_dcache_clean, &page->flags)); + BUG_ON(!test_bit(PG_dcache_clean, &folio->flags)); preempt_disable(); pagefault_disable(); From patchwork Tue Feb 28 21:37:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, "David S. Miller", sparclinux@vger.kernel.org Subject: [PATCH v3 23/34] sparc32: Implement the new page table range API Date: Tue, 28 Feb 2023 21:37:26 +0000 Message-Id: <20230228213738.272178-24-willy@infradead.org> In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Signed-off-by: Matthew Wilcox (Oracle) Cc: "David S. Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_32.h | 9 +++++++-- arch/sparc/include/asm/pgtable_32.h | 15 ++++++++++++++- arch/sparc/mm/init_32.c | 13 +++++++++++-- 3 files changed, 32 insertions(+), 5 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h index adb6991d0455..8dba35d63328 100644 --- a/arch/sparc/include/asm/cacheflush_32.h +++ b/arch/sparc/include/asm/cacheflush_32.h @@ -16,6 +16,7 @@ sparc32_cachetlb_ops->cache_page(vma, addr) #define flush_icache_range(start, end) do { } while (0) #define flush_icache_page(vma, pg) do { } while (0) +#define flush_icache_pages(vma, pg, nr) do { } while (0) #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ @@ -35,11 +36,15 @@ #define flush_page_for_dma(addr) \ sparc32_cachetlb_ops->page_for_dma(addr) -struct page; void sparc_flush_page_to_ram(struct page *page); +void sparc_flush_folio_to_ram(struct folio *folio); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -#define flush_dcache_page(page) sparc_flush_page_to_ram(page) +#define flush_dcache_folio(folio) sparc_flush_folio_to_ram(folio) +static inline void flush_dcache_page(struct page *page) +{ + flush_dcache_folio(page_folio(page)); +} #define flush_dcache_mmap_lock(mapping) do { } while (0) #define flush_dcache_mmap_unlock(mapping) do { } while (0) diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h index d4330e3c57a6..47ae55ea1837 100644 --- a/arch/sparc/include/asm/pgtable_32.h +++ b/arch/sparc/include/asm/pgtable_32.h @@ -101,7 +101,19 @@ static inline void set_pte(pte_t *ptep, pte_t pteval) srmmu_swap((unsigned long *)ptep, pte_val(pteval)); } -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + set_pte(ptep, pte); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) static inline int srmmu_device_memory(unsigned long x) { @@ -318,6 +330,7 @@ void mmu_info(struct seq_file *m); #define FAULT_CODE_USER 0x4 #define update_mmu_cache(vma, address, ptep) do { } while (0) +#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0) void srmmu_mapiorange(unsigned int bus, unsigned long xpa, unsigned long xva, unsigned int len); diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index 9c0ea457bdf0..d96a14ffceeb 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -297,11 +297,20 @@ void sparc_flush_page_to_ram(struct page *page) { unsigned long vaddr = (unsigned long)page_address(page); - if (vaddr) - __flush_page_to_ram(vaddr); + __flush_page_to_ram(vaddr); } EXPORT_SYMBOL(sparc_flush_page_to_ram); +void sparc_flush_folio_to_ram(struct folio *folio) +{ + unsigned long vaddr = (unsigned long)folio_address(folio); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) + __flush_page_to_ram(vaddr + i * PAGE_SIZE); +} +EXPORT_SYMBOL(sparc_flush_folio_to_ram); + static const pgprot_t protection_map[16] = { [VM_NONE] = PAGE_NONE, [VM_READ] = PAGE_READONLY,
From patchwork Tue Feb 28 21:37:27 2023
From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, "David S. Miller", sparclinux@vger.kernel.org Subject: [PATCH v3 24/34] sparc64: Implement the new page table range API Date: Tue, 28 Feb 2023 21:37:27 +0000 Message-Id: <20230228213738.272178-25-willy@infradead.org> In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org>
Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Convert the PG_dcache_dirty flag from being per-page to per-folio. Signed-off-by: Matthew Wilcox (Oracle) Cc: "David S. Miller" Cc: sparclinux@vger.kernel.org --- arch/sparc/include/asm/cacheflush_64.h | 18 ++++-- arch/sparc/include/asm/pgtable_64.h | 25 +++++++-- arch/sparc/kernel/smp_64.c | 56 +++++++++++------- arch/sparc/mm/init_64.c | 78 +++++++++++++++----------- arch/sparc/mm/tlb.c | 5 +- 5 files changed, 117 insertions(+), 65 deletions(-) diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h index b9341836597e..a9a719f04d06 100644 --- a/arch/sparc/include/asm/cacheflush_64.h +++ b/arch/sparc/include/asm/cacheflush_64.h @@ -35,20 +35,26 @@ void flush_icache_range(unsigned long start, unsigned long end); void __flush_icache_page(unsigned long); void __flush_dcache_page(void *addr, int flush_icache); -void flush_dcache_page_impl(struct page *page); +void flush_dcache_folio_impl(struct folio *folio); #ifdef CONFIG_SMP -void smp_flush_dcache_page_impl(struct page *page, int cpu); -void flush_dcache_page_all(struct mm_struct *mm, struct page *page); +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu); +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio); #else -#define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) -#define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) +#define smp_flush_dcache_folio_impl(folio, cpu) flush_dcache_folio_impl(folio) +#define flush_dcache_folio_all(mm, folio) flush_dcache_folio_impl(folio) #endif void __flush_dcache_range(unsigned long start, unsigned long end); #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1 -void flush_dcache_page(struct page *page); +void flush_dcache_folio(struct folio *folio); +#define flush_dcache_folio flush_dcache_folio +static inline void flush_dcache_page(struct page *page) +{ +
flush_dcache_folio(page_folio(page)); +} #define flush_icache_page(vma, pg) do { } while(0) +#define flush_icache_pages(vma, pg, nr) do { } while(0) void flush_ptrace_access(struct vm_area_struct *, struct page *, unsigned long uaddr, void *kaddr, diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 2dc8d4641734..d5c0088e0c6a 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -911,8 +911,20 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm, PAGE_SHIFT); } -#define set_pte_at(mm,addr,ptep,pte) \ - __set_pte_at((mm), (addr), (ptep), (pte), 0) +static inline void set_ptes(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte, unsigned int nr) +{ + for (;;) { + __set_pte_at(mm, addr, ptep, pte, 0); + if (--nr == 0) + break; + ptep++; + pte_val(pte) += PAGE_SIZE; + addr += PAGE_SIZE; + } +} + +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1) #define pte_clear(mm,addr,ptep) \ set_pte_at((mm), (addr), (ptep), __pte(0UL)) @@ -931,8 +943,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr, \ if (pfn_valid(this_pfn) && \ (((old_addr) ^ (new_addr)) & (1 << 13))) \ - flush_dcache_page_all(current->mm, \ - pfn_to_page(this_pfn)); \ + flush_dcache_folio_all(current->mm, \ + page_folio(pfn_to_page(this_pfn))); \ } \ newpte; \ }) @@ -947,7 +959,10 @@ struct seq_file; void mmu_info(struct seq_file *); struct vm_area_struct; -void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); +void update_mmu_cache_range(struct vm_area_struct *, unsigned long addr, + pte_t *ptep, unsigned int nr); +#define update_mmu_cache(vma, addr, ptep) \ + update_mmu_cache_range(vma, addr, ptep, 1) #ifdef CONFIG_TRANSPARENT_HUGEPAGE void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd); diff --git a/arch/sparc/kernel/smp_64.c
b/arch/sparc/kernel/smp_64.c index a55295d1b924..90ef8677ac89 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -921,20 +921,26 @@ extern unsigned long xcall_flush_dcache_page_cheetah; #endif extern unsigned long xcall_flush_dcache_page_spitfire; -static inline void __local_flush_dcache_page(struct page *page) +static inline void __local_flush_dcache_folio(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + #ifdef DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } -void smp_flush_dcache_page_impl(struct page *page, int cpu) +void smp_flush_dcache_folio_impl(struct folio *folio, int cpu) { int this_cpu; @@ -948,14 +954,14 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) this_cpu = get_cpu(); if (cpu == this_cpu) { - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); } else if (cpu_online(cpu)) { - void *pg_addr = page_address(page); + void *pg_addr = folio_address(folio); u64 data0 = 0; if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -963,18 +969,23 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpumask_of(cpu)); + unsigned int i, nr = folio_nr_pages(folio);
+ + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpumask_of(cpu)); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } } put_cpu(); } -void flush_dcache_page_all(struct mm_struct *mm, struct page *page) +void flush_dcache_folio_all(struct mm_struct *mm, struct folio *folio) { void *pg_addr; u64 data0; @@ -988,10 +999,10 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) atomic_inc(&dcpage_flushes); #endif data0 = 0; - pg_addr = page_address(page); + pg_addr = folio_address(folio); if (tlb_type == spitfire) { data0 = ((u64)&xcall_flush_dcache_page_spitfire); - if (page_mapping_file(page) != NULL) + if (folio_flush_mapping(folio) != NULL) data0 |= ((u64)1 << 32); } else if (tlb_type == cheetah || tlb_type == cheetah_plus) { #ifdef DCACHE_ALIASING_POSSIBLE @@ -999,13 +1010,18 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page) #endif } if (data0) { - xcall_deliver(data0, __pa(pg_addr), - (u64) pg_addr, cpu_online_mask); + unsigned int i, nr = folio_nr_pages(folio); + + for (i = 0; i < nr; i++) { + xcall_deliver(data0, __pa(pg_addr), + (u64) pg_addr, cpu_online_mask); #ifdef CONFIG_DEBUG_DCFLUSH - atomic_inc(&dcpage_flushes_xcall); + atomic_inc(&dcpage_flushes_xcall); #endif + pg_addr += PAGE_SIZE; + } } - __local_flush_dcache_page(page); + __local_flush_dcache_folio(folio); preempt_enable(); } diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 04f9db0c3111..ab9aacbaf43c 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -195,21 +195,26 @@ atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0); #endif #endif -inline void flush_dcache_page_impl(struct page *page) +inline void flush_dcache_folio_impl(struct folio *folio) { + unsigned int i, nr = folio_nr_pages(folio); + BUG_ON(tlb_type == hypervisor); #ifdef CONFIG_DEBUG_DCFLUSH atomic_inc(&dcpage_flushes); #endif #ifdef 
DCACHE_ALIASING_POSSIBLE - __flush_dcache_page(page_address(page), - ((tlb_type == spitfire) && - page_mapping_file(page) != NULL)); + for (i = 0; i < nr; i++) + __flush_dcache_page(folio_address(folio) + i * PAGE_SIZE, + ((tlb_type == spitfire) && + folio_flush_mapping(folio) != NULL)); #else - if (page_mapping_file(page) != NULL && - tlb_type == spitfire) - __flush_icache_page(__pa(page_address(page))); + if (folio_flush_mapping(folio) != NULL && + tlb_type == spitfire) { + unsigned long pfn = folio_pfn(folio); + for (i = 0; i < nr; i++) + __flush_icache_page((pfn + i) * PAGE_SIZE); + } #endif } @@ -218,10 +223,10 @@ inline void flush_dcache_page_impl(struct page *page) #define PG_dcache_cpu_mask \ ((1UL<<ilog2(roundup_pow_of_two(NR_CPUS)))-1UL) -#define dcache_dirty_cpu(page) \ (((page)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) +#define dcache_dirty_cpu(folio) \ (((folio)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask) -static inline void set_dcache_dirty(struct page *page, int this_cpu) +static inline void set_dcache_dirty(struct folio *folio, int this_cpu) { unsigned long mask = this_cpu; unsigned long non_cpu_bits; @@ -238,11 +243,11 @@ static inline void set_dcache_dirty(struct page *page, int this_cpu) "bne,pn %%xcc, 1b\n\t" " nop" : /* no outputs */ - : "r" (mask), "r" (non_cpu_bits), "r" (&page->flags) + : "r" (mask), "r" (non_cpu_bits), "r" (&folio->flags) : "g1", "g7"); } -static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) +static inline void clear_dcache_dirty_cpu(struct folio *folio, unsigned long cpu) { unsigned long mask = (1UL << PG_dcache_dirty); @@ -260,7 +265,7 @@ static inline void clear_dcache_dirty_cpu(struct page *page, unsigned long cpu) " nop\n" "2:" : /* no outputs */ - : "r" (cpu), "r" (mask), "r" (&page->flags), + : "r" (cpu), "r" (mask), "r" (&folio->flags), "i" (PG_dcache_cpu_mask), "i" (PG_dcache_cpu_shift) : "g1", "g7"); @@ -284,9 +289,10 @@ static void flush_dcache(unsigned long pfn) page = pfn_to_page(pfn); if (page) { + struct folio *folio = page_folio(page); unsigned long pg_flags; - pg_flags =
page->flags; + pg_flags = folio->flags; if (pg_flags & (1UL << PG_dcache_dirty)) { int cpu = ((pg_flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask); @@ -296,11 +302,11 @@ static void flush_dcache(unsigned long pfn) * in the SMP case. */ if (cpu == this_cpu) - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); else - smp_flush_dcache_page_impl(page, cpu); + smp_flush_dcache_folio_impl(folio, cpu); - clear_dcache_dirty_cpu(page, cpu); + clear_dcache_dirty_cpu(folio, cpu); put_cpu(); } @@ -388,12 +394,14 @@ bool __init arch_hugetlb_valid_size(unsigned long size) } #endif /* CONFIG_HUGETLB_PAGE */ -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) +void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep, unsigned int nr) { struct mm_struct *mm; unsigned long flags; bool is_huge_tsb; pte_t pte = *ptep; + unsigned int i; if (tlb_type != hypervisor) { unsigned long pfn = pte_pfn(pte); @@ -440,15 +448,21 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t * } } #endif - if (!is_huge_tsb) - __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, - address, pte_val(pte)); + if (!is_huge_tsb) { + for (i = 0; i < nr; i++) { + __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT, + address, pte_val(pte)); + address += PAGE_SIZE; + pte_val(pte) += PAGE_SIZE; + } + } spin_unlock_irqrestore(&mm->context.lock, flags); } -void flush_dcache_page(struct page *page) +void flush_dcache_folio(struct folio *folio) { + unsigned long pfn = folio_pfn(folio); struct address_space *mapping; int this_cpu; @@ -459,35 +473,35 @@ void flush_dcache_page(struct page *page) * is merely the zero page. The 'bigcore' testcase in GDB * causes this case to run millions of times. 
*/ - if (page == ZERO_PAGE(0)) + if (is_zero_pfn(pfn)) return; this_cpu = get_cpu(); - mapping = page_mapping_file(page); + mapping = folio_flush_mapping(folio); if (mapping && !mapping_mapped(mapping)) { - int dirty = test_bit(PG_dcache_dirty, &page->flags); + bool dirty = test_bit(PG_dcache_dirty, &folio->flags); if (dirty) { - int dirty_cpu = dcache_dirty_cpu(page); + int dirty_cpu = dcache_dirty_cpu(folio); if (dirty_cpu == this_cpu) goto out; - smp_flush_dcache_page_impl(page, dirty_cpu); + smp_flush_dcache_folio_impl(folio, dirty_cpu); } - set_dcache_dirty(page, this_cpu); + set_dcache_dirty(folio, this_cpu); } else { /* We could delay the flush for the !page_mapping * case too. But that case is for exec env/arg * pages and those are %99 certainly going to get * faulted into the tlb (and thus flushed) anyways. */ - flush_dcache_page_impl(page); + flush_dcache_folio_impl(folio); } out: put_cpu(); } -EXPORT_SYMBOL(flush_dcache_page); +EXPORT_SYMBOL(flush_dcache_folio); void __kprobes flush_icache_range(unsigned long start, unsigned long end) { @@ -2280,10 +2294,10 @@ void __init paging_init(void) setup_page_offset(); /* These build time checkes make sure that the dcache_dirty_cpu() - * page->flags usage will work. + * folio->flags usage will work. * * When a page gets marked as dcache-dirty, we store the - * cpu number starting at bit 32 in the page->flags. Also, + * cpu number starting at bit 32 in the folio->flags. Also, * functions like clear_dcache_dirty_cpu use the cpu mask * in 13-bit signed-immediate instruction fields. 
*/ diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c index 9a725547578e..3fa6a070912d 100644 --- a/arch/sparc/mm/tlb.c +++ b/arch/sparc/mm/tlb.c @@ -118,6 +118,7 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, unsigned long paddr, pfn = pte_pfn(orig); struct address_space *mapping; struct page *page; + struct folio *folio; if (!pfn_valid(pfn)) goto no_cache_flush; @@ -127,13 +128,13 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr, goto no_cache_flush; /* A real file page? */ - mapping = page_mapping_file(page); + folio = page_folio(page); + mapping = folio_flush_mapping(folio); if (!mapping) goto no_cache_flush; paddr = (unsigned long) page_address(page); if ((paddr ^ vaddr) & (1 << 13)) - flush_dcache_page_all(mm, page); + flush_dcache_folio_all(mm, folio); } no_cache_flush:
From patchwork Tue Feb 28 21:37:28 2023
b=nim/FfuwphiszxzJgRhRboHqcH vQzqCBkhh0tvLmMfPiIWWtFpSXipbb1iphhpFKTWFPAu/hVwVZoPc6ZHlCj2/jQKdF91GmZJ9na7T coRSAXj3oPuim8mixy7pm83Peo0keEkBTn2/bmTHwumG4lAQepukA68VsAXvrA39a+I9BJTE7Hs+Z 2u1V+pOEg9omjGjJIo7CRah2Vmhk48cMhBKDhpBAzTccVvBia7mHwuEn9QwqlNekN5TbgYvLfWOcz 1PHel/gzKJuYCRqdbehQTZEfjReDu/fMv2F9CyrAVcSivKYRhU/co/gP5obxRB7MApo8aNtsWl1xA sOVMrNJA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pX7fK-0018qG-5T; Tue, 28 Feb 2023 21:37:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org, Richard Weinberger , Anton Ivanov , Johannes Berg , linux-um@lists.infradead.org Subject: [PATCH v3 25/34] um: Implement the new page table range API Date: Tue, 28 Feb 2023 21:37:28 +0000 Message-Id: <20230228213738.272178-26-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 49663100009 X-Stat-Signature: kzjwepxgbeuhz3rarfnzugyj9i6it4wt X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1677620267-723661 X-HE-Meta: 
U2FsdGVkX185OXFyBQWUeQ1H4a5Uu+V19wJtZmvhslc0ceGWPAR3OiycN6gRUZiij6gJ2fEtG+buKvO1kVNB04TZto/BbnTN7GfEh34IQ0Tez+rMAEq8D+em7tZLByo4U74oZJL1FZk0S0mR/j4Vk1ngKwje6U5cQxuFSscQwPdTOcgImWr+4CpaWDcTe7om4GaPVKRmBmJkkkmBFgYhJvDo6pTKGlUflXu0XDTGYez1EQN1i9vXcsn2PC0Yt3oPhRP8YKx4rfdSWhpI3l2DTSGE1Lvog9kI4MhKRrhG+sAyQLkLOb1NJgbqleUKrhmHtVVsf6kYH55/fgRkvKSV1lsQ5zJwR60waLuG9r4qCKypt31RAItJAg9+a5WCuqCeZIEH7bTYownIL/VQ1TTUwiVjYMuCXyqGdvSST7UcVbHggIoZpLnpoRQ4JpspLjvfnTv20cBZNm7rvy20VSitQNB2Qz62tZleO1+8L0yU+lBP0BDtFS76D8aI0RYIqv1CING/pyaQStNoHwO2q/A8p2fPk7ohIuaoogQaEop3lF0d4z5pDN/rHc0Q2dRqnQT7T3Ut/FyaIm9ObMvbG2oIwIkCsz0+mXjbmRKPhejjQBeWXEg9YIx1Hkuhbjty70tE78+peRgpuwWVVCwNa8z80QuQqkE2DzbQeWdUgUPtva7Qh9f2B7HL8vuV0c8Q7vLQokIwcCmHPqeGlGnqKyZQb7OgSi4zLMd6/SvbvYg5Qz6d+mOtcExIlzDaVrJGr2Powk3Mec3bkilx4ZvrhhJaBW8ZIoBkBI/VICnO6oPDkaWs9w9VoLZwtVoEOJoHb3zBPGA3y5BhTd+joqEVYVg3/gbnVQWbzKp/gM9XkS5oYJo0jbFl8btPAhHtzk1Ta0Yznb0pBd31+g1L1fD2Y59oXyaFsjTnS6hl642CSjx9GiaP9Kpjuawja206ylcs/IH/7pxE7XfdGpxMNu/q0UQ vbufnW5D GvIoA93pM895zIQjNhghY+WZGcIi8yqRXzHVgQyU+1WK2CTsIBO6ReZmHPzAiGi7aIe2IrveA/QyNFn0xQx0+AKPA/kD5wXqWPfARyUA0SKtGPrYEd3cffQ/zdT8T0s9WC3FSKFaPIpaegcNVbz4MWvb9oTRogzqIF18YaZ5Zg1kLN5mREgvlZiTplvzHNcCpQhp8R081/wYjNY0KmT7/w/WgVe14eRfVVd+wR4c6UpGt3CzjUJl8va01cczI3caGmOSyziuKuXAKpDYUamfxPrpm7S02zYtk9tneobFXbCWolOv2RJuXKnEmpOsql/9CuxHU X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add set_ptes() and update_mmu_cache_range(). 
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Johannes Berg
Cc: linux-um@lists.infradead.org
---
 arch/um/include/asm/pgtable.h | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h
index a70d1618eb35..ca78c90ae74f 100644
--- a/arch/um/include/asm/pgtable.h
+++ b/arch/um/include/asm/pgtable.h
@@ -242,12 +242,20 @@ static inline void set_pte(pte_t *pteptr, pte_t pteval)
 	if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr);
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *pteptr, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	set_pte(pteptr, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 #define __HAVE_ARCH_PTE_SAME
 static inline int pte_same(pte_t pte_a, pte_t pte_b)
 {
@@ -290,6 +298,7 @@ struct mm_struct;
 extern pte_t *virt_to_pte(struct mm_struct *mm, unsigned long addr);
 
 #define update_mmu_cache(vma,address,ptep) do {} while (0)
+#define update_mmu_cache_range(vma, address, ptep, nr) do {} while (0)
 
 /*
  * Encode/decode swap entries and swap PTEs.
From patchwork Tue Feb 28 21:37:29 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155261
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin"
Subject: [PATCH v3 26/34] x86: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:29 +0000
Message-Id: <20230228213738.272178-27-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
Convert set_pte_at() into set_ptes() and add a noop update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Cc: "H. Peter Anvin"
---
 arch/x86/include/asm/pgtable.h | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 84be3e07b112..f424371ea143 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1019,13 +1019,22 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 	return res;
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep, pte_t pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+			    pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	page_table_check_ptes_set(mm, addr, ptep, pte, 1);
-	set_pte(ptep, pte);
+	page_table_check_ptes_set(mm, addr, ptep, pte, nr);
+
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte = __pte(pte_val(pte) + PAGE_SIZE);
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 			      pmd_t *pmdp, pmd_t pmd)
 {
@@ -1291,6 +1300,10 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *ptep)
 {
 }
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, unsigned int nr)
+{
+}
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 		unsigned long addr, pmd_t *pmd)
 {
From patchwork Tue Feb 28 21:37:30 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155239
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
    Max Filippov, linux-xtensa@linux-xtensa.org
Subject: [PATCH v3 27/34] xtensa: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:30 +0000
Message-Id: <20230228213738.272178-28-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Add set_ptes(), update_mmu_cache_range(), flush_dcache_folio() and
flush_icache_pages().
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Max Filippov
Cc: linux-xtensa@linux-xtensa.org
---
 arch/xtensa/include/asm/cacheflush.h |  9 ++-
 arch/xtensa/include/asm/pgtable.h    | 24 +++++---
 arch/xtensa/mm/cache.c               | 83 ++++++++++++++++------------
 3 files changed, 72 insertions(+), 44 deletions(-)

diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 7b4359312c25..35153f6725e4 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -119,8 +119,14 @@ void flush_cache_page(struct vm_area_struct*,
 #define flush_cache_vmap(start,end)	flush_cache_all()
 #define flush_cache_vunmap(start,end)	flush_cache_all()
 
+void flush_dcache_folio(struct folio *folio);
+#define flush_dcache_folio flush_dcache_folio
+
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-void flush_dcache_page(struct page *);
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 void local_flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
@@ -156,6 +162,7 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 
 /* This is not required, see Documentation/core-api/cachetlb.rst */
 #define	flush_icache_page(vma,page)		do { } while (0)
+#define	flush_icache_pages(vma, page, nr)	do { } while (0)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)

diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index fc7a14884c6c..293101530541 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -301,17 +301,25 @@ static inline void update_pte(pte_t *ptep, pte_t pteval)
 
 struct mm_struct;
 
-static inline void
-set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pteval)
+static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	update_pte(ptep, pteval);
+	update_pte(ptep, pte);
 }
 
-static inline void set_pte(pte_t *ptep, pte_t pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
 {
-	update_pte(ptep, pteval);
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
 }
 
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+
 static inline void
 set_pmd(pmd_t *pmdp, pmd_t pmdval)
 {
@@ -407,8 +415,10 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 
 #else
 
-extern void update_mmu_cache(struct vm_area_struct * vma,
-			     unsigned long address, pte_t *ptep);
+void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr);
+#define update_mmu_cache(vma, address, ptep) \
+	update_mmu_cache_range(vma, address, ptep, 1)
 
 typedef pte_t *pte_addr_t;

diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 19e5a478a7e8..65c0d5298041 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -121,9 +121,9 @@ EXPORT_SYMBOL(copy_user_highpage);
  *
  */
 
-void flush_dcache_page(struct page *page)
+void flush_dcache_folio(struct folio *folio)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 
 	/*
 	 * If we have a mapping but the page is not mapped to user-space
@@ -132,14 +132,14 @@ void flush_dcache_page(struct page *page)
 	 */
 
 	if (mapping && !mapping_mapped(mapping)) {
-		if (!test_bit(PG_arch_1, &page->flags))
-			set_bit(PG_arch_1, &page->flags);
+		if (!test_bit(PG_arch_1, &folio->flags))
+			set_bit(PG_arch_1, &folio->flags);
 		return;
 
 	} else {
-
-		unsigned long phys = page_to_phys(page);
-		unsigned long temp = page->index << PAGE_SHIFT;
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
+		unsigned long temp = folio_pos(folio);
+		unsigned int i, nr = folio_nr_pages(folio);
 		unsigned long alias = !(DCACHE_ALIAS_EQ(temp, phys));
 		unsigned long virt;
 
@@ -154,22 +154,26 @@ void flush_dcache_page(struct page *page)
 			return;
 
 		preempt_disable();
-		virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(virt, phys);
+		for (i = 0; i < nr; i++) {
+			virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(virt, phys);
 
-		virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
+			virt = TLBTEMP_BASE_1 + (temp & DCACHE_ALIAS_MASK);
 
-		if (alias)
-			__flush_invalidate_dcache_page_alias(virt, phys);
+			if (alias)
+				__flush_invalidate_dcache_page_alias(virt, phys);
 
-		if (mapping)
-			__invalidate_icache_page_alias(virt, phys);
+			if (mapping)
+				__invalidate_icache_page_alias(virt, phys);
+			phys += PAGE_SIZE;
+			temp += PAGE_SIZE;
+		}
 		preempt_enable();
 	}
 
 	/* There shouldn't be an entry in the cache for this page anymore. */
 }
-EXPORT_SYMBOL(flush_dcache_page);
+EXPORT_SYMBOL(flush_dcache_folio);
 
 /*
  * For now, flush the whole cache. FIXME??
@@ -207,45 +211,52 @@ EXPORT_SYMBOL(local_flush_cache_page);
 
 #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
 
-void
-update_mmu_cache(struct vm_area_struct * vma, unsigned long addr, pte_t *ptep)
+void update_mmu_cache_range(struct vm_area_struct *vma, unsigned long addr,
+		pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
-	struct page *page;
+	struct folio *folio;
+	unsigned int i;
 
 	if (!pfn_valid(pfn))
 		return;
 
-	page = pfn_to_page(pfn);
+	folio = page_folio(pfn_to_page(pfn));
 
-	/* Invalidate old entry in TLBs */
-
-	flush_tlb_page(vma, addr);
+	/* Invalidate old entries in TLBs */
+	for (i = 0; i < nr; i++)
+		flush_tlb_page(vma, addr + i * PAGE_SIZE);
+	nr = folio_nr_pages(folio);
 
 #if (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-	if (!PageReserved(page) && test_bit(PG_arch_1, &page->flags)) {
-		unsigned long phys = page_to_phys(page);
+	if (!folio_test_reserved(folio) && test_bit(PG_arch_1, &folio->flags)) {
+		unsigned long phys = folio_pfn(folio) * PAGE_SIZE;
 		unsigned long tmp;
 
 		preempt_disable();
-		tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
-		__flush_invalidate_dcache_page_alias(tmp, phys);
-		__invalidate_icache_page_alias(tmp, phys);
+		for (i = 0; i < nr; i++) {
+			tmp = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			tmp = TLBTEMP_BASE_1 + (addr & DCACHE_ALIAS_MASK);
+			__flush_invalidate_dcache_page_alias(tmp, phys);
+			__invalidate_icache_page_alias(tmp, phys);
+			phys += PAGE_SIZE;
+		}
 		preempt_enable();
 
-		clear_bit(PG_arch_1, &page->flags);
+		clear_bit(PG_arch_1, &folio->flags);
 	}
 #else
-	if (!PageReserved(page) && !test_bit(PG_arch_1, &page->flags)
+	if (!folio_test_reserved(folio) && !test_bit(PG_arch_1, &folio->flags)
 	    && (vma->vm_flags & VM_EXEC) != 0) {
-		unsigned long paddr = (unsigned long)kmap_atomic(page);
-		__flush_dcache_page(paddr);
-		__invalidate_icache_page(paddr);
-		set_bit(PG_arch_1, &page->flags);
-		kunmap_atomic((void *)paddr);
+		for (i = 0; i < nr; i++) {
+			void *paddr = kmap_local_folio(folio, i * PAGE_SIZE);
+			__flush_dcache_page((unsigned long)paddr);
+			__invalidate_icache_page((unsigned long)paddr);
+			kunmap_atomic(paddr);
+		}
+		set_bit(PG_arch_1, &folio->flags);
 	}
 #endif
 }
From patchwork Tue Feb 28 21:37:31 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155240
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 28/34] mm: Remove page_mapping_file()
Date: Tue, 28 Feb 2023 21:37:31 +0000
Message-Id: <20230228213738.272178-29-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
This function has no more users.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 1b1ba3d5100d..c21b3ad1068c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -394,14 +394,6 @@ static inline struct address_space *page_file_mapping(struct page *page)
 	return folio_file_mapping(page_folio(page));
 }
 
-/*
- * For file cache pages, return the address_space, otherwise return NULL
- */
-static inline struct address_space *page_mapping_file(struct page *page)
-{
-	return folio_flush_mapping(page_folio(page));
-}
-
 /**
  * folio_inode - Get the host inode for this folio.
  * @folio: The folio.
casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf22.hostedemail.com (Postfix) with ESMTP id 35CB8C0006 for ; Tue, 28 Feb 2023 21:37:55 +0000 (UTC) Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=YjerQ6PU; dmarc=none; spf=none (imf22.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1677620276; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=4+4t1rqEiuqyoKPIj2cQSmuuKOyYNvfnS04rFEq3Bbw=; b=zXK/um4CGvxe2wLAKcvyyYUroSKdSqJnqxHOxrmF9fFFBs+0nML/r7ZHDxQgU+2bjahUbA GjcdQ/e6/BElrJRJbiFm/dhL4MMP8tAB5tdEixZK9U1IAXfl2VTnbtdDwIGTUMsNXp7l+a iixoKlDKO/30tOjLM/5cq8VCWnFGMHA= ARC-Authentication-Results: i=1; imf22.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=YjerQ6PU; dmarc=none; spf=none (imf22.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1677620276; a=rsa-sha256; cv=none; b=yxJCns7IPZ1LMu/QndCK/FikcKM+k+ViZhtxLG/XN8quYE2GAcx5b6Y0ybr/+Cv4WkN3Xh cmi5UgROX6QXCosbOBfaa59XMRZnjt/BTYEFiammJs0M7l8h93VymabCFBr0eHPy6qrURG Hl+tDLDKfZ+Yxo14qp42yluCmDlxqKE= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=4+4t1rqEiuqyoKPIj2cQSmuuKOyYNvfnS04rFEq3Bbw=; b=YjerQ6PU7LAW9NfRpaTaBoZkTX 5pRbBMRHM6R06nY/z+O+qG6+GWpR68xxpcIPvXdcUCNlAUWSWOTbNaKWLYJRhtQyT99fSTpYhNEFo 
PzNX/9hgALt9UELLjVFRwIdXhiiGp5zb3T5hzElFR/BwNho/6u2XXPW52bnSSkeszWHCb2AI18Voj S1IL6qTAZevsZ80YCiEoZq5/vnPYMimwDo7jUtkPRXXjiObPNp7PSqiJuyJr/3lTtQOtx7UjPu3rc GxtqnEDXN60JlFU4FkBAufdlkdpAjnO3+St2uny78qm+t6sI8yjFd9GLEVkPfcGV20QAo7em/xW/g j+Gms1IQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pX7fK-0018qf-S8; Tue, 28 Feb 2023 21:37:42 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org Subject: [PATCH v3 29/34] mm: Rationalise flush_icache_pages() and flush_icache_page() Date: Tue, 28 Feb 2023 21:37:32 +0000 Message-Id: <20230228213738.272178-30-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 35CB8C0006 X-Rspamd-Server: rspam09 X-Rspam-User: X-Stat-Signature: awt7x75uctyoik1fgiq98huk1rwrdu17 X-HE-Tag: 1677620275-782494 X-HE-Meta: 
U2FsdGVkX18mf7XFz0OgfEJoLVE+iPlhYX1O9PMiPecDznzJtzCY4kFOSPwvFzye2wZPmaX9kgplWkLue7mw7h2DaIYeo8SogVycuWFHHYs4M3b98HiZu+5qpkz7VWJIs4seY53e2qmhzzjR+R2C3NlRxsXAfN4Ytc5XgWBW4nma3SMgA9tGP7ybriFilTMIhxeyNFmG0XlvocaIHEmnlC+zP6T9lPWa1cr34W3LpM+D4+cTwo0ZWr7Y2xBkXvS2lZKp2f+YVt27Gu56XsAVz6u5NDNZp0uD41bK5WeMCztFF2V1Wb2hmqY9bY5EG5rbBnge+IOIOhLzyMTUrVn1SzO6cxDoY5vyarRl7tttOWiLQmZUirjwuj5xMy4Bz94rcnFt7MuhtQ25Evr3UX8JRxbpwQmVBI1USMkwt0QtIxGL7Iqr1+PW2RCyArh1hkA00lPxq+w8/CzaltFIVXybPPtc3W6rLyeTBj5xY5P1ScWupw9rv1yDjrXTFyxseeoY/uXg4rYJQIytaIzHEH2O/CFS9hVc2lTWHiTbXgiRm2Gt00VqqlnK6jqTzANN6vUv5tjsZEFqtaX9zsyzceosNmBOvAqFO3JTEKhlwBawGZLlRtYVCZx5kXjnZQxLTcS1ur7uW6wxwsLHlv25tMdpcD7sgg7oT0G3gUFEWYBwrZAQyA6R9z9Kec1knqvu7d1rE/ZtM3o+hBukWmhqNLC98XZaX3MLmlHgxTWk8i5065PezG5s7Clfuqgyp9B7MQ+fxzVr12jfCIjMON2g7u16AOtSIn0huN6E/fT2oBOGrqz/Eu/BSIFv4RGxfmBxSsBfYYdxldpZotGY4fSfKQ2WXiaX3/oiVq4ERcQydsz0pWXeVrf87Nz/OdE5fJT8KOTloaqeok8yMl/HMyju8U8TcNz2z08WcxgK0cZIXthxGpJMJvLOh1fIf/V+uGecg22531n4IsXceN0npi99kT+ wkgbKq84 6s1pvyOMyba3Z2aksphliOPD78e1btXhdhwfovEmskfi30d9/qVRrnerhBdABwAoDmAAjayu2uwdz+4/7+dpOxnc+CNo+9WfxRDKQ90L1NTsgpQmjt557YI3TdTFfqSCI5KjNqTCSPLgcFgvdg6Lc7up21/FCUiCf57QEml+l5HyCIzi1/X4jg2mX8MbCxsjk/Gnb+W5pm0EJBaug3FWkliPXDidtrB3FyRv7uG7BFU932qjUfWk4xxsYJA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Move the default (no-op) implementation of flush_icache_pages() to from . Remove the flush_icache_page() wrapper from each architecture into . 
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Geert Uytterhoeven
---
 arch/alpha/include/asm/cacheflush.h     |  5 +----
 arch/arc/include/asm/cacheflush.h       |  9 ---------
 arch/arm/include/asm/cacheflush.h       |  7 -------
 arch/csky/abiv1/inc/abi/cacheflush.h    |  1 -
 arch/csky/abiv2/inc/abi/cacheflush.h    |  1 -
 arch/hexagon/include/asm/cacheflush.h   |  2 +-
 arch/loongarch/include/asm/cacheflush.h |  2 --
 arch/m68k/include/asm/cacheflush_mm.h   |  1 -
 arch/mips/include/asm/cacheflush.h      |  6 ------
 arch/nios2/include/asm/cacheflush.h     |  2 +-
 arch/nios2/mm/cacheflush.c              |  1 +
 arch/parisc/include/asm/cacheflush.h    |  2 +-
 arch/sh/include/asm/cacheflush.h        |  2 +-
 arch/sparc/include/asm/cacheflush_32.h  |  2 --
 arch/sparc/include/asm/cacheflush_64.h  |  3 ---
 arch/xtensa/include/asm/cacheflush.h    |  4 ----
 include/asm-generic/cacheflush.h        | 12 ------------
 include/linux/cacheflush.h              |  9 +++++++++
 18 files changed, 15 insertions(+), 56 deletions(-)

diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 3956460e69e2..36a7e924c3b9 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -53,10 +53,6 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
 #define flush_icache_user_page flush_icache_user_page
 #endif /* CONFIG_SMP */
 
-/* This is used only in __do_fault and do_swap_page. */
-#define flush_icache_page(vma, page) \
-	flush_icache_user_page((vma), (page), 0, 0)
-
 /*
  * Both implementations of flush_icache_user_page flush the entire
  * address space, so one call, no matter how many pages.
@@ -66,6 +62,7 @@ static inline void flush_icache_pages(struct vm_area_struct *vma,
 {
 	flush_icache_user_page(vma, page, 0, 0);
 }
+#define flush_icache_pages flush_icache_pages
 
 #include <asm-generic/cacheflush.h>
diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index 04f65f588510..bd5b1a9a0544 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -18,15 +18,6 @@
 #include <linux/mm.h>
 #include <asm/shmparam.h>
 
-/*
- * Semantically we need this because icache doesn't snoop dcache/dma.
- * However ARC Cache flush requires paddr as well as vaddr, latter not available
- * in the flush_icache_page() API. So we no-op it but do the equivalent work
- * in update_mmu_cache()
- */
-#define flush_icache_page(vma, page)
-#define flush_icache_pages(vma, page, nr)
-
 void flush_cache_all(void);
 
 void flush_icache_range(unsigned long kstart, unsigned long kend);
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 841e268d2374..f6181f69577f 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -321,13 +321,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
 
-/*
- * We don't appear to need to do anything here.  In fact, if we did, we'd
- * duplicate cache flushing elsewhere performed by flush_dcache_page().
- */
-#define flush_icache_page(vma,page)		do { } while (0)
-#define flush_icache_pages(vma, page, nr)	do { } while (0)
-
 /*
  * flush_cache_vmap() is used when creating mappings (eg, via vmap,
  * vmalloc, ioremap etc) in kernel space for pages.  On non-VIPT
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index 0d6cb65624c4..908d8b0bc4fd 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -45,7 +45,6 @@ extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, u
 #define flush_cache_vmap(start, end)		cache_wbinv_all()
 #define flush_cache_vunmap(start, end)		cache_wbinv_all()
 
-#define flush_icache_page(vma, page)		do {} while (0);
 #define flush_icache_range(start, end)		cache_wbinv_range(start, end)
 #define flush_icache_mm_range(mm, start, end)	cache_wbinv_range(start, end)
 #define flush_icache_deferred(mm)		do {} while (0);
diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index 9c728933a776..40be16907267 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -33,7 +33,6 @@ static inline void flush_dcache_page(struct page *page)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)		do { } while (0)
 
 #define flush_icache_range(start, end)		cache_wbinv_range(start, end)
diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 63ca314ede89..bdacf72d97e1 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -18,7 +18,7 @@
  *  - flush_cache_range(vma, start, end) flushes a range of pages
  *  - flush_icache_range(start, end) flush a range of instructions
  *  - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
- *  - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
+ *  - flush_icache_pages(vma, pg, nr) flushes(invalidates) nr pages for icache
  *
  * Need to doublecheck which one is really needed for ptrace stuff to work.
  */
diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
index 7907eb42bfbd..326ac6f1b27c 100644
--- a/arch/loongarch/include/asm/cacheflush.h
+++ b/arch/loongarch/include/asm/cacheflush.h
@@ -46,8 +46,6 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
 #define flush_cache_page(vma, vmaddr, pfn)		do { } while (0)
 #define flush_cache_vmap(start, end)			do { } while (0)
 #define flush_cache_vunmap(start, end)			do { } while (0)
-#define flush_icache_page(vma, page)			do { } while (0)
-#define flush_icache_pages(vma, page)			do { } while (0)
 #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
 #define flush_dcache_page(page)				do { } while (0)
 #define flush_dcache_folio(folio)			do { } while (0)
diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index d43c8bce149b..c67a8d2e6d6a 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -260,7 +260,6 @@ static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 #define flush_icache_pages(vma, page, nr)	\
 	__flush_pages_to_ram(page_address(page), nr)
-#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)
 
 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 				   unsigned long addr, int len);
diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index 2683cade42ef..043e50effc62 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -82,12 +82,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }
 
-static inline void flush_icache_pages(struct vm_area_struct *vma,
-		struct page *page, unsigned int nr)
-{
-}
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
-
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*__flush_icache_user_range)(unsigned long start,
diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 8624ca83cffe..7c48c5213fb7 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -35,7 +35,7 @@ void flush_dcache_folio(struct folio *folio);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1);
+#define flush_icache_pages flush_icache_pages
 
 #define flush_cache_vmap(start, end)		flush_dcache_range(start, end)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)
diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 471485a84b2c..2565767b98a3 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -147,6 +147,7 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 	__flush_dcache(start, end);
 	__flush_icache(start, end);
 }
+#define flush_icache_pages flush_icache_pages
 
 void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
 			unsigned long pfn)
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 0bf8b69d086b..e4fdce328dbd 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -59,7 +59,7 @@ static inline void flush_dcache_page(struct page *page)
 
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages
 
 #define flush_icache_range(s,e)		do { \
 	flush_kernel_dcache_range_asm(s,e); \
diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 9fceef6f3e00..878b6b551bd2 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -53,7 +53,7 @@ extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_user_range flush_icache_range
 void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 		unsigned int nr);
-#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+#define flush_icache_pages flush_icache_pages
 extern void flush_cache_sigtramp(unsigned long address);
 
 struct flusher_data {
diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index 8dba35d63328..21f6c918238b 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -15,8 +15,6 @@
 #define flush_cache_page(vma,addr,pfn) \
 	sparc32_cachetlb_ops->cache_page(vma, addr)
 #define flush_icache_range(start, end)		do { } while (0)
-#define flush_icache_page(vma, pg)		do { } while (0)
-#define flush_icache_pages(vma, pg, nr)		do { } while (0)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do { \
diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
index a9a719f04d06..0e879004efff 100644
--- a/arch/sparc/include/asm/cacheflush_64.h
+++ b/arch/sparc/include/asm/cacheflush_64.h
@@ -53,9 +53,6 @@ static inline void flush_dcache_page(struct page *page)
 	flush_dcache_folio(page_folio(page));
 }
 
-#define flush_icache_page(vma, pg)	do { } while(0)
-#define flush_icache_pages(vma, pg, nr)	do { } while(0)
-
 void flush_ptrace_access(struct vm_area_struct *, struct page *,
 			 unsigned long uaddr, void *kaddr,
 			 unsigned long len, int write);
diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index 35153f6725e4..785a00ce83c1 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -160,10 +160,6 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 		__invalidate_icache_range(start,(end) - (start)); \
 	} while (0)
 
-/* This is not required, see Documentation/core-api/cachetlb.rst */
-#define flush_icache_page(vma,page)		do { } while (0)
-#define flush_icache_pages(vma, page, nr)	do { } while (0)
-
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 09d51a680765..84ec53ccc450 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -77,18 +77,6 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #define flush_icache_user_range flush_icache_range
 #endif
 
-#ifndef flush_icache_page
-static inline void flush_icache_pages(struct vm_area_struct *vma,
-				      struct page *page, unsigned int nr)
-{
-}
-
-static inline void flush_icache_page(struct vm_area_struct *vma,
-				     struct page *page)
-{
-}
-#endif
-
 #ifndef flush_icache_user_page
 static inline void flush_icache_user_page(struct vm_area_struct *vma,
 					  struct page *page,
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index 82136f3fcf54..55f297b2c23f 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -17,4 +17,13 @@ static inline void flush_dcache_folio(struct folio *folio)
 #define flush_dcache_folio flush_dcache_folio
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
+#ifndef flush_icache_pages
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				      struct page *page, unsigned int nr)
+{
+}
+#endif
+
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)
+
 #endif /* _LINUX_CACHEFLUSH_H */

From patchwork Tue Feb 28 21:37:33 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13155251
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v3 30/34] mm: Use flush_icache_pages() in do_set_pmd()
Date: Tue, 28 Feb 2023 21:37:33 +0000
Message-Id: <20230228213738.272178-31-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

Push the iteration over each page down to the architectures (many
can flush the entire THP without iteration).
Signed-off-by: Matthew Wilcox (Oracle) --- mm/memory.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index bfa3100ec5a3..69e844d5f75c 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4222,8 +4222,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) if (unlikely(!pmd_none(*vmf->pmd))) goto out; - for (i = 0; i < HPAGE_PMD_NR; i++) - flush_icache_page(vma, page + i); + flush_icache_pages(vma, page, HPAGE_PMD_NR); entry = mk_huge_pmd(page, vma->vm_page_prot); if (write) From patchwork Tue Feb 28 21:37:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13155253 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A252C7EE31 for ; Tue, 28 Feb 2023 21:38:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3514F6B009D; Tue, 28 Feb 2023 16:37:57 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1EEF26B009C; Tue, 28 Feb 2023 16:37:57 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 08E326B009D; Tue, 28 Feb 2023 16:37:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id EB9426B009A for ; Tue, 28 Feb 2023 16:37:56 -0500 (EST) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id C815C1A0C72 for ; Tue, 28 Feb 2023 21:37:56 +0000 (UTC) X-FDA: 80518013352.01.2E62B95 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf14.hostedemail.com (Postfix) with ESMTP id 105E7100014 for ; Tue, 28 Feb 2023 21:37:54 +0000 (UTC) 
Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Zu3izYdW; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1677620275; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=i9ts6rBAYolQ1/FBCYccIPnlyslRrUlDUNhaTG0F47Y=; b=toy99/ZJAWbY1i+GJ0JDVa8x92/HQrBTqqzjSaUU2N7WKEv+M0L01XOUJyfCTI9FWh7FMU 2G91CR1Y0Q4hw25xde2/370PQqyO96rCm4c8qmq8SDWCmkNc6/R7RETZ4MQmRDkMfX2AqX 5WdTXH4wb8rX5kmaQWbp5pPm0qNOQBk= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=Zu3izYdW; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1677620275; a=rsa-sha256; cv=none; b=lfgM0kIIGac6ZgQvTlJwdPM7aC3s+8PsQ26i04pRyaLiDfJYgj3EeBXev6j7EOjBXJqKve GWQqP3QoJr7AwAocBVKiOHUTuxdzvSb8VabDyKFynf/bF99S9Ht4b9Yjy+TNxG8x5JOz15 ch7s4VfNlV7Wg8qLsHC4HQfhYaN/vm0= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=i9ts6rBAYolQ1/FBCYccIPnlyslRrUlDUNhaTG0F47Y=; b=Zu3izYdWiUUrWNlblTIecgxVap vSbBWWBbGVf6F/mRhWkEatlIiQnHo0p87SmbzvwgHtTMhRqJ7iDgvDse5sB/S5t41WheVx2gII5C8 hBaIEcWUmNdss0vl3hGJmAep54TOAa3KmCoNQY+LnVxNkEkyaZnV/a4iADAxF7TcCPY5KSbsPG650 ADPBKhoPOPKcqaSgLW0NANyzDDwHvz3HXPOQJHU/aVC0a5QhSum4OYeiv1z77mW8yKu9D5p6/Lnrz 
da6ZPtIqG4XkeLCjYg0m3VLc2wKopty+hJW2xnBDcerJxOZQ7S2BK0qfWcEU3t1/OMfJ4ovzu+LDK 8Nda2UwQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pX7fL-0018r0-C5; Tue, 28 Feb 2023 21:37:43 +0000 From: "Matthew Wilcox (Oracle)" To: linux-mm@kvack.org, linux-arch@vger.kernel.org Cc: Yin Fengwei , linux-kernel@vger.kernel.org, Matthew Wilcox Subject: [PATCH v3 31/34] filemap: Add filemap_map_folio_range() Date: Tue, 28 Feb 2023 21:37:34 +0000 Message-Id: <20230228213738.272178-32-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230228213738.272178-1-willy@infradead.org> References: <20230228213738.272178-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 105E7100014 X-Stat-Signature: stqj7fd8rfh4r177by457ztiwp8xuw7z X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1677620274-614247 X-HE-Meta: U2FsdGVkX19mTVdVbJng9d5yv8NQlAyP4SX4MqjE/bZS7Fm9fwJ2I3ZN+6S0qYJsQlC2hehEpKeuE4Y7j2ju7ZlGhUZc3Q29g+pQ2rfhUUG+6hGMPCqWFrwtuuwwKu4m711DyfYhelEFPBAo82zhVjeLTDpZB/guu2A7h0Z+mJJ+A/7iiqSe1lefK+BtaS5NgHZLT+qOmXbKre/hubAXGIO5jvca7Bx7iHTTaheidqJT5yWqhLMKCY4Oo098vmJH7PwlbwGvmuVv/4E3K26voq9wBPPkpw3edS7LN6qgqe0BVkCYzWhZ9lK1kXi7dlC7aLHl8VHc49mQr5g5XEsKaL7NaLVQUe0MtwLcHxB6yPtF6YmjTN4KoF0e1EYPqU+FuRDNuhYiuJBzJAuzebLPq0muJk22CAiujx9G5DalIXn6W/P10Y3AewxRgJFoI0uy8fL0Di/Q+tC/sm0ycbK4Sztm0wRmkU1yurM/i86X9ysYcdUN4mVvw+chiCINrNwb1kv+Wl7wVALJTpJyW+W8jXF62m841kX4OYM/08O5TC60YLPbsDtxLwGqXhGkxIKCMawJn8hyRLsFI6ONVWBrkezKm6OCLU1PEPwx+w+r3gmY/riTadxSQEVc8Hnwet0EY7kgRpK9W0JdmGrvG0imbSpb01M6WXMeNj+XJky/m/O32TmH8XegQfHWG9V/nAbB9lJDGt/eNxv1SLxQ9/HarpTHF5dybEsBKnAi4/z7j4qFLICtFJhWoa1VTxNTJNzLldFiwoh0OlA1ecoFLgpdCQU5dVyPFJgoLAPK79X5+M8rc9NQUYz2swW4ZAeWiYDkHdrtVzxaE+/i9lhH0KhpouuPzJ+JfywdTc2diyh24/iYFdHwyLt/VS6WIH4scEfwellD+/w+++NFnPEGPlA9ORwY5ZrjbECODyA5DUvVvBcuQvcQLiyen74gzn6F/nIGejF5x89Y6ASW1ZxJ88A yw3OIrJp 
UzmfQyY33V7xbnxhdabrVjkdnk39r6+YHX+VZUw168CSCQhMrm2q4HDOn4ANhI6o5KfANF1zGk3/Zff62MxZ13FD7AAIzK+2uL6vyQfG/ChCmoTd/K4TG21RwS23L0OIYm7ezs0Crva2ec/vOW9IrkORZ6YC59/PuzCvzDMVXAWiRFUBHbmX2IAX/rtgrhWzUR8c8GrwFTlkkm0LQxJO/qT1hAnNmYDW3NY6j5eXuoAYuRPhaZv0GKYLh4eh6dBBapRPaLuIYBq7giv6KtMH778uMROgEQN1xdoJO X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Yin Fengwei filemap_map_folio_range() maps partial/full folio. Comparing to original filemap_map_pages(), it updates refcount once per folio instead of per page and gets minor performance improvement for large folio. With a will-it-scale.page_fault3 like app (change file write fault testing to read fault testing. Trying to upstream it to will-it-scale at [1]), got 2% performance gain on a 48C/96T Cascade Lake test box with 96 processes running against xfs. [1]: https://github.com/antonblanchard/will-it-scale/pull/37 Signed-off-by: Yin Fengwei Signed-off-by: Matthew Wilcox (Oracle) --- mm/filemap.c | 98 +++++++++++++++++++++++++++++----------------------- 1 file changed, 54 insertions(+), 44 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 2723104cc06a..db86e459dde6 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2202,16 +2202,6 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start, } EXPORT_SYMBOL(filemap_get_folios); -static inline -bool folio_more_pages(struct folio *folio, pgoff_t index, pgoff_t max) -{ - if (!folio_test_large(folio) || folio_test_hugetlb(folio)) - return false; - if (index >= max) - return false; - return index < folio->index + folio_nr_pages(folio) - 1; -} - /** * filemap_get_folios_contig - Get a batch of contiguous folios * @mapping: The address_space to search @@ -3483,6 +3473,53 @@ static inline struct folio *next_map_page(struct address_space *mapping, mapping, xas, end_pgoff); } +/* + * Map page range [start_page, start_page + nr_pages) of folio. 
+ * start_page is gotten from start by folio_page(folio, start) + */ +static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf, + struct folio *folio, unsigned long start, + unsigned long addr, unsigned int nr_pages) +{ + vm_fault_t ret = 0; + struct vm_area_struct *vma = vmf->vma; + struct file *file = vma->vm_file; + struct page *page = folio_page(folio, start); + unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss); + unsigned int ref_count = 0, count = 0; + + do { + if (PageHWPoison(page)) + continue; + + if (mmap_miss > 0) + mmap_miss--; + + /* + * NOTE: If there're PTE markers, we'll leave them to be + * handled in the specific fault path, and it'll prohibit the + * fault-around logic. + */ + if (!pte_none(*vmf->pte)) + continue; + + if (vmf->address == addr) + ret = VM_FAULT_NOPAGE; + + ref_count++; + do_set_pte(vmf, page, addr); + update_mmu_cache(vma, addr, vmf->pte); + } while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages); + + /* Restore the vmf->pte */ + vmf->pte -= nr_pages; + + folio_ref_add(folio, ref_count); + WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss); + + return ret; +} + vm_fault_t filemap_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pgoff_t end_pgoff) { @@ -3493,9 +3530,9 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf, unsigned long addr; XA_STATE(xas, &mapping->i_pages, start_pgoff); struct folio *folio; - struct page *page; unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss); vm_fault_t ret = 0; + int nr_pages = 0; rcu_read_lock(); folio = first_map_page(mapping, &xas, end_pgoff); @@ -3510,45 +3547,18 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf, addr = vma->vm_start + ((start_pgoff - vma->vm_pgoff) << PAGE_SHIFT); vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl); do { -again: - page = folio_file_page(folio, xas.xa_index); - if (PageHWPoison(page)) - goto unlock; - - if (mmap_miss > 0) - mmap_miss--; + unsigned long end; addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT; 
 		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
+		end = folio->index + folio_nr_pages(folio) - 1;
+		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
 
-		/*
-		 * NOTE: If there're PTE markers, we'll leave them to be
-		 * handled in the specific fault path, and it'll prohibit the
-		 * fault-around logic.
-		 */
-		if (!pte_none(*vmf->pte))
-			goto unlock;
+		ret |= filemap_map_folio_range(vmf, folio,
+				xas.xa_index - folio->index, addr, nr_pages);
+		xas.xa_index += nr_pages;
 
-		/* We're about to handle the fault */
-		if (vmf->address == addr)
-			ret = VM_FAULT_NOPAGE;
-
-		do_set_pte(vmf, page, addr);
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, addr, vmf->pte);
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			folio_ref_inc(folio);
-			goto again;
-		}
-		folio_unlock(folio);
-		continue;
-unlock:
-		if (folio_more_pages(folio, xas.xa_index, end_pgoff)) {
-			xas.xa_index++;
-			goto again;
-		}
 		folio_unlock(folio);
 		folio_put(folio);
 	} while ((folio = next_map_page(mapping, &xas, end_pgoff)) != NULL);

From patchwork Tue Feb 28 21:37:35 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 32/34] rmap: add folio_add_file_rmap_range()
Date: Tue, 28 Feb 2023 21:37:35 +0000
Message-Id: <20230228213738.272178-33-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>

From: Yin Fengwei

folio_add_file_rmap_range() allows adding a pte mapping to a specific
range of a file folio. Compared to page_add_file_rmap(), it batches the
__lruvec_stat updates for large folios.
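The batching described above can be illustrated outside the kernel. The sketch below is a user-space model (all names are invented; this is not kernel code): the per-page helper touches the shared statistic once for every newly mapped page, while the range helper counts first-time mappings across the whole range and folds them into a single update, which is what folio_add_file_rmap_range() does for __lruvec_stat.

```c
#include <assert.h>

/* Invented user-space model: each page has a mapcount, and the shared
 * "lruvec" statistic is updated once per call rather than once per page. */
struct toy_folio {
	int mapcount[64];	/* per-page _mapcount analogue, starts at 0 */
	long stat_updates;	/* how many times the shared stat was written */
	long nr_mapped;		/* NR_FILE_MAPPED analogue */
};

/* Per-page version: one stat update for every newly mapped page. */
static void toy_page_add_file_rmap(struct toy_folio *f, int page)
{
	if (f->mapcount[page]++ == 0) {
		f->nr_mapped += 1;
		f->stat_updates++;
	}
}

/* Range version: count first-time mappings, then update the stat once. */
static void toy_folio_add_file_rmap_range(struct toy_folio *f, int first,
					  int nr)
{
	int i, new_maps = 0;

	for (i = first; i < first + nr; i++)
		if (f->mapcount[i]++ == 0)
			new_maps++;
	if (new_maps) {
		f->nr_mapped += new_maps;
		f->stat_updates++;	/* one batched update for the range */
	}
}
```

Both versions leave the folio in the same state; only the number of writes to the shared counter differs, which is where the contention savings come from.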
Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 60 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 48 insertions(+), 14 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b87d01660412..a3825ce81102 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -198,6 +198,8 @@ void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
+		struct vm_area_struct *, bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
diff --git a/mm/rmap.c b/mm/rmap.c
index bacdb795d5ee..fffdb85a3b3d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1303,31 +1303,39 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }
 
 /**
- * page_add_file_rmap - add pte mapping to a file page
- * @page:	the page to add the mapping to
+ * folio_add_file_rmap_range - add pte mapping to page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @nr_pages:	The number of pages which will be mapped
 * @vma:	the vm area in which the mapping is added
 * @compound:	charge the page as compound or small page
 *
+ * The page range of folio is defined by [first_page, first_page + nr_pages)
+ *
 * The caller needs to hold the pte lock.
 */
-void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
-		bool compound)
+void folio_add_file_rmap_range(struct folio *folio, struct page *page,
+		unsigned int nr_pages, struct vm_area_struct *vma,
+		bool compound)
 {
-	struct folio *folio = page_folio(page);
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int nr = 0, nr_pmdmapped = 0;
-	bool first;
+	unsigned int nr_pmdmapped = 0, first;
+	int nr = 0;
 
-	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
+	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
-		first = atomic_inc_and_test(&page->_mapcount);
-		nr = first;
-		if (first && folio_test_large(folio)) {
-			nr = atomic_inc_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
+		do {
+			first = atomic_inc_and_test(&page->_mapcount);
+			if (first && folio_test_large(folio)) {
+				first = atomic_inc_return_relaxed(mapped);
+				first = (first < COMPOUND_MAPPED);
+			}
+
+			if (first)
+				nr++;
+		} while (page++, --nr_pages > 0);
 	} else if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */
 
@@ -1356,6 +1364,30 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }
 
+/**
+ * page_add_file_rmap - add pte mapping to a file page
+ * @page:	the page to add the mapping to
+ * @vma:	the vm area in which the mapping is added
+ * @compound:	charge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
+		bool compound)
+{
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages;
+
+	VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
+
+	if (likely(!compound))
+		nr_pages = 1;
+	else
+		nr_pages = folio_nr_pages(folio);
+
+	folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page:	page to remove mapping from

From patchwork Tue Feb 28 21:37:36 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 33/34] mm: Convert do_set_pte() to set_pte_range()
Date: Tue, 28 Feb 2023 21:37:36 +0000
Message-Id: <20230228213738.272178-34-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
From: Yin Fengwei

set_pte_range() allows setting up the page table entries for a specific
range of pages. It takes advantage of the batched rmap update for large
folios, and it now takes care of calling update_mmu_cache_range() itself.

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 27 +++++++++++++++------------
 4 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 7de7a7272a5e..922886fefb7f 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -663,7 +663,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with page table locked and must
 not block. If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".
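set_pte_range() fills nr consecutive page table entries from one folio. As a rough user-space model of that contract (the names and the pte bit layout here are invented for illustration; this is a sketch, not the kernel's set_ptes() implementation), each successive entry points one page further into the folio while keeping the same flag bits:

```c
#include <assert.h>
#include <stdint.h>

/* Invented toy pte layout: pfn in the high bits, flags in the low bits. */
#define TOY_PTE_PRESENT	0x1ULL
#define TOY_PFN_SHIFT	12

static uint64_t toy_mk_pte(uint64_t pfn)
{
	return (pfn << TOY_PFN_SHIFT) | TOY_PTE_PRESENT;
}

/* Write nr consecutive entries; each one maps the next physical page. */
static void toy_set_ptes(uint64_t *ptep, uint64_t entry, unsigned int nr)
{
	for (;;) {
		*ptep = entry;
		if (--nr == 0)
			break;
		ptep++;
		entry += 1ULL << TOY_PFN_SHIFT;	/* next pfn, same flags */
	}
}
```

The point of the batched interface is that the architecture sees the whole run at once, so cache and TLB maintenance can be done once per range rather than once per entry.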
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1f79667824eb..568ebe7058d4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1168,7 +1168,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
diff --git a/mm/filemap.c b/mm/filemap.c
index db86e459dde6..07ebd90967a3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3507,8 +3507,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;
 
 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
 
 	/* Restore the vmf->pte */
diff --git a/mm/memory.c b/mm/memory.c
index 69e844d5f75c..efd17ff09315 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4255,7 +4255,8 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = pte_marker_uffd_wp(vmf->orig_pte);
@@ -4263,7 +4264,7 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 	bool prefault = vmf->address != addr;
 	pte_t entry;
 
-	flush_icache_page(vma, page);
+	flush_icache_pages(vma, page, nr);
 	entry = mk_pte(page, vma->vm_page_prot);
 
 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4277,14 +4278,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vma, addr, vmf->pte, nr);
 }
 
 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4357,11 +4362,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);
 
+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);

From patchwork Tue Feb 28 21:37:37 2023
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: Yin Fengwei, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v3 34/34] filemap: Batch PTE mappings
Date: Tue, 28 Feb 2023 21:37:37 +0000
Message-Id: <20230228213738.272178-35-willy@infradead.org>
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
From: Yin Fengwei

Call set_pte_range() once per contiguous range of the folio instead of
once per page. This batches the updates to mm counters and the rmap.

With a will-it-scale.page_fault3-like app (the file write fault test
changed to a read fault test; being upstreamed to will-it-scale at [1]),
this got a 15% performance gain on a 48C/96T Cascade Lake test box with
96 processes running against xfs.
Perf data collected before/after the change:

    18.73%--page_add_file_rmap
            |
             --11.60%--__mod_lruvec_page_state
                       |
                       |--7.40%--__mod_memcg_lruvec_state
                       |          |
                       |           --5.58%--cgroup_rstat_updated
                       |
                        --2.53%--__mod_lruvec_state
                                  |
                                   --1.48%--__mod_node_page_state

     9.93%--page_add_file_rmap_range
            |
             --2.67%--__mod_lruvec_page_state
                       |
                       |--1.95%--__mod_memcg_lruvec_state
                       |          |
                       |           --1.57%--cgroup_rstat_updated
                       |
                        --0.61%--__mod_lruvec_state
                                  |
                                   --0.54%--__mod_node_page_state

The running time of __mod_lruvec_page_state() is reduced by about 9%.

[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 07ebd90967a3..40be33b5ee46 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3486,11 +3486,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct file *file = vma->vm_file;
 	struct page *page = folio_page(folio, start);
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
-	unsigned int ref_count = 0, count = 0;
+	unsigned int count = 0;
+	pte_t *old_ptep = vmf->pte;
 
 	do {
-		if (PageHWPoison(page))
-			continue;
+		if (PageHWPoison(page + count))
+			goto skip;
 
 		if (mmap_miss > 0)
 			mmap_miss--;
@@ -3500,20 +3501,33 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		 * handled in the specific fault path, and it'll prohibit the
 		 * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
+		if (!pte_none(vmf->pte[count]))
+			goto skip;
 
 		if (vmf->address == addr)
 			ret = VM_FAULT_NOPAGE;
 
-		ref_count++;
-		set_pte_range(vmf, folio, page, 1, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		count++;
+		continue;
+skip:
+		if (count) {
+			set_pte_range(vmf, folio, page, count, addr);
+			folio_ref_add(folio, count);
+		}
 
-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+		count++;
+		page += count;
+		vmf->pte += count;
+		addr += count * PAGE_SIZE;
+		count = 0;
+	} while (--nr_pages > 0);
+
+	if (count) {
+		set_pte_range(vmf, folio, page, count, addr);
+		folio_ref_add(folio, count);
+	}
 
-	folio_ref_add(folio, ref_count);
+	vmf->pte = old_ptep;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 
 	return ret;
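The control flow of the reworked loop — accumulate a run of mappable pages, flush the run with one set_pte_range() call when a page must be skipped, and flush the trailing run after the loop — can be modeled in plain user-space C. All names below are invented; this is a sketch of the batching idea only, not kernel code:

```c
#include <assert.h>

/* Counters so the behavior can be checked. */
static int runs, pages_mapped;

/* Stand-in for set_pte_range(): called once per contiguous run. */
static void toy_set_range(int start, int count)
{
	runs++;
	pages_mapped += count;
}

/* Walk nr_pages pages, batching contiguous mappable pages into one
 * toy_set_range() call per run; skip_page[i] marks pages that must be
 * skipped (poisoned page or already-populated pte in the original). */
static void toy_map_folio_range(const int *skip_page, int nr_pages)
{
	int start = 0, count = 0, i = 0;

	do {
		if (skip_page[i])
			goto skip;
		count++;
		continue;	/* jumps to the while condition */
skip:
		if (count)
			toy_set_range(start, count);
		start = i + 1;	/* next run begins after the skipped page */
		count = 0;
	} while (i++, --nr_pages > 0);

	if (count)		/* flush the trailing run */
		toy_set_range(start, count);
}
```

For a folio with two unmappable pages splitting it into three runs, the callback fires three times instead of once per page, which is exactly the saving the patch measures in the perf data above.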