From patchwork Tue Feb 28 21:37:14 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 13155232
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org,
	linux-ia64@vger.kernel.org
Subject: [PATCH v3 11/34] ia64: Implement the new page table range API
Date: Tue, 28 Feb 2023 21:37:14 +0000
Message-Id: <20230228213738.272178-12-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230228213738.272178-1-willy@infradead.org>
References: <20230228213738.272178-1-willy@infradead.org>
MIME-Version: 1.0

Add set_ptes(), update_mmu_cache_range() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
per-folio, which makes arch_dma_mark_clean() and mark_clean() a little
more exciting.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/hp/common/sba_iommu.c    | 26 +++++++++++++++-----------
 arch/ia64/include/asm/cacheflush.h | 14 ++++++++++----
 arch/ia64/include/asm/pgtable.h    | 14 +++++++++++++-
 arch/ia64/mm/init.c                | 31 ++++++++++++++++++++-----------
 4 files changed, 58 insertions(+), 27 deletions(-)
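Two of the hunks below (mark_clean() in sba_iommu.c and arch_dma_mark_clean()
in mm/init.c) replace a page-at-a-time loop with the same folio walk: skip any
partial folio at the start of the buffer, then mark only the folios that were
written in their entirety. A standalone sketch of that arithmetic, using plain
integers in place of folios; the sizes are invented purely for illustration:

	/* Model of the folio walk below; all numbers are illustrative. */
	#include <stdio.h>

	int main(void)
	{
		long folio_sz = 65536;	/* stand-in for folio_size(folio) */
		long offset   = 16384;	/* stand-in for offset_in_folio() */
		long left     = 40960;	/* bytes written by the DMA */

		if (offset)			/* skip the partial first folio */
			left -= folio_sz - offset;	/* left is now -8192 */

		while (left >= folio_sz) {	/* no complete folio: never runs */
			printf("mark a folio clean\n");
			left -= folio_sz;
		}
		return 0;
	}

Only complete folios can be marked clean; a partially written folio is
deliberately left alone, since the rest of it cannot be assumed coherent.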
diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 8ad6946521d8..48d475f10003 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba)
 #endif
 
 #ifdef ENABLE_MARK_CLEAN
-/**
+/*
  * Since DMA is i-cache coherent, any (complete) pages that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
-static void
-mark_clean (void *addr, size_t size)
+static void mark_clean(void *addr, size_t size)
 {
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page((void *)pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
+	struct folio *folio = virt_to_folio(addr);
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, addr);
+
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= (ssize_t)folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
 	}
 }
 #endif
diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h
index 708c0fa5d975..eac493fa9e0d 100644
--- a/arch/ia64/include/asm/cacheflush.h
+++ b/arch/ia64/include/asm/cacheflush.h
@@ -13,10 +13,16 @@
 #include <asm/page.h>
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
-#define flush_dcache_page(page)			\
-do {						\
-	clear_bit(PG_arch_1, &(page)->flags);	\
-} while (0)
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	clear_bit(PG_arch_1, &folio->flags);
+}
+#define flush_dcache_folio flush_dcache_folio
+
+static inline void flush_dcache_page(struct page *page)
+{
+	flush_dcache_folio(page_folio(page));
+}
 
 extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_range flush_icache_range
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index 21c97e31a28a..0c2be4ea664b 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -303,7 +303,18 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 	*ptep = pteval;
 }
 
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 /*
  * Make page protection values cacheable, uncacheable, or write-
@@ -396,6 +407,7 @@ pte_same (pte_t a, pte_t b)
 	return pte_val(a) == pte_val(b);
 }
 
+#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0)
 #define update_mmu_cache(vma, address, ptep) do { } while (0)
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
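The set_ptes() loop in the pgtable.h hunk above works because ia64 PTEs hold
the PFN in the bits above PAGE_SHIFT, so adding PAGE_SIZE to pte_val()
advances the entry by exactly one page frame. A caller-side sketch of the
intended use; map_folio_range() is a hypothetical helper written for
illustration, not something this series adds:

	/* Hypothetical caller, for illustration only. */
	static void map_folio_range(struct vm_area_struct *vma,
			unsigned long addr, pte_t *ptep,
			struct folio *folio, pgprot_t prot)
	{
		unsigned int nr = folio_nr_pages(folio);
		pte_t pte = mk_pte(folio_page(folio, 0), prot);

		/* One call sets nr consecutive PTEs in one go... */
		set_ptes(vma->vm_mm, addr, ptep, pte, nr);
		/* ...and one call covers the cache hook (a no-op on ia64). */
		update_mmu_cache_range(vma, addr, ptep, nr);
	}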
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 7f5353e28516..12aef25944aa 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -50,30 +50,39 @@ void
 __ia64_sync_icache_dcache (pte_t pte)
 {
 	unsigned long addr;
-	struct page *page;
+	struct folio *folio;
 
-	page = pte_page(pte);
-	addr = (unsigned long) page_address(page);
+	folio = page_folio(pte_page(pte));
+	addr = (unsigned long)folio_address(folio);
 
-	if (test_bit(PG_arch_1, &page->flags))
+	if (test_bit(PG_arch_1, &folio->flags))
 		return;				/* i-cache is already coherent with d-cache */
 
-	flush_icache_range(addr, addr + page_size(page));
-	set_bit(PG_arch_1, &page->flags);	/* mark page as clean */
+	flush_icache_range(addr, addr + folio_size(folio));
+	set_bit(PG_arch_1, &folio->flags);	/* mark page as clean */
 }
 
 /*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * Since DMA is i-cache coherent, any (complete) folios that were written via
  * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
  * flush them when they get mapped into an executable vm-area.
  */
 void arch_dma_mark_clean(phys_addr_t paddr, size_t size)
 {
-	unsigned long pfn = PHYS_PFN(paddr);
+	struct folio *folio = page_folio(phys_to_page(paddr));
+	ssize_t left = size;
+	size_t offset = offset_in_folio(folio, paddr);
 
-	do {
-		set_bit(PG_arch_1, &pfn_to_page(pfn)->flags);
-	} while (++pfn <= PHYS_PFN(paddr + size - 1));
+	if (offset) {
+		left -= folio_size(folio) - offset;
+		folio = folio_next(folio);
+	}
+
+	while (left >= (ssize_t)folio_size(folio)) {
+		set_bit(PG_arch_1, &folio->flags);
+		left -= folio_size(folio);
+		folio = folio_next(folio);
+	}
 }
 
 inline void
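A subtlety in both folio walks: left is ssize_t but folio_size() returns
size_t, so the (ssize_t) cast in the loop condition is load-bearing. Without
it, the usual arithmetic conversions would turn a negative left (the buffer
ends inside its first folio) into a huge unsigned value and the loop would
run past the end of the buffer. A tiny standalone demonstration, purely
illustrative:

	#include <stdio.h>
	#include <sys/types.h>

	int main(void)
	{
		ssize_t left = -8192;	/* buffer ended inside the first folio */
		size_t folio_sz = 16384;

		/* left is converted to size_t here, so this prints 1 (true). */
		printf("unsigned compare: %d\n", left >= folio_sz);
		/* With the cast, the comparison is correctly false (0). */
		printf("signed compare:   %d\n", left >= (ssize_t)folio_sz);
		return 0;
	}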