From patchwork Wed Jun 21 16:45:48 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13287673
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, linux-afs@lists.infradead.org,
	linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: [PATCH 04/13] i915: Convert shmem_sg_free_table() to use a folio_batch
Date: Wed, 21 Jun 2023 17:45:48 +0100
Message-Id: <20230621164557.3510324-5-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>
References: <20230621164557.3510324-1-willy@infradead.org>
MIME-Version: 1.0

Remove a few hidden compound_head() calls by converting the returned
page to a folio once and using the folio APIs.  We also only increment
the refcount on the folio once instead of once for each page.

Ideally, we would have a for_each_sgt_folio macro, but until then this
will do.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 55 +++++++++++++----------
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 33d5d5178103..8f1633c3fb93 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -19,13 +19,13 @@
 #include "i915_trace.h"
 
 /*
- * Move pages to appropriate lru and release the pagevec, decrementing the
- * ref count of those pages.
+ * Move folios to appropriate lru and release the batch, decrementing the
+ * ref count of those folios.
  */
-static void check_release_pagevec(struct pagevec *pvec)
+static void check_release_folio_batch(struct folio_batch *fbatch)
 {
-	check_move_unevictable_pages(pvec);
-	__pagevec_release(pvec);
+	check_move_unevictable_folios(fbatch);
+	__folio_batch_release(fbatch);
 	cond_resched();
 }
 
@@ -33,24 +33,29 @@ void shmem_sg_free_table(struct sg_table *st, struct address_space *mapping,
 			 bool dirty, bool backup)
 {
 	struct sgt_iter sgt_iter;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
+	struct folio *last = NULL;
 	struct page *page;
 
 	mapping_clear_unevictable(mapping);
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	for_each_sgt_page(page, sgt_iter, st) {
-		if (dirty)
-			set_page_dirty(page);
+		struct folio *folio = page_folio(page);
+		if (folio == last)
+			continue;
+		last = folio;
+		if (dirty)
+			folio_mark_dirty(folio);
 		if (backup)
-			mark_page_accessed(page);
+			folio_mark_accessed(folio);
 
-		if (!pagevec_add(&pvec, page))
-			check_release_pagevec(&pvec);
+		if (!folio_batch_add(&fbatch, folio))
+			check_release_folio_batch(&fbatch);
 	}
-	if (pagevec_count(&pvec))
-		check_release_pagevec(&pvec);
+	if (fbatch.nr)
+		check_release_folio_batch(&fbatch);
 
 	sg_free_table(st);
 }
 
@@ -63,8 +68,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 	unsigned int page_count; /* restricted by sg_alloc_table */
 	unsigned long i;
 	struct scatterlist *sg;
-	struct page *page;
-	unsigned long last_pfn = 0;	/* suppress gcc warning */
+	unsigned long next_pfn = 0;	/* suppress gcc warning */
 	gfp_t noreclaim;
 	int ret;
 
@@ -95,6 +99,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 	sg = st->sgl;
 	st->nents = 0;
 	for (i = 0; i < page_count; i++) {
+		struct folio *folio;
 		const unsigned int shrink[] = {
 			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND,
 			0,
@@ -103,12 +108,12 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 
 		do {
 			cond_resched();
-			page = shmem_read_mapping_page_gfp(mapping, i, gfp);
-			if (!IS_ERR(page))
+			folio = shmem_read_folio_gfp(mapping, i, gfp);
+			if (!IS_ERR(folio))
 				break;
 
 			if (!*s) {
-				ret = PTR_ERR(page);
+				ret = PTR_ERR(folio);
 				goto err_sg;
 			}
 
@@ -147,19 +152,21 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 		if (!i ||
 		    sg->length >= max_segment ||
-		    page_to_pfn(page) != last_pfn + 1) {
+		    folio_pfn(folio) != next_pfn) {
 			if (i)
 				sg = sg_next(sg);
 
 			st->nents++;
-			sg_set_page(sg, page, PAGE_SIZE, 0);
+			sg_set_folio(sg, folio, folio_size(folio), 0);
 		} else {
-			sg->length += PAGE_SIZE;
+			/* XXX: could overflow? */
+			sg->length += folio_size(folio);
 		}
-		last_pfn = page_to_pfn(page);
+		next_pfn = folio_pfn(folio) + folio_nr_pages(folio);
+		i += folio_nr_pages(folio) - 1;
 
 		/* Check that the i965g/gm workaround works. */
-		GEM_BUG_ON(gfp & __GFP_DMA32 && last_pfn >= 0x00100000UL);
+		GEM_BUG_ON(gfp & __GFP_DMA32 && next_pfn >= 0x00100000UL);
 	}
 	if (sg) /* loop terminated early; short sg table */
 		sg_mark_end(sg);
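
[Illustration only, not part of this patch.]  The commit message wishes
for a for_each_sgt_folio macro.  A rough sketch of what one might look
like, modelled on the existing for_each_sgt_page in
drivers/gpu/drm/i915/i915_scatterlist.h: it advances the iterator by
folio_size() instead of PAGE_SIZE.  The name and exact form are
hypothetical, and it assumes every sg entry is populated folio-aligned
via sg_set_folio() (as this patch does), so each step lands on a folio
boundary.

/*
 * Hypothetical sketch: walk an sg_table one folio at a time.  Assumes
 * each sg entry was filled with sg_set_folio(), so (__iter).curr is
 * always folio-aligned and stepping by folio_size() reaches the next
 * folio (or the end of the sg entry).
 */
#define for_each_sgt_folio(__folio, __iter, __sgt)			\
	for ((__iter) = __sgt_iter((__sgt)->sgl, false);		\
	     ((__folio) = (__iter).pfn == 0 ? NULL :			\
	      page_folio(pfn_to_page((__iter).pfn +			\
				     ((__iter).curr >> PAGE_SHIFT))));	\
	     (((__iter).curr += folio_size(__folio)) >= (__iter).max) ?	\
	     ((__iter) = __sgt_iter(__sg_next((__iter).sgp), false), 0) : 0)

With something along these lines, the page_folio()/last-folio dedup in
shmem_sg_free_table() above could collapse into a direct folio walk:

	folio_batch_init(&fbatch);
	for_each_sgt_folio(folio, sgt_iter, st) {
		if (dirty)
			folio_mark_dirty(folio);
		if (backup)
			folio_mark_accessed(folio);
		if (!folio_batch_add(&fbatch, folio))
			check_release_folio_batch(&fbatch);
	}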