From patchwork Sun Jul 21 10:46:10 2019
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 11051033
From: Matthew Wilcox <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org
Cc: Matthew Wilcox <willy@infradead.org>, Michal Hocko <mhocko@suse.com>
Subject: [PATCH v2 1/3] mm: Introduce page_size()
Date: Sun, 21 Jul 2019 03:46:10 -0700
Message-Id: <20190721104612.19120-2-willy@infradead.org>
In-Reply-To: <20190721104612.19120-1-willy@infradead.org>
References: <20190721104612.19120-1-willy@infradead.org>

From: Matthew Wilcox (Oracle) <willy@infradead.org>

It's unnecessarily hard to find out the size of a potentially huge page.
Replace 'PAGE_SIZE << compound_order(page)' with page_size(page).
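For illustration, a minimal hypothetical caller might look like the sketch
below; zero_whole_page() is not from this patch, and it assumes a
non-highmem page so that page_address() returns a usable mapping:

    #include <linux/mm.h>
    #include <linux/string.h>

    /* Zero every byte of a possibly-compound page. */
    static void zero_whole_page(struct page *page)
    {
            /* page_size(page) == PAGE_SIZE << compound_order(page) */
            memset(page_address(page), 0, page_size(page));
    }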
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
---
 arch/arm/mm/flush.c                           |  3 +--
 arch/arm64/mm/flush.c                         |  3 +--
 arch/ia64/mm/init.c                           |  2 +-
 drivers/crypto/chelsio/chtls/chtls_io.c       |  5 ++---
 drivers/staging/android/ion/ion_system_heap.c |  4 ++--
 drivers/target/tcm_fc/tfc_io.c                |  3 +--
 fs/io_uring.c                                 |  2 +-
 include/linux/hugetlb.h                       |  2 +-
 include/linux/mm.h                            |  6 ++++++
 lib/iov_iter.c                                |  2 +-
 mm/kasan/common.c                             |  8 +++-----
 mm/nommu.c                                    |  2 +-
 mm/page_vma_mapped.c                          |  3 +--
 mm/rmap.c                                     |  6 ++----
 mm/slob.c                                     |  2 +-
 mm/slub.c                                     | 18 +++++++++---------
 net/xdp/xsk.c                                 |  2 +-
 17 files changed, 35 insertions(+), 38 deletions(-)

diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 6ecbda87ee46..4c7ebe094a83 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -204,8 +204,7 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 	 * coherent with the kernels mapping.
 	 */
 	if (!PageHighMem(page)) {
-		size_t page_size = PAGE_SIZE << compound_order(page);
-		__cpuc_flush_dcache_area(page_address(page), page_size);
+		__cpuc_flush_dcache_area(page_address(page), page_size(page));
 	} else {
 		unsigned long i;
 		if (cache_is_vipt_nonaliasing()) {
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index dc19300309d2..ac485163a4a7 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -56,8 +56,7 @@ void __sync_icache_dcache(pte_t pte)
 	struct page *page = pte_page(pte);
 
 	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
-		sync_icache_aliases(page_address(page),
-				    PAGE_SIZE << compound_order(page));
+		sync_icache_aliases(page_address(page), page_size(page));
 }
 EXPORT_SYMBOL_GPL(__sync_icache_dcache);
 
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index aae75fd7b810..e97e24816bd4 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -63,7 +63,7 @@ __ia64_sync_icache_dcache (pte_t pte)
 	if (test_bit(PG_arch_1, &page->flags))
 		return;			/* i-cache is already coherent with d-cache */
 
-	flush_icache_range(addr, addr + (PAGE_SIZE << compound_order(page)));
+	flush_icache_range(addr, addr + page_size(page));
 	set_bit(PG_arch_1, &page->flags);	/* mark page as clean */
 }
 
diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
index 551bca6fef24..925be5942895 100644
--- a/drivers/crypto/chelsio/chtls/chtls_io.c
+++ b/drivers/crypto/chelsio/chtls/chtls_io.c
@@ -1078,7 +1078,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 			bool merge;
 
 			if (page)
-				pg_size <<= compound_order(page);
+				pg_size = page_size(page);
 			if (off < pg_size &&
 			    skb_can_coalesce(skb, i, page, off)) {
 				merge = 1;
@@ -1105,8 +1105,7 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 							   __GFP_NORETRY,
 							   order);
 					if (page)
-						pg_size <<=
-							compound_order(page);
+						pg_size <<= order;
 				}
 				if (!page) {
 					page = alloc_page(gfp);
diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index aa8d8425be25..b83a1d16bd89 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -120,7 +120,7 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 		if (!page)
 			goto free_pages;
 		list_add_tail(&page->lru, &pages);
-		size_remaining -= PAGE_SIZE << compound_order(page);
+		size_remaining -= page_size(page);
 		max_order = compound_order(page);
 		i++;
 	}
@@ -133,7 +133,7 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 	sg = table->sgl;
 	list_for_each_entry_safe(page, tmp_page, &pages, lru) {
-		sg_set_page(sg, page, PAGE_SIZE << compound_order(page), 0);
+		sg_set_page(sg, page, page_size(page), 0);
 		sg = sg_next(sg);
 		list_del(&page->lru);
 	}
diff --git a/drivers/target/tcm_fc/tfc_io.c b/drivers/target/tcm_fc/tfc_io.c
index a254792d882c..1354a157e9af 100644
--- a/drivers/target/tcm_fc/tfc_io.c
+++ b/drivers/target/tcm_fc/tfc_io.c
@@ -136,8 +136,7 @@ int ft_queue_data_in(struct se_cmd *se_cmd)
 					   page, off_in_page, tlen);
 			fr_len(fp) += tlen;
 			fp_skb(fp)->data_len += tlen;
-			fp_skb(fp)->truesize +=
-					PAGE_SIZE << compound_order(page);
+			fp_skb(fp)->truesize += page_size(page);
 		} else {
 			BUG_ON(!page);
 			from = kmap_atomic(page + (mem_off >> PAGE_SHIFT));
diff --git a/fs/io_uring.c b/fs/io_uring.c
index e2a66e12fbc6..c55d8b411d2a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3084,7 +3084,7 @@ static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
 	}
 
 	page = virt_to_head_page(ptr);
-	if (sz > (PAGE_SIZE << compound_order(page)))
+	if (sz > page_size(page))
 		return -EINVAL;
 
 	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index edfca4278319..53fc34f930d0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -454,7 +454,7 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 static inline struct hstate *page_hstate(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return size_to_hstate(PAGE_SIZE << compound_order(page));
+	return size_to_hstate(page_size(page));
 }
 
 static inline unsigned hstate_index_to_shift(unsigned index)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..899dfcf7c23d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -805,6 +805,12 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 	page[1].compound_order = order;
 }
 
+/* Returns the number of bytes in this potentially compound page. */
+static inline unsigned long page_size(struct page *page)
+{
+	return PAGE_SIZE << compound_order(page);
+}
+
 void free_compound_page(struct page *page);
 
 #ifdef CONFIG_MMU
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f1e0569b4539..639d5e7014c1 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -878,7 +878,7 @@ static inline bool page_copy_sane(struct page *page, size_t offset, size_t n)
 	head = compound_head(page);
 	v += (page - head) << PAGE_SHIFT;
 
-	if (likely(n <= v && v <= (PAGE_SIZE << compound_order(head))))
+	if (likely(n <= v && v <= (page_size(head))))
 		return true;
 	WARN_ON(1);
 	return false;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2277b82902d8..a929a3b9444d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -321,8 +321,7 @@ void kasan_poison_slab(struct page *page)
 
 	for (i = 0; i < (1 << compound_order(page)); i++)
 		page_kasan_tag_reset(page + i);
-	kasan_poison_shadow(page_address(page),
-			PAGE_SIZE << compound_order(page),
+	kasan_poison_shadow(page_address(page), page_size(page),
 			KASAN_KMALLOC_REDZONE);
 }
 
@@ -518,7 +517,7 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
 	page = virt_to_page(ptr);
 	redzone_start = round_up((unsigned long)(ptr + size),
 				KASAN_SHADOW_SCALE_SIZE);
-	redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+	redzone_end = (unsigned long)ptr + page_size(page);
 
 	kasan_unpoison_shadow(ptr, size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
@@ -554,8 +553,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
-		kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page),
-				KASAN_FREE_PAGE);
+		kasan_poison_shadow(ptr, page_size(page), KASAN_FREE_PAGE);
 	} else {
 		__kasan_slab_free(page->slab_cache, ptr, ip, false);
 	}
diff --git a/mm/nommu.c b/mm/nommu.c
index fed1b6e9c89b..99b7ec318824 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -108,7 +108,7 @@ unsigned int kobjsize(const void *objp)
 	 * The ksize() function is only guaranteed to work for pointers
 	 * returned by kmalloc(). So handle arbitrary pointers here.
 	 */
-	return PAGE_SIZE << compound_order(page);
+	return page_size(page);
 }
 
 /**
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 11df03e71288..eff4b4520c8d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -153,8 +153,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 
 	if (unlikely(PageHuge(pvmw->page))) {
 		/* when pud is not present, pte will be NULL */
-		pvmw->pte = huge_pte_offset(mm, pvmw->address,
-					    PAGE_SIZE << compound_order(page));
+		pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
 		if (!pvmw->pte)
 			return false;
 
diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2ae6b0d..09ce05c481fc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -898,8 +898,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				min(vma->vm_end, address +
-				    (PAGE_SIZE << compound_order(page))));
+				min(vma->vm_end, address + page_size(page)));
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1374,8 +1373,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
 				address,
-				min(vma->vm_end, address +
-				    (PAGE_SIZE << compound_order(page))));
+				min(vma->vm_end, address + page_size(page)));
 	if (PageHuge(page)) {
 		/*
 		 * If sharing is possible, start and end will be adjusted
diff --git a/mm/slob.c b/mm/slob.c
index 7f421d0ca9ab..cf377beab962 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -539,7 +539,7 @@ size_t __ksize(const void *block)
 
 	sp = virt_to_page(block);
 	if (unlikely(!PageSlab(sp)))
-		return PAGE_SIZE << compound_order(sp);
+		return page_size(sp);
 
 	align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
 	m = (unsigned int *)(block - align);
diff --git a/mm/slub.c b/mm/slub.c
index e6c030e47364..1e8e20a99660 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -829,7 +829,7 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 		return 1;
 
 	start = page_address(page);
-	length = PAGE_SIZE << compound_order(page);
+	length = page_size(page);
 	end = start + length;
 	remainder = length % s->size;
 	if (!remainder)
@@ -1074,13 +1074,14 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
 	init_tracking(s, object);
 }
 
-static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+static
+void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr)
 {
 	if (!(s->flags & SLAB_POISON))
 		return;
 
 	metadata_access_enable();
-	memset(addr, POISON_INUSE, PAGE_SIZE << order);
+	memset(addr, POISON_INUSE, page_size(page));
 	metadata_access_disable();
 }
 
@@ -1340,8 +1341,8 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s,
 			struct page *page, void *object) {}
-static inline void setup_page_debug(struct kmem_cache *s,
-			void *addr, int order) {}
+static inline
+void setup_page_debug(struct kmem_cache *s, struct page *page, void *addr) {}
 
 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1635,7 +1636,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	struct kmem_cache_order_objects oo = s->oo;
 	gfp_t alloc_gfp;
 	void *start, *p, *next;
-	int idx, order;
+	int idx;
 	bool shuffle;
 
 	flags &= gfp_allowed_mask;
@@ -1669,7 +1670,6 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	page->objects = oo_objects(oo);
-	order = compound_order(page);
 	page->slab_cache = s;
 	__SetPageSlab(page);
 	if (page_is_pfmemalloc(page))
@@ -1679,7 +1679,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	start = page_address(page);
 
-	setup_page_debug(s, start, order);
+	setup_page_debug(s, page, start);
 
 	shuffle = shuffle_freelist(s, page);
 
@@ -3926,7 +3926,7 @@ size_t __ksize(const void *object)
 
 	if (unlikely(!PageSlab(page))) {
 		WARN_ON(!PageCompound(page));
-		return PAGE_SIZE << compound_order(page);
+		return page_size(page);
 	}
 
 	return slab_ksize(page->slab_cache);
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 59b57d708697..44bfb76fbad9 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -739,7 +739,7 @@ static int xsk_mmap(struct file *file, struct socket *sock,
 	/* Matches the smp_wmb() in xsk_init_queue */
 	smp_rmb();
 	qpg = virt_to_head_page(q->ring);
-	if (size > (PAGE_SIZE << compound_order(qpg)))
+	if (size > page_size(qpg))
 		return -EINVAL;
 
 	pfn = virt_to_phys(q->ring) >> PAGE_SHIFT;

From patchwork Sun Jul 21 10:46:11 2019
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 11051031
From: Matthew Wilcox <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH v2 2/3] mm: Introduce page_shift()
Date: Sun, 21 Jul 2019 03:46:11 -0700
Message-Id: <20190721104612.19120-3-willy@infradead.org>
In-Reply-To: <20190721104612.19120-1-willy@infradead.org>
References: <20190721104612.19120-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Replace PAGE_SHIFT + compound_order(page) with the new page_shift()
function.  Minor improvements in readability.
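As a sketch of the intended use (io_pages_per_page() is hypothetical, not
part of the patch), page_shift() lets callers compare a page against
another power-of-two granule without open-coding the addition:

    #include <linux/mm.h>

    /*
     * How many IOMMU pages of size (1UL << io_shift) fit in this
     * (possibly compound) page?  Returns 0 if the page is smaller
     * than one IOMMU page.
     */
    static unsigned long io_pages_per_page(struct page *page,
                                           unsigned int io_shift)
    {
            /* page_shift(page) == PAGE_SHIFT + compound_order(page) */
            if (page_shift(page) < io_shift)
                    return 0;
            return 1UL << (page_shift(page) - io_shift);
    }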
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
---
 arch/powerpc/mm/book3s64/iommu_api.c | 7 ++-----
 drivers/vfio/vfio_iommu_spapr_tce.c  | 2 +-
 include/linux/mm.h                   | 6 ++++++
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index b056cae3388b..56cc84520577 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -129,11 +129,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 		 * Allow to use larger than 64k IOMMU pages. Only do that
 		 * if we are backed by hugetlb.
 		 */
-		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page)) {
-			struct page *head = compound_head(page);
-
-			pageshift = compound_order(head) + PAGE_SHIFT;
-		}
+		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page))
+			pageshift = page_shift(compound_head(page));
 		mem->pageshift = min(mem->pageshift, pageshift);
 		/*
 		 * We don't need struct page reference any more, switch
diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 8ce9ad21129f..1883fd2901b2 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -190,7 +190,7 @@ static bool tce_page_is_contained(struct mm_struct *mm, unsigned long hpa,
 	 * a page we just found. Otherwise the hardware can get access to
 	 * a bigger memory chunk that it should.
 	 */
-	return (PAGE_SHIFT + compound_order(compound_head(page))) >= page_shift;
+	return page_shift(compound_head(page)) >= page_shift;
 }
 
 static inline bool tce_groups_attached(struct tce_container *container)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 899dfcf7c23d..64762559885f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -811,6 +811,12 @@ static inline unsigned long page_size(struct page *page)
 	return PAGE_SIZE << compound_order(page);
 }
 
+/* Returns the number of bits needed for the number of bytes in a page */
+static inline unsigned int page_shift(struct page *page)
+{
+	return PAGE_SHIFT + compound_order(page);
+}
+
 void free_compound_page(struct page *page);
 
 #ifdef CONFIG_MMU

From patchwork Sun Jul 21 10:46:12 2019
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 11051035
From: Matthew Wilcox <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org
Cc: Matthew Wilcox <willy@infradead.org>
Subject: [PATCH v2 3/3] mm: Introduce compound_nr()
Date: Sun, 21 Jul 2019 03:46:12 -0700
Message-Id: <20190721104612.19120-4-willy@infradead.org>
In-Reply-To: <20190721104612.19120-1-willy@infradead.org>
References: <20190721104612.19120-1-willy@infradead.org>

From: Matthew Wilcox (Oracle) <willy@infradead.org>

Replace 1 << compound_order(page) with compound_nr(page).  Minor
improvements in readability.
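A hypothetical walker (not from this patch) shows the intended use:
iterate over each base page of a compound page without the
1 << compound_order() idiom:

    #include <linux/highmem.h>
    #include <linux/mm.h>

    /* Clear every base page backing a (possibly compound) head page. */
    static void clear_all_subpages(struct page *head)
    {
            unsigned long i;

            /* compound_nr(head) == 1UL << compound_order(head) */
            for (i = 0; i < compound_nr(head); i++)
                    clear_highpage(head + i);
    }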
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
---
 arch/arm/include/asm/xen/page-coherent.h   | 3 +--
 arch/arm/mm/flush.c                        | 4 ++--
 arch/arm64/include/asm/xen/page-coherent.h | 3 +--
 arch/powerpc/mm/hugetlbpage.c              | 2 +-
 fs/proc/task_mmu.c                         | 2 +-
 include/linux/mm.h                         | 6 ++++++
 mm/compaction.c                            | 2 +-
 mm/filemap.c                               | 2 +-
 mm/gup.c                                   | 2 +-
 mm/hugetlb_cgroup.c                        | 2 +-
 mm/kasan/common.c                          | 2 +-
 mm/memcontrol.c                            | 4 ++--
 mm/memory_hotplug.c                        | 4 ++--
 mm/migrate.c                               | 2 +-
 mm/page_alloc.c                            | 2 +-
 mm/rmap.c                                  | 3 +--
 mm/shmem.c                                 | 8 ++++----
 mm/swap_state.c                            | 2 +-
 mm/util.c                                  | 2 +-
 mm/vmscan.c                                | 4 ++--
 20 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/arch/arm/include/asm/xen/page-coherent.h b/arch/arm/include/asm/xen/page-coherent.h
index 2c403e7c782d..ea39cb724ffa 100644
--- a/arch/arm/include/asm/xen/page-coherent.h
+++ b/arch/arm/include/asm/xen/page-coherent.h
@@ -31,8 +31,7 @@ static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
 {
 	unsigned long page_pfn = page_to_xen_pfn(page);
 	unsigned long dev_pfn = XEN_PFN_DOWN(dev_addr);
-	unsigned long compound_pages =
-		(1<<compound_order(page)) * XEN_PFN_PER_PAGE;
+	unsigned long compound_pages = compound_nr(page) * XEN_PFN_PER_PAGE;
 	bool local = (page_pfn <= dev_pfn) &&
 		(dev_pfn - page_pfn < compound_pages);
diff --git a/include/linux/mm.h b/include/linux/mm.h
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -817,6 +817,12 @@ static inline unsigned int page_shift(struct page *page)
 	return PAGE_SHIFT + compound_order(page);
 }
 
+/* Returns the number of pages in this potentially compound page. */
+static inline unsigned long compound_nr(struct page *page)
+{
+	return 1UL << compound_order(page);
+}
+
 void free_compound_page(struct page *page);
 
 #ifdef CONFIG_MMU
diff --git a/mm/filemap.c b/mm/filemap.c
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -126,7 +126,7 @@ static void page_cache_delete(struct page *page,
 	/* hugetlb pages are represented by a single entry in the xarray */
 	if (!PageHuge(page)) {
 		xas_set_order(&xas, page->index, compound_order(page));
-		nr = 1U << compound_order(page);
+		nr = compound_nr(page);
 	}
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..84a36d80dd2e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1460,7 +1460,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 			 * gup may start from a tail page. Advance step by the left
 			 * part.
 			 */
-			step = (1 << compound_order(head)) - (pages[i] - head);
+			step = compound_nr(head) - (pages[i] - head);
 			/*
 			 * If we get a page from the CMA zone, since we are going to
 			 * be pinning these entries, we might as well move them out
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 68c2f2f3c05b..f1930fa0b445 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -139,7 +139,7 @@ static void hugetlb_cgroup_move_parent(int idx, struct hugetlb_cgroup *h_cg,
 	if (!page_hcg || page_hcg != h_cg)
 		goto out;
 
-	nr_pages = 1 << compound_order(page);
+	nr_pages = compound_nr(page);
 	if (!parent) {
 		parent = root_h_cgroup;
 		/* root has no limit */
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a929a3b9444d..895dc5e2b3d5 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -319,7 +319,7 @@ void kasan_poison_slab(struct page *page)
 {
 	unsigned long i;
 
-	for (i = 0; i < (1 << compound_order(page)); i++)
+	for (i = 0; i < compound_nr(page); i++)
 		page_kasan_tag_reset(page + i);
 	kasan_poison_shadow(page_address(page), page_size(page),
 			KASAN_KMALLOC_REDZONE);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cdbb7a84cb6e..b5c4c618d087 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6257,7 +6257,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		unsigned int nr_pages = 1;
 
 		if (PageTransHuge(page)) {
-			nr_pages <<= compound_order(page);
+			nr_pages = compound_nr(page);
 			ug->nr_huge += nr_pages;
 		}
 		if (PageAnon(page))
@@ -6269,7 +6269,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		}
 		ug->pgpgout++;
 	} else {
-		ug->nr_kmem += 1 << compound_order(page);
+		ug->nr_kmem += compound_nr(page);
 		__ClearPageKmemcg(page);
 	}
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2a9bbddb0e55..bb2ab9f58f8c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1311,7 +1311,7 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 			head = compound_head(page);
 			if (page_huge_active(head))
 				return pfn;
-			skip = (1 << compound_order(head)) - (page - head);
+			skip = compound_nr(head) - (page - head);
 			pfn += skip - 1;
 		}
 	return 0;
@@ -1349,7 +1349,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		if (PageHuge(page)) {
 			struct page *head = compound_head(page);
-			pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
+			pfn = page_to_pfn(head) + compound_nr(head) - 1;
 			isolate_huge_page(head, &source);
 			continue;
 		}
diff --git a/mm/shmem.c b/mm/shmem.c
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -609,7 +609,7 @@ static int shmem_add_to_page_cache(struct page *page,
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
 	unsigned long i = 0;
-	unsigned long nr = 1UL << compound_order(page);
+	unsigned long nr = compound_nr(page);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -1869,7 +1869,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	lru_cache_add_anon(page);
 
 	spin_lock_irq(&info->lock);
-	info->alloced += 1 << compound_order(page);
+	info->alloced += compound_nr(page);
 	inode->i_blocks += BLOCKS_PER_PAGE << compound_order(page);
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
@@ -1910,7 +1910,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		struct page *head = compound_head(page);
 		int i;
 
-		for (i = 0; i < (1 << compound_order(head)); i++) {
+		for (i = 0; i < compound_nr(head); i++) {
 			clear_highpage(head + i);
 			flush_dcache_page(head + i);
 		}
@@ -1937,7 +1937,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	 * Error recovery.
 	 */
 unacct:
-	shmem_inode_unacct_blocks(inode, 1 << compound_order(page));
+	shmem_inode_unacct_blocks(inode, compound_nr(page));
 
 	if (PageTransHuge(page)) {
 		unlock_page(page);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8368621a0fc7..f844af5f09ba 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -116,7 +116,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
-	unsigned long i, nr = 1UL << compound_order(page);
+	unsigned long i, nr = compound_nr(page);
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);
diff --git a/mm/util.c b/mm/util.c
index e6351a80f248..bab284d69c8c 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -521,7 +521,7 @@ bool page_mapped(struct page *page)
 		return true;
 	if (PageHuge(page))
 		return false;
-	for (i = 0; i < (1 << compound_order(page)); i++) {
+	for (i = 0; i < compound_nr(page); i++) {
 		if (atomic_read(&page[i]._mapcount) >= 0)
 			return true;
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 44df66a98f2a..bb69bd2d9c78 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1145,7 +1145,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 
 		VM_BUG_ON_PAGE(PageActive(page), page);
 
-		nr_pages = 1 << compound_order(page);
+		nr_pages = compound_nr(page);
 
 		/* Account the number of base pages even though THP */
 		sc->nr_scanned += nr_pages;
@@ -1701,7 +1701,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 
-		nr_pages = 1 << compound_order(page);
+		nr_pages = compound_nr(page);
 		total_scan += nr_pages;
 
 		if (page_zonenum(page) > sc->reclaim_idx) {
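Taken together (illustrative only; check_page_helpers() is hypothetical),
the three helpers introduced by this series are just different views of
compound_order(page), so the following identities hold for any page:

    #include <linux/mm.h>

    /*
     *	page_size(page)   == PAGE_SIZE << compound_order(page)
     *	page_shift(page)  == PAGE_SHIFT + compound_order(page)
     *	compound_nr(page) == 1UL << compound_order(page)
     */
    static void check_page_helpers(struct page *page)
    {
            VM_BUG_ON_PAGE(page_size(page) !=
                            compound_nr(page) * PAGE_SIZE, page);
            VM_BUG_ON_PAGE(page_size(page) !=
                            1UL << page_shift(page), page);
    }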