From patchwork Fri May  4 18:33:06 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10381361
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov",
	Christoph Lameter, Lai Jiangshan, Pekka Enberg, Vlastimil Babka,
	Dave Hansen, Jérôme Glisse
Subject: [PATCH v5 05/17] mm: Move 'private' union within struct page
Date: Fri,  4 May 2018 11:33:06 -0700
Message-Id: <20180504183318.14415-6-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180504183318.14415-1-willy@infradead.org>
References: <20180504183318.14415-1-willy@infradead.org>

From: Matthew Wilcox

By moving page->private to the fourth word of struct page, we can put
the SLUB counters in the same word as SLAB's s_mem and still do the
cmpxchg_double trick.  Now the SLUB counters no longer overlap with
the mapcount or refcount, so we can drop the call to
page_mapcount_reset() and simplify set_page_slub_counters() to a
single line.

Signed-off-by: Matthew Wilcox
Acked-by: Vlastimil Babka
Acked-by: Kirill A. Shutemov
---
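For readers new to the trick the commit message refers to:
cmpxchg_double compares and exchanges two adjacent machine words as a
single atomic unit, so 'freelist' and 'counters' must occupy
consecutive words at an address aligned to twice the word size (the
_struct_page_alignment in the diff below).  What follows is an
editor's sketch of the idea in userspace C, not kernel code:
fake_page and cmpxchg_double_words are invented names, and it assumes
an x86-64 compiler with 16-byte compare-and-swap support (build with
cc -std=gnu11 -mcx16, or link against libatomic).

#include <stdbool.h>
#include <stdio.h>

/* Two adjacent pointer-sized words, aligned to their combined size. */
struct fake_page {
	union {
		struct {
			void *freelist;		/* first word */
			unsigned long counters;	/* must be the very next word */
		};
		__int128 pair;			/* both words viewed as one unit */
	};
} __attribute__((aligned(2 * sizeof(unsigned long))));

static bool cmpxchg_double_words(struct fake_page *page,
		void *fl_old, unsigned long c_old,
		void *fl_new, unsigned long c_new)
{
	struct fake_page old = { .freelist = fl_old, .counters = c_old };
	struct fake_page new = { .freelist = fl_new, .counters = c_new };

	/* Compare and exchange both words in one atomic operation. */
	return __atomic_compare_exchange_n(&page->pair, &old.pair, new.pair,
			false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

int main(void)
{
	struct fake_page page = { .freelist = NULL, .counters = 1 };

	/* Succeeds only if both words still hold their expected values. */
	if (cmpxchg_double_words(&page, NULL, 1, &page, 2))
		printf("swapped, counters = %lu\n", page.counters);
	return 0;
}

That adjacency requirement is the constraint this patch preserves:
moving the 'private' union into the fourth word lets 'counters' share
storage with SLAB's s_mem while staying in a position where the
double-word compare-exchange still covers it.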
 include/linux/mm_types.h | 56 ++++++++++++++++++----------------------
 mm/slub.c                | 20 ++------------
 2 files changed, 27 insertions(+), 49 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index e97a310a6abe..23378a789af4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -65,15 +65,9 @@ struct hmm;
  */
 #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
 #define _struct_page_alignment	__aligned(2 * sizeof(unsigned long))
-#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE)
-#define _slub_counter_t		unsigned long
 #else
-#define _slub_counter_t		unsigned int
-#endif
-#else /* !CONFIG_HAVE_ALIGNED_STRUCT_PAGE */
 #define _struct_page_alignment
-#define _slub_counter_t		unsigned int
-#endif /* !CONFIG_HAVE_ALIGNED_STRUCT_PAGE */
+#endif
 
 struct page {
 	/* First double word block */
@@ -95,6 +89,30 @@ struct page {
 		/* page_deferred_list().prev	-- second tail page */
 	};
 
+	union {
+		/*
+		 * Mapping-private opaque data:
+		 * Usually used for buffer_heads if PagePrivate
+		 * Used for swp_entry_t if PageSwapCache
+		 * Indicates order in the buddy system if PageBuddy
+		 */
+		unsigned long private;
+#if USE_SPLIT_PTE_PTLOCKS
+#if ALLOC_SPLIT_PTLOCKS
+		spinlock_t *ptl;
+#else
+		spinlock_t ptl;
+#endif
+#endif
+		void *s_mem;			/* slab first object */
+		unsigned long counters;		/* SLUB */
+		struct {			/* SLUB */
+			unsigned inuse:16;
+			unsigned objects:15;
+			unsigned frozen:1;
+		};
+	};
+
 	union {
 		/*
 		 * If the page is neither PageSlab nor mappable to userspace,
@@ -104,13 +122,7 @@ struct page {
 		 */
 		unsigned int page_type;
 
-		_slub_counter_t counters;
 		unsigned int active;		/* SLAB */
-		struct {			/* SLUB */
-			unsigned inuse:16;
-			unsigned objects:15;
-			unsigned frozen:1;
-		};
 		int units;			/* SLOB */
 
 		struct {			/* Page cache */
@@ -179,24 +191,6 @@ struct page {
 #endif
 	};
 
-	union {
-		/*
-		 * Mapping-private opaque data:
-		 * Usually used for buffer_heads if PagePrivate
-		 * Used for swp_entry_t if PageSwapCache
-		 * Indicates order in the buddy system if PageBuddy
-		 */
-		unsigned long private;
-#if USE_SPLIT_PTE_PTLOCKS
-#if ALLOC_SPLIT_PTLOCKS
-		spinlock_t *ptl;
-#else
-		spinlock_t ptl;
-#endif
-#endif
-		void *s_mem;			/* slab first object */
-	};
-
 #ifdef CONFIG_MEMCG
 	struct mem_cgroup *mem_cgroup;
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index 7fc13c46e975..05ca612a5fe6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -356,21 +356,6 @@ static __always_inline void slab_unlock(struct page *page)
 	__bit_spin_unlock(PG_locked, &page->flags);
 }
 
-static inline void set_page_slub_counters(struct page *page, unsigned long counters_new)
-{
-	struct page tmp;
-	tmp.counters = counters_new;
-	/*
-	 * page->counters can cover frozen/inuse/objects as well
-	 * as page->_refcount.  If we assign to ->counters directly
-	 * we run the risk of losing updates to page->_refcount, so
-	 * be careful and only assign to the fields we need.
-	 */
-	page->frozen  = tmp.frozen;
-	page->inuse   = tmp.inuse;
-	page->objects = tmp.objects;
-}
-
 /* Interrupts must be disabled (for the fallback code to work right) */
 static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		void *freelist_old, unsigned long counters_old,
@@ -392,7 +377,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page
 	if (page->freelist == freelist_old &&
 					page->counters == counters_old) {
 		page->freelist = freelist_new;
-		set_page_slub_counters(page, counters_new);
+		page->counters = counters_new;
 		slab_unlock(page);
 		return true;
 	}
@@ -431,7 +416,7 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 	if (page->freelist == freelist_old &&
 					page->counters == counters_old) {
 		page->freelist = freelist_new;
-		set_page_slub_counters(page, counters_new);
+		page->counters = counters_new;
 		slab_unlock(page);
 		local_irq_restore(flags);
 		return true;
@@ -1689,7 +1674,6 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	__ClearPageSlabPfmemalloc(page);
 	__ClearPageSlab(page);
 
-	page_mapcount_reset(page);
 	page->mapping = NULL;
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
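A note on why set_page_slub_counters() existed at all: in the old
layout, 'counters' was a full word unioned over both the SLUB
bitfields (which aliased the mapcount) and the refcount, as the
removed comment above explains.  Here is an editor's cut-down model
of that overlap, simplified for illustration (old_page and
old_set_counters are invented names, and plain ints stand in for the
kernel's atomic_t):

/* Hypothetical, simplified model of the pre-patch overlap. */
struct old_page {
	union {
		unsigned long counters;		/* spans both words below */
		struct {
			union {
				int _mapcount;	/* aliased by the SLUB bits */
				struct {	/* SLUB */
					unsigned inuse:16;
					unsigned objects:15;
					unsigned frozen:1;
				};
			};
			int _refcount;		/* atomic_t in the real kernel */
		};
	};
};

void old_set_counters(struct old_page *page, unsigned long counters_new)
{
	struct old_page tmp = { .counters = counters_new };

	/*
	 * Copy only the SLUB fields: a plain "page->counters =
	 * counters_new" would also store over _refcount, losing any
	 * concurrent reference count update.
	 */
	page->frozen  = tmp.frozen;
	page->inuse   = tmp.inuse;
	page->objects = tmp.objects;
}

After this patch, 'counters' has a word of its own, so the direct
store "page->counters = counters_new" is safe; and because the SLUB
bitfields no longer alias the mapcount, __free_slab() no longer needs
the page_mapcount_reset() call.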