From patchwork Fri May  4 18:33:10 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 10381367
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox,
	Andrew Morton,
	"Kirill A. Shutemov",
	Christoph Lameter,
	Lai Jiangshan,
	Pekka Enberg,
	Vlastimil Babka,
	Dave Hansen,
	Jérôme Glisse
Subject: [PATCH v5 09/17] mm: Move lru union within struct page
Date: Fri, 4 May 2018 11:33:10 -0700
Message-Id: <20180504183318.14415-10-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180504183318.14415-1-willy@infradead.org>
References: <20180504183318.14415-1-willy@infradead.org>

From: Matthew Wilcox

Since the LRU is two words, this does not affect the double-word
alignment of SLUB's freelist.

Signed-off-by: Matthew Wilcox
Acked-by: Vlastimil Babka
Acked-by: Kirill A. Shutemov
---
 include/linux/mm_types.h | 102 +++++++++++++++++++--------------------
 mm/slub.c                |   8 +--
 2 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 629a7b568ed7..b6a3948195d3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -72,6 +72,57 @@ struct hmm;
 struct page {
 	unsigned long flags;		/* Atomic flags, some possibly
 					 * updated asynchronously */
+	/*
+	 * WARNING: bit 0 of the first word encode PageTail(). That means
+	 * the rest users of the storage space MUST NOT use the bit to
+	 * avoid collision and false-positive PageTail().
+	 */
+	union {
+		struct list_head lru;	/* Pageout list, eg. active_list
+					 * protected by zone_lru_lock !
+					 * Can be used as a generic list
+					 * by the page owner.
+					 */
+		struct dev_pagemap *pgmap; /* ZONE_DEVICE pages are never on an
+					 * lru or handled by a slab
+					 * allocator, this points to the
+					 * hosting device page map.
+					 */
+		struct {		/* slub per cpu partial pages */
+			struct page *next;	/* Next partial slab */
+#ifdef CONFIG_64BIT
+			int pages;	/* Nr of partial slabs left */
+			int pobjects;	/* Approximate # of objects */
+#else
+			short int pages;
+			short int pobjects;
+#endif
+		};
+
+		struct rcu_head rcu_head;	/* Used by SLAB
+						 * when destroying via RCU
+						 */
+		/* Tail pages of compound page */
+		struct {
+			unsigned long compound_head; /* If bit zero is set */
+
+			/* First tail page only */
+			unsigned char compound_dtor;
+			unsigned char compound_order;
+			/* two/six bytes available here */
+		};
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
+		struct {
+			unsigned long __pad;	/* do not overlay pmd_huge_pte
+						 * with compound_head to avoid
+						 * possible bit 0 collision.
+						 */
+			pgtable_t pmd_huge_pte; /* protected by page->ptl */
+		};
+#endif
+	};
+
 	/* Three words (12/24 bytes) are available in this union. */
 	union {
 		struct {	/* Page cache and anonymous pages */
@@ -135,57 +186,6 @@ struct page {
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
 	atomic_t _refcount;
 
-	/*
-	 * WARNING: bit 0 of the first word encode PageTail(). That means
-	 * the rest users of the storage space MUST NOT use the bit to
-	 * avoid collision and false-positive PageTail().
-	 */
-	union {
-		struct list_head lru;	/* Pageout list, eg. active_list
-					 * protected by zone_lru_lock !
-					 * Can be used as a generic list
-					 * by the page owner.
-					 */
-		struct dev_pagemap *pgmap; /* ZONE_DEVICE pages are never on an
-					 * lru or handled by a slab
-					 * allocator, this points to the
-					 * hosting device page map.
-					 */
-		struct {		/* slub per cpu partial pages */
-			struct page *next;	/* Next partial slab */
-#ifdef CONFIG_64BIT
-			int pages;	/* Nr of partial slabs left */
-			int pobjects;	/* Approximate # of objects */
-#else
-			short int pages;
-			short int pobjects;
-#endif
-		};
-
-		struct rcu_head rcu_head;	/* Used by SLAB
-						 * when destroying via RCU
-						 */
-		/* Tail pages of compound page */
-		struct {
-			unsigned long compound_head; /* If bit zero is set */
-
-			/* First tail page only */
-			unsigned char compound_dtor;
-			unsigned char compound_order;
-			/* two/six bytes available here */
-		};
-
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
-		struct {
-			unsigned long __pad;	/* do not overlay pmd_huge_pte
-						 * with compound_head to avoid
-						 * possible bit 0 collision.
-						 */
-			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-		};
-#endif
-	};
-
 #ifdef CONFIG_MEMCG
 	struct mem_cgroup *mem_cgroup;
 #endif

diff --git a/mm/slub.c b/mm/slub.c
index 05ca612a5fe6..57a20f995220 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -52,11 +52,11 @@
  * and to synchronize major metadata changes to slab cache structures.
  *
  * The slab_lock is only used for debugging and on arches that do not
- * have the ability to do a cmpxchg_double. It only protects the second
- * double word in the page struct. Meaning
+ * have the ability to do a cmpxchg_double. It only protects:
  *	A. page->freelist	-> List of object free in a page
- *	B. page->counters	-> Counters of objects
- *	C. page->frozen		-> frozen state
+ *	B. page->inuse		-> Number of objects in use
+ *	C. page->objects	-> Number of objects in page
+ *	D. page->frozen		-> frozen state
  *
  * If a slab is frozen then it is exempt from list management. It is not
  * on any list. The processor that froze the slab is the one who can