From patchwork Fri May 4 18:33:11 2018
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 10381357
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov", Christoph Lameter,
    Lai Jiangshan, Pekka Enberg, Vlastimil Babka, Dave Hansen, Jérôme Glisse
Subject: [PATCH v5 10/17] mm: Combine LRU and main union in struct page
Date: Fri, 4 May 2018 11:33:11 -0700
Message-Id: <20180504183318.14415-11-willy@infradead.org>
In-Reply-To: <20180504183318.14415-1-willy@infradead.org>
References: <20180504183318.14415-1-willy@infradead.org>

From: Matthew Wilcox

This gives us five words of space in a single union in struct page. The
compound_mapcount moves position (from offset 24 to offset 20) on 64-bit
systems, but that does not seem likely to cause any trouble.
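For readers who want to sanity-check the offset claim above, here is a small
userspace sketch (not part of the patch). It models atomic_t as a plain int,
stubs out the rest of the five-word union, and only reproduces the tail-page
struct from the new layout; it is an illustration, not the kernel definition.

/* Illustration only: stand-in for the new layout, checking where
 * compound_mapcount lands on a 64-bit build. */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct fake_page {
        unsigned long flags;                    /* offset 0 */
        union {                                 /* five-word union at offset 8 */
                struct {                        /* tail pages of compound page */
                        unsigned long compound_head;    /* bit zero is set */
                        unsigned char compound_dtor;
                        unsigned char compound_order;
                        int compound_mapcount;  /* atomic_t modelled as int */
                };
                unsigned long _words[5];        /* the five available words */
        };
};

int main(void)
{
        /* 8 (flags) + 8 (compound_head) + 1 + 1 + 2 bytes padding = 20 */
        printf("compound_mapcount is at offset %zu\n",
               offsetof(struct fake_page, compound_mapcount));
        assert(offsetof(struct fake_page, compound_mapcount) == 20);
        return 0;
}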
Signed-off-by: Matthew Wilcox
Acked-by: Vlastimil Babka
Acked-by: Kirill A. Shutemov
---
 include/linux/mm_types.h | 97 +++++++++++++++++++---------------------
 mm/page_alloc.c          |  2 +-
 2 files changed, 47 insertions(+), 52 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index b6a3948195d3..cf3bbee8c9a1 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -73,59 +73,19 @@ struct page {
         unsigned long flags;            /* Atomic flags, some possibly
                                          * updated asynchronously */
         /*
-         * WARNING: bit 0 of the first word encode PageTail(). That means
-         * the rest users of the storage space MUST NOT use the bit to
+         * Five words (20/40 bytes) are available in this union.
+         * WARNING: bit 0 of the first word is used for PageTail(). That
+         * means the other users of this union MUST NOT use the bit to
          * avoid collision and false-positive PageTail().
          */
-        union {
-                struct list_head lru;   /* Pageout list, eg. active_list
-                                         * protected by zone_lru_lock !
-                                         * Can be used as a generic list
-                                         * by the page owner.
-                                         */
-                struct dev_pagemap *pgmap; /* ZONE_DEVICE pages are never on an
-                                            * lru or handled by a slab
-                                            * allocator, this points to the
-                                            * hosting device page map.
-                                            */
-                struct {                /* slub per cpu partial pages */
-                        struct page *next;      /* Next partial slab */
-#ifdef CONFIG_64BIT
-                        int pages;      /* Nr of partial slabs left */
-                        int pobjects;   /* Approximate # of objects */
-#else
-                        short int pages;
-                        short int pobjects;
-#endif
-                };
-
-                struct rcu_head rcu_head;       /* Used by SLAB
-                                                 * when destroying via RCU
-                                                 */
-                /* Tail pages of compound page */
-                struct {
-                        unsigned long compound_head; /* If bit zero is set */
-
-                        /* First tail page only */
-                        unsigned char compound_dtor;
-                        unsigned char compound_order;
-                        /* two/six bytes available here */
-                };
-
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
-                struct {
-                        unsigned long __pad;    /* do not overlay pmd_huge_pte
-                                                 * with compound_head to avoid
-                                                 * possible bit 0 collision.
-                                                 */
-                        pgtable_t pmd_huge_pte; /* protected by page->ptl */
-                };
-#endif
-        };
-
-        /* Three words (12/24 bytes) are available in this union. */
         union {
                 struct {        /* Page cache and anonymous pages */
+                        /**
+                         * @lru: Pageout list, eg. active_list protected by
+                         * zone_lru_lock. Sometimes used as a generic list
+                         * by the page owner.
+                         */
+                        struct list_head lru;
                         /* See page-flags.h for PAGE_MAPPING_FLAGS */
                         struct address_space *mapping;
                         pgoff_t index;          /* Our offset within mapping. */
@@ -138,6 +98,19 @@ struct page {
                         unsigned long private;
                 };
                 struct {        /* slab, slob and slub */
+                        union {
+                                struct list_head slab_list;     /* uses lru */
+                                struct {        /* Partial pages */
+                                        struct page *next;
+#ifdef CONFIG_64BIT
+                                        int pages;      /* Nr of pages left */
+                                        int pobjects;   /* Approximate count */
+#else
+                                        short int pages;
+                                        short int pobjects;
+#endif
+                                };
+                        };
                         struct kmem_cache *slab_cache; /* not slob */
                         /* Double-word boundary */
                         void *freelist;         /* first free object */
@@ -151,9 +124,22 @@ struct page {
                                 };
                         };
                 };
-                atomic_t compound_mapcount;     /* first tail page */
-                struct list_head deferred_list; /* second tail page */
+                struct {        /* Tail pages of compound page */
+                        unsigned long compound_head;    /* Bit zero is set */
+
+                        /* First tail page only */
+                        unsigned char compound_dtor;
+                        unsigned char compound_order;
+                        atomic_t compound_mapcount;
+                };
+                struct {        /* Second tail page of compound page */
+                        unsigned long _compound_pad_1;  /* compound_head */
+                        unsigned long _compound_pad_2;
+                        struct list_head deferred_list;
+                };
                 struct {        /* Page table pages */
+                        unsigned long _pt_pad_1;        /* compound_head */
+                        pgtable_t pmd_huge_pte; /* protected by page->ptl */
                         unsigned long _pt_pad_2;        /* mapping */
                         unsigned long _pt_pad_3;
 #if ALLOC_SPLIT_PTLOCKS
@@ -162,6 +148,15 @@ struct page {
                         spinlock_t ptl;
 #endif
                 };
+
+                /** @rcu_head: You can use this to free a page by RCU. */
+                struct rcu_head rcu_head;
+
+                /**
+                 * @pgmap: For ZONE_DEVICE pages, this points to the hosting
+                 * device page map.
+                 */
+                struct dev_pagemap *pgmap;
         };
 
         union {         /* This union is 4 bytes in size. */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1a0149c4f672..787440218def 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -924,7 +924,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
         }
         switch (page - head_page) {
         case 1:
-                /* the first tail page: ->mapping is compound_mapcount() */
+                /* the first tail page: ->mapping may be compound_mapcount() */
                 if (unlikely(compound_mapcount(page))) {
                         bad_page(page, "nonzero compound_mapcount", 0);
                         goto out;
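
One property the slab part of the reorganisation relies on is that slab_list
shares storage with lru, since both sit at the start of their structs inside
the same union (hence the "uses lru" comment in the hunk above). The following
userspace sketch, with stand-in types rather than the kernel's, demonstrates
that aliasing; it is an illustration only and not part of the patch.

/* Stand-in types only; shows that slab_list and lru occupy the same storage
 * because both begin their respective structs within the big union. */
#include <assert.h>
#include <stddef.h>

struct stub_list_head { struct stub_list_head *next, *prev; };

struct stub_page {
        unsigned long flags;
        union {
                struct {                        /* page cache / anon */
                        struct stub_list_head lru;
                        void *mapping;
                        unsigned long index;
                        unsigned long private;
                };
                struct {                        /* slab, slob and slub */
                        union {
                                struct stub_list_head slab_list; /* uses lru */
                                struct {        /* partial pages */
                                        struct stub_page *next;
                                        int pages;
                                        int pobjects;
                                };
                        };
                        void *slab_cache;
                        void *freelist;
                };
        };
};

int main(void)
{
        /* Same offset: generic list code keeps working on slab pages. */
        assert(offsetof(struct stub_page, slab_list) ==
               offsetof(struct stub_page, lru));
        return 0;
}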