From patchwork Fri May 4 18:33:16 2018
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov",
    Christoph Lameter, Lai Jiangshan, Pekka Enberg, Vlastimil Babka,
    Dave Hansen, Jérôme Glisse
Subject: [PATCH v5 15/17] slub: Remove kmem_cache->reserved
Date: Fri, 4 May 2018 11:33:16 -0700
Message-Id: <20180504183318.14415-16-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180504183318.14415-1-willy@infradead.org>
References: <20180504183318.14415-1-willy@infradead.org>

From: Matthew Wilcox

The reserved field was only used for embedding an rcu_head in the data
structure.  With the previous commit, we no longer need it.  That lets
us remove the 'reserved' argument from a lot of functions.
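To see the arithmetic this simplifies, here is a minimal userspace
sketch (not part of the patch) comparing the old and new
objects-per-slab calculations; the PAGE_SIZE definition and the example
object size and reserved count below are assumptions chosen purely for
illustration:

	/*
	 * Illustrative sketch only -- not part of the patch.  Compares
	 * the old objects-per-slab calculation (which subtracted the
	 * reserved bytes) with the new one.  PAGE_SIZE and the example
	 * values are assumptions.
	 */
	#include <stdio.h>

	#define PAGE_SIZE 4096u

	/* Old form: reserved bytes at the slab's end reduce capacity. */
	static unsigned int order_objects_old(unsigned int order,
			unsigned int size, unsigned int reserved)
	{
		return ((PAGE_SIZE << order) - reserved) / size;
	}

	/* New form: the whole compound page holds objects. */
	static unsigned int order_objects_new(unsigned int order,
			unsigned int size)
	{
		return (PAGE_SIZE << order) / size;
	}

	int main(void)
	{
		/* order-1 (8KiB) slab of 128-byte objects, 16 reserved */
		printf("old: %u objects\n", order_objects_old(1, 128, 16));
		printf("new: %u objects\n", order_objects_new(1, 128));
		return 0;
	}

This prints 63 then 64.  Since reserved is always zero after the
previous commit, the two forms now compute the same value for every
cache, so dropping the subtraction (and the extra argument threaded
through order_objects(), oo_make(), slab_order() and calculate_order())
loses nothing.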
Signed-off-by: Matthew Wilcox
Acked-by: Christoph Lameter
---
 include/linux/slub_def.h |  1 -
 mm/slub.c                | 41 ++++++++++++++++++++---------------------
 2 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3773e26c08c1..09fa2c6f0e68 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -101,7 +101,6 @@ struct kmem_cache {
 	void (*ctor)(void *);
 	unsigned int inuse;		/* Offset to metadata */
 	unsigned int align;		/* Alignment */
-	unsigned int reserved;		/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
diff --git a/mm/slub.c b/mm/slub.c
index 8e2407f69855..33a811168fa9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -316,16 +316,16 @@ static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 	return (p - addr) / s->size;
 }
 
-static inline unsigned int order_objects(unsigned int order, unsigned int size, unsigned int reserved)
+static inline unsigned int order_objects(unsigned int order, unsigned int size)
 {
-	return (((unsigned int)PAGE_SIZE << order) - reserved) / size;
+	return ((unsigned int)PAGE_SIZE << order) / size;
 }
 
 static inline struct kmem_cache_order_objects oo_make(unsigned int order,
-		unsigned int size, unsigned int reserved)
+		unsigned int size)
 {
 	struct kmem_cache_order_objects x = {
-		(order << OO_SHIFT) + order_objects(order, size, reserved)
+		(order << OO_SHIFT) + order_objects(order, size)
 	};
 
 	return x;
@@ -832,7 +832,7 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 		return 1;
 
 	start = page_address(page);
-	length = (PAGE_SIZE << compound_order(page)) - s->reserved;
+	length = PAGE_SIZE << compound_order(page);
 	end = start + length;
 	remainder = length % s->size;
 	if (!remainder)
@@ -921,7 +921,7 @@ static int check_slab(struct kmem_cache *s, struct page *page)
 		return 0;
 	}
 
-	maxobj = order_objects(compound_order(page), s->size, s->reserved);
+	maxobj = order_objects(compound_order(page), s->size);
 	if (page->objects > maxobj) {
 		slab_err(s, page, "objects %u > max %u",
 			page->objects, maxobj);
@@ -971,7 +971,7 @@ static int on_freelist(struct kmem_cache *s, struct page *page, void *search)
 		nr++;
 	}
 
-	max_objects = order_objects(compound_order(page), s->size, s->reserved);
+	max_objects = order_objects(compound_order(page), s->size);
 	if (max_objects > MAX_OBJS_PER_PAGE)
 		max_objects = MAX_OBJS_PER_PAGE;
 
@@ -3188,21 +3188,21 @@ static unsigned int slub_min_objects;
  */
 static inline unsigned int slab_order(unsigned int size,
 		unsigned int min_objects, unsigned int max_order,
-		unsigned int fract_leftover, unsigned int reserved)
+		unsigned int fract_leftover)
 {
 	unsigned int min_order = slub_min_order;
 	unsigned int order;
 
-	if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
+	if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
 		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
 
-	for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
+	for (order = max(min_order, (unsigned int)get_order(min_objects * size));
 			order <= max_order; order++) {
 
 		unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
 		unsigned int rem;
 
-		rem = (slab_size - reserved) % size;
+		rem = slab_size % size;
 
 		if (rem <= slab_size / fract_leftover)
 			break;
@@ -3211,7 +3211,7 @@ static inline unsigned int slab_order(unsigned int size,
 	return order;
 }
 
-static inline int calculate_order(unsigned int size, unsigned int reserved)
+static inline int calculate_order(unsigned int size)
 {
 	unsigned int order;
 	unsigned int min_objects;
@@ -3228,7 +3228,7 @@ static inline int calculate_order(unsigned int size, unsigned int reserved)
 	min_objects = slub_min_objects;
 	if (!min_objects)
 		min_objects = 4 * (fls(nr_cpu_ids) + 1);
-	max_objects = order_objects(slub_max_order, size, reserved);
+	max_objects = order_objects(slub_max_order, size);
 	min_objects = min(min_objects, max_objects);
 
 	while (min_objects > 1) {
@@ -3237,7 +3237,7 @@ static inline int calculate_order(unsigned int size, unsigned int reserved)
 		fraction = 16;
 		while (fraction >= 4) {
 			order = slab_order(size, min_objects,
-					slub_max_order, fraction, reserved);
+					slub_max_order, fraction);
 			if (order <= slub_max_order)
 				return order;
 			fraction /= 2;
@@ -3249,14 +3249,14 @@ static inline int calculate_order(unsigned int size, unsigned int reserved)
 	 * We were unable to place multiple objects in a slab. Now
 	 * lets see if we can place a single object there.
 	 */
-	order = slab_order(size, 1, slub_max_order, 1, reserved);
+	order = slab_order(size, 1, slub_max_order, 1);
 	if (order <= slub_max_order)
 		return order;
 
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
-	order = slab_order(size, 1, MAX_ORDER, 1, reserved);
+	order = slab_order(size, 1, MAX_ORDER, 1);
 	if (order < MAX_ORDER)
 		return order;
 	return -ENOSYS;
@@ -3524,7 +3524,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	if (forced_order >= 0)
 		order = forced_order;
 	else
-		order = calculate_order(size, s->reserved);
+		order = calculate_order(size);
 
 	if ((int)order < 0)
 		return 0;
@@ -3542,8 +3542,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	/*
 	 * Determine the number of objects per slab
 	 */
-	s->oo = oo_make(order, size, s->reserved);
-	s->min = oo_make(get_order(size), size, s->reserved);
+	s->oo = oo_make(order, size);
+	s->min = oo_make(get_order(size), size);
 	if (oo_objects(s->oo) > oo_objects(s->max))
 		s->max = s->oo;
 
@@ -3553,7 +3553,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
-	s->reserved = 0;
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
 	s->random = get_random_long();
 #endif
@@ -5097,7 +5096,7 @@ SLAB_ATTR_RO(destroy_by_rcu);
 
 static ssize_t reserved_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%u\n", s->reserved);
+	return sprintf(buf, "0\n");
 }
 SLAB_ATTR_RO(reserved);