From: Andrey Ryabinin <a.ryabinin@samsung.com>
To: linux-kernel@vger.kernel.org
Subject: [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub
Date: Wed, 09 Jul 2014 15:30:04 +0400
Message-id: <1404905415-9046-11-git-send-email-a.ryabinin@samsung.com>
In-reply-to: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
References: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
Cc: Michal Marek, Christoph Lameter, x86@kernel.org, Russell King,
 Andrew Morton, linux-kbuild@vger.kernel.org, Andrey Ryabinin,
 Joonsoo Kim, David Rientjes, linux-mm@kvack.org, Pekka Enberg,
 Konstantin Serebryany, Yuri Gribov, Dmitry Vyukov, Sasha Levin,
 Andrey Konovalov, Thomas Gleixner, Alexey Preobrazhensky, Ingo Molnar,
 Konstantin Khlebnikov, linux-arm-kernel@lists.infradead.org

This patch shares virt_to_cache() between slab and slub, and it is now
used in cache_from_obj(). Later virt_to_cache() will also be used by
the kernel address sanitizer.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slab.c |  6 ------
 mm/slab.h | 10 +++++++---
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index e7763db..fa4f840 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -433,12 +433,6 @@ static inline void set_obj_status(struct page *page, int idx, int val) {}
 static int slab_max_order = SLAB_MAX_ORDER_LO;
 static bool slab_max_order_set __initdata;
 
-static inline struct kmem_cache *virt_to_cache(const void *obj)
-{
-	struct page *page = virt_to_head_page(obj);
-	return page->slab_cache;
-}
-
 static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 				 unsigned int idx)
 {
diff --git a/mm/slab.h b/mm/slab.h
index 84c160a..1257ade 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,10 +260,15 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
 }
 #endif
 
+static inline struct kmem_cache *virt_to_cache(const void *obj)
+{
+	struct page *page = virt_to_head_page(obj);
+	return page->slab_cache;
+}
+
 static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 {
 	struct kmem_cache *cachep;
-	struct page *page;
 
 	/*
 	 * When kmemcg is not being used, both assignments should return the
@@ -275,8 +280,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	if (!memcg_kmem_enabled() && !unlikely(s->flags & SLAB_DEBUG_FREE))
 		return s;
 
-	page = virt_to_head_page(x);
-	cachep = page->slab_cache;
+	cachep = virt_to_cache(x);
 	if (slab_equal_or_root(cachep, s))
 		return cachep;
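
[Editor's illustrative note, not part of the patch]

For readers skimming the diff, the helper moved into mm/slab.h resolves an
arbitrary object pointer back to the kmem_cache that owns it: objects may
sit on a tail page of a high-order slab, so the lookup goes through the
compound head page, which carries the slab_cache back-pointer. The sketch
below mirrors that helper and adds a purely hypothetical caller,
example_free_check(), to show how a consumer such as cache_from_obj() (or a
later sanitizer check) might use it once both SLAB and SLUB share it.

	/* Illustrative sketch only -- mirrors the helper added to mm/slab.h. */
	#include <linux/mm.h>		/* virt_to_head_page() */
	#include <linux/mm_types.h>	/* struct page::slab_cache */
	#include <linux/slab.h>		/* struct kmem_cache */
	#include <linux/bug.h>		/* WARN_ON_ONCE() */

	static inline struct kmem_cache *virt_to_cache(const void *obj)
	{
		/*
		 * The object may live on a tail page of a high-order slab,
		 * so find the compound head page first; it holds the
		 * back-pointer to the owning cache.
		 */
		struct page *page = virt_to_head_page(obj);

		return page->slab_cache;
	}

	/*
	 * Hypothetical caller (name invented for this note): sanity-check
	 * that an object really belongs to the cache it is being freed to.
	 */
	static void example_free_check(struct kmem_cache *s, void *x)
	{
		WARN_ON_ONCE(virt_to_cache(x) != s);
	}

Keeping the helper in mm/slab.h means both allocators resolve object ->
cache the same way, which is presumably what lets the later patches in
this series hook the same path for the kernel address sanitizer.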