From patchwork Thu Dec 6 12:24:36 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10715831
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
    Will Deacon, Christoph Lameter,
    Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier,
    Dave Martin, Ard Biesheuvel, Eric W. Biederman, Ingo Molnar,
    Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A. Shutemov,
    Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
    kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org,
    linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
    Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
    Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v13 18/25] mm: move obj_to_index to include/linux/slab_def.h
Date: Thu, 6 Dec 2018 13:24:36 +0100

While with SLUB we can preassign tags for caches with constructors and
store them in pointers in the freelist, SLAB doesn't allow that, since
its freelist is stored as an array of indexes, so there are no pointers
to store the tags in. Instead, we compute the tag twice: once when a
slab is created, before calling the constructor, and again each time an
object is allocated with kmalloc. The tag is computed simply by taking
the lowest byte of the object's index within the slab. However, in
kasan_kmalloc we only have access to the object's pointer, so we need a
way to find out which index the object corresponds to.

This patch moves obj_to_index from mm/slab.c to
include/linux/slab_def.h so that KASAN can reuse it.

Acked-by: Christoph Lameter
Reviewed-by: Andrey Ryabinin
Reviewed-by: Dmitry Vyukov
Signed-off-by: Andrey Konovalov
---
 include/linux/slab_def.h | 13 +++++++++++++
 mm/slab.c                | 13 -------------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 3485c58cfd1c..9a5eafb7145b 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -104,4 +104,17 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 	return object;
 }
 
+/*
+ * We want to avoid an expensive divide : (offset / cache->size)
+ * Using the fact that size is a constant for a particular cache,
+ * we can replace (offset / cache->size) by
+ * reciprocal_divide(offset, cache->reciprocal_buffer_size)
+ */
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct page *page, void *obj)
+{
+	u32 offset = (obj - page->s_mem);
+	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
+}
+
 #endif /* _LINUX_SLAB_DEF_H */
diff --git a/mm/slab.c b/mm/slab.c
index 27859fb39889..d2f827316dfc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -406,19 +406,6 @@ static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 	return page->s_mem + cache->size * idx;
 }
 
-/*
- * We want to avoid an expensive divide : (offset / cache->size)
- * Using the fact that size is a constant for a particular cache,
- * we can replace (offset / cache->size) by
- * reciprocal_divide(offset, cache->reciprocal_buffer_size)
- */
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
-{
-	u32 offset = (obj - page->s_mem);
-	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
-}
-
 #define BOOT_CPUCACHE_ENTRIES 1
 /* internal cache of cache description objs */
 static struct kmem_cache kmem_cache_boot = {
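
For illustration, here is a minimal standalone sketch of the tag scheme
the commit message describes: derive the object's index from its offset
within the slab (the job obj_to_index does in the kernel) and take the
lowest byte as the tag. The types below (fake_cache, fake_page) are
hypothetical stand-ins for struct kmem_cache and struct page, not
kernel API.

#include <stdint.h>

/* Hypothetical stand-ins for struct kmem_cache and struct page. */
struct fake_cache { uint32_t size; };	/* object size in this cache */
struct fake_page { char *s_mem; };	/* first object in the slab */

/*
 * The same tag comes out whether this runs at slab creation (before
 * the constructor is called) or later in kmalloc, which is the
 * property the commit message relies on.
 */
static uint8_t sketch_tag(const struct fake_cache *cache,
			  const struct fake_page *page, void *obj)
{
	uint32_t offset = (uint32_t)((char *)obj - page->s_mem);
	uint32_t index = offset / cache->size;

	return (uint8_t)index;	/* lowest byte of the object's index */
}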
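
The comment block being moved refers to reciprocal_divide() from
<linux/reciprocal_div.h>, which replaces the division by the constant
cache->size with a multiply and a shift, using a reciprocal precomputed
once at cache creation. Below is a self-contained sketch of the idea,
assuming a ceil(2^32 / d) reciprocal (close to the older kernel
formulation, not the current struct reciprocal_value code); it is exact
for the small, size-aligned offsets slab feeds it, though not for
arbitrary 32-bit dividends.

#include <assert.h>
#include <stdint.h>

/* Precompute ceil(2^32 / d) once for a constant divisor d. */
static uint64_t reciprocal_of(uint32_t d)
{
	return ((1ULL << 32) + d - 1) / d;
}

/* Dividing by d is now one 64-bit multiply and one shift. */
static uint32_t div_by_reciprocal(uint32_t a, uint64_t recip)
{
	return (uint32_t)(((uint64_t)a * recip) >> 32);
}

int main(void)
{
	uint32_t size = 192;			/* hypothetical object size */
	uint64_t recip = reciprocal_of(size);	/* done once per cache */
	uint32_t off;

	/* Object offsets within a slab are multiples of the size. */
	for (off = 0; off < (1u << 20); off += size)
		assert(div_by_reciprocal(off, recip) == off / size);
	return 0;
}

Avoiding the hardware divide matters because obj_to_index sits on the
allocation path; that is why the cache stores reciprocal_buffer_size
alongside size.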