From patchwork Fri Sep 21 15:13:32 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10610343
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
    Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
    Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
    "Eric W. Biederman", Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
    Arnd Bergmann, "Kirill A. Shutemov", Greg Kroah-Hartman, Kate Stewart,
    Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org,
    linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
    Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
    Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v9 10/20] mm: move obj_to_index to include/linux/slab_def.h
Date: Fri, 21 Sep 2018 17:13:32 +0200
Message-Id: <9d62f917393456653c1d38c7173dc876cef03c93.1537542735.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.19.0.444.g18242da7ef-goog
X-Mailing-List: linux-kbuild@vger.kernel.org

While with SLUB we can preassign tags for caches with constructors and
store them in the pointers on the freelist, SLAB doesn't allow that,
since its freelist is stored as an array of indexes, so there are no
pointers in which to store the tags. Instead we compute the tag twice:
once when a slab is created, before calling the constructor, and again
each time an object is allocated with kmalloc. The tag is computed
simply by taking the lowest byte of the index that corresponds to the
object. However, in kasan_kmalloc we only have access to the object's
pointer, so we need a way to find out which index this object
corresponds to.

This patch moves obj_to_index from slab.c to include/linux/slab_def.h
to be reused by KASAN.
Signed-off-by: Andrey Konovalov
---
 include/linux/slab_def.h | 13 +++++++++++++
 mm/slab.c                | 13 -------------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 3485c58cfd1c..9a5eafb7145b 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -104,4 +104,17 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 	return object;
 }
 
+/*
+ * We want to avoid an expensive divide : (offset / cache->size)
+ * Using the fact that size is a constant for a particular cache,
+ * we can replace (offset / cache->size) by
+ * reciprocal_divide(offset, cache->reciprocal_buffer_size)
+ */
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct page *page, void *obj)
+{
+	u32 offset = (obj - page->s_mem);
+	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
+}
+
 #endif	/* _LINUX_SLAB_DEF_H */
diff --git a/mm/slab.c b/mm/slab.c
index fe0ddf08aa2c..6d8de7630944 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -406,19 +406,6 @@ static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 	return page->s_mem + cache->size * idx;
 }
 
-/*
- * We want to avoid an expensive divide : (offset / cache->size)
- * Using the fact that size is a constant for a particular cache,
- * we can replace (offset / cache->size) by
- * reciprocal_divide(offset, cache->reciprocal_buffer_size)
- */
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
-{
-	u32 offset = (obj - page->s_mem);
-	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
-}
-
 #define BOOT_CPUCACHE_ENTRIES	1
 /* internal cache of cache description objs */
 static struct kmem_cache kmem_cache_boot = {