From patchwork Wed Aug 29 11:35:12 2018
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
 Will Deacon,
 Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers,
 Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W. Biederman,
 Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
 Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
 kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-sparse@vger.kernel.org, linux-mm@kvack.org,
 linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
 Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
 Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v6 08/18] khwasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU
Date: Wed, 29 Aug 2018 13:35:12 +0200
Message-Id: <95b5beb7ec13b7e998efe84c9a7a5c1fa49a9fe3.1535462971.git.andreyknvl@google.com>

An object constructor can initialize pointers within the object based on
the address of the object. Since the object address might be tagged, we
need to assign a tag before calling the constructor.

The implemented approach is to assign tags to objects with constructors
when a slab is allocated and to call the constructors once, as usual. The
downside is that such objects always keep the same tag when reallocated,
so we won't catch use-after-frees on them.

Also preassign tags for objects from SLAB_TYPESAFE_BY_RCU caches, since
they can be validly accessed after having been freed.

Signed-off-by: Andrey Konovalov
---
 mm/slab.c | 6 +++++-
 mm/slub.c | 4 ++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 6fdca9ec2ea4..3b4227059f2e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -403,7 +403,11 @@ static inline struct kmem_cache *virt_to_cache(const void *obj)
 static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 				 unsigned int idx)
 {
-	return page->s_mem + cache->size * idx;
+	void *obj;
+
+	obj = page->s_mem + cache->size * idx;
+	obj = khwasan_preset_slab_tag(cache, idx, obj);
+	return obj;
 }
 
 /*
diff --git a/mm/slub.c b/mm/slub.c
index 4206e1b616e7..086d6558a6b6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1531,12 +1531,14 @@ static bool shuffle_freelist(struct kmem_cache *s, struct page *page)
 	/* First entry is used as the base of the freelist */
 	cur = next_freelist_entry(s, page, &pos, start, page_limit,
 				freelist_count);
+	cur = khwasan_preset_slub_tag(s, cur);
 	page->freelist = cur;
 	for (idx = 1; idx < page->objects; idx++) {
 		setup_object(s, page, cur);
 		next = next_freelist_entry(s, page, &pos, start, page_limit,
 					freelist_count);
+		next = khwasan_preset_slub_tag(s, next);
 		set_freepointer(s, cur, next);
 		cur = next;
 	}
@@ -1613,8 +1615,10 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	shuffle = shuffle_freelist(s, page);
 
 	if (!shuffle) {
+		start = khwasan_preset_slub_tag(s, start);
 		for_each_object_idx(p, idx, s, start, page->objects) {
 			setup_object(s, page, p);
+			p = khwasan_preset_slub_tag(s, p);
 			if (likely(idx < page->objects))
 				set_freepointer(s, p, p + s->size);
 			else
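
To illustrate the preassignment described above, here is a minimal
userspace sketch of the idea behind the khwasan_preset_slab_tag() /
khwasan_preset_slub_tag() calls in the diff (their real implementations
live elsewhere in this series). The helper names, the tag derivation,
and the assumption of 64-bit pointers below are all illustrative, not
the series' actual code; the sketch only shows how a deterministic tag,
derived from an object's slot index, can be planted in the top byte of
the object pointer before the constructor runs:

  #include <stdint.h>
  #include <stdio.h>

  #define TAG_SHIFT 56	/* top byte of a 64-bit pointer (the TBI byte on arm64) */

  /*
   * Hypothetical tag derivation: any pure function of the slot index
   * works, since the point is only that a given slot always receives
   * the same tag. 0x00 is skipped here, assuming it means "untagged".
   */
  static uint8_t slot_tag(unsigned int idx)
  {
  	return (uint8_t)(idx % 255) + 1;
  }

  /* Return addr with its top byte replaced by the slot's preset tag. */
  static void *preset_tag(unsigned int idx, void *addr)
  {
  	uintptr_t p = (uintptr_t)addr & ~((uintptr_t)0xff << TAG_SHIFT);

  	return (void *)(p | ((uintptr_t)slot_tag(idx) << TAG_SHIFT));
  }

  int main(void)
  {
  	char slab[4][64];	/* stand-in for the objects of a fresh slab page */
  	unsigned int idx;

  	/*
  	 * Tag each object before its constructor would run, so the
  	 * constructor sees the final address and any self-referential
  	 * pointers it stores stay consistent with later tagged accesses.
  	 */
  	for (idx = 0; idx < 4; idx++)
  		printf("slot %u: %p -> %p\n", idx,
  		       (void *)slab[idx], preset_tag(idx, slab[idx]));

  	return 0;
  }

Because the tag is a pure function of the slot, reallocating the same
slot yields an identical tagged pointer, which is exactly the trade-off
noted in the commit message: use-after-free detection is lost for these
caches. The sketch only prints the tagged values; actually dereferencing
them requires hardware that ignores the top byte, such as arm64 with
Top Byte Ignore.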