From patchwork Thu Aug 9 19:21:00 2018
From: Andrey Konovalov <andreyknvl@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v5 08/18] khwasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU
Date: Thu, 9 Aug 2018 21:21:00 +0200
Message-Id: <625d42d5cb7f20bb54ce7af2c4b87910b1474c74.1533842385.git.andreyknvl@google.com>

An object constructor can initialize pointers within the object based on the object's address. Since the object address might be tagged, we need to assign a tag before calling the constructor.

The implemented approach is to assign tags to objects with constructors when a slab is allocated, and to call constructors once, as usual. The downside is that such objects always get the same tag when they are reallocated, so we won't catch use-after-free bugs on them.

Also preassign tags for objects from SLAB_TYPESAFE_BY_RCU caches, since they can be validly accessed after having been freed.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 mm/slab.c | 6 +++++-
 mm/slub.c | 4 ++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 6fdca9ec2ea4..3b4227059f2e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -403,7 +403,11 @@ static inline struct kmem_cache *virt_to_cache(const void *obj)
 static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 				 unsigned int idx)
 {
-	return page->s_mem + cache->size * idx;
+	void *obj;
+
+	obj = page->s_mem + cache->size * idx;
+	obj = khwasan_preset_slab_tag(cache, idx, obj);
+	return obj;
 }
 
 /*
diff --git a/mm/slub.c b/mm/slub.c
index 8fa21afcd3fb..a891bc49dc38 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1532,12 +1532,14 @@ static bool shuffle_freelist(struct kmem_cache *s, struct page *page)
 	/* First entry is used as the base of the freelist */
 	cur = next_freelist_entry(s, page, &pos, start, page_limit,
 				freelist_count);
+	cur = khwasan_preset_slub_tag(s, cur);
 	page->freelist = cur;
 	for (idx = 1; idx < page->objects; idx++) {
 		setup_object(s, page, cur);
 		next = next_freelist_entry(s, page, &pos, start,
 			page_limit, freelist_count);
+		next = khwasan_preset_slub_tag(s, next);
 		set_freepointer(s, cur, next);
 		cur = next;
 	}
@@ -1614,8 +1616,10 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	shuffle = shuffle_freelist(s, page);
 
 	if (!shuffle) {
+		start = khwasan_preset_slub_tag(s, start);
 		for_each_object_idx(p, idx, s, start, page->objects) {
 			setup_object(s, page, p);
+			p = khwasan_preset_slub_tag(s, p);
 			if (likely(idx < page->objects))
 				set_freepointer(s, p, p + s->size);
 			else