From patchwork Tue Nov 3 17:58:36 2020
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11878621
Date: Tue, 3 Nov 2020 18:58:36 +0100
In-Reply-To: <20201103175841.3495947-1-elver@google.com>
Message-Id: <20201103175841.3495947-5-elver@google.com>
References: <20201103175841.3495947-1-elver@google.com>
Subject: [PATCH v7 4/9] mm, kfence: insert KFENCE hooks for SLAB
From: Marco Elver
To: elver@google.com, akpm@linux-foundation.org, glider@google.com
Cc: hpa@zytor.com, paulmck@kernel.org, andreyknvl@google.com,
 aryabinin@virtuozzo.com, luto@kernel.org, bp@alien8.de,
 catalin.marinas@arm.com, cl@linux.com, dave.hansen@linux.intel.com,
 rientjes@google.com, dvyukov@google.com, edumazet@google.com,
 gregkh@linuxfoundation.org, hdanton@sina.com, mingo@redhat.com,
 jannh@google.com, Jonathan.Cameron@huawei.com, corbet@lwn.net,
 iamjoonsoo.kim@lge.com, joern@purestorage.com, keescook@chromium.org,
 mark.rutland@arm.com, penberg@kernel.org, peterz@infradead.org,
 sjpark@amazon.com, tglx@linutronix.de, vbabka@suse.cz, will@kernel.org,
 x86@kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
 linux-mm@kvack.org

From: Alexander Potapenko

Inserts KFENCE hooks into the SLAB allocator. To pass the originally
requested size to KFENCE, add an argument 'orig_size' to slab_alloc*().
The additional argument is required to preserve the requested original
size for kmalloc() allocations, which use size classes (e.g. an
allocation of 272 bytes will return an object of size 512). Therefore,
kmem_cache::size does not represent the kmalloc-caller's requested size,
and we must introduce the argument 'orig_size' to propagate the
originally requested size to KFENCE.
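
(Illustration only, not part of the patch.) A minimal userspace sketch of
the size-class bucketing described above; size_class() is a hypothetical
helper that only approximates kmalloc's power-of-two classes:

/* Why kmem_cache::size alone is not enough for KFENCE bounds checks. */
#include <stdio.h>

/* Hypothetical stand-in for kmalloc's size-class rounding. */
static size_t size_class(size_t req)
{
	size_t class = 8;

	while (class < req)
		class *= 2;
	return class;
}

int main(void)
{
	size_t requested = 272;
	size_t bucketed = size_class(requested);	/* 512 */
	size_t offset = 300;	/* past the 272 requested bytes */

	printf("requested=%zu bucketed=%zu\n", requested, bucketed);
	printf("offset %zu is %s the requested object\n",
	       offset, offset < requested ? "inside" : "outside");
	return 0;
}

With only the bucketed size (512) available, an access at offset 300 into
a 272-byte request looks in-bounds; passing 'orig_size' lets KFENCE report
it as out-of-bounds.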

Without the originally requested size, we would not be able to detect
out-of-bounds accesses for objects placed at the end of a KFENCE object
page if the requested size is not equal to the kmalloc size class it was
bucketed into.

When KFENCE is disabled, there is no additional overhead, since
slab_alloc*() functions are __always_inline.

Reviewed-by: Dmitry Vyukov
Co-developed-by: Marco Elver
Signed-off-by: Marco Elver
Signed-off-by: Alexander Potapenko
---
v7:
* Move kmemleak_free_recursive() before kfence_free() for KFENCE objects.
  kmemleak_free_recursive() should be called before releasing the object,
  to avoid a potential race where the object may immediately be reused
  [reported by Jann Horn].
* Re-add SLAB-specific code setting page->s_mem.

v5:
* New kfence_shutdown_cache(): we need to defer kfence_shutdown_cache()
  to before the cache is actually freed. In case of SLAB_TYPESAFE_BY_RCU,
  the objects may still legally be used until the next RCU grace period.
* Fix objs_per_slab_page for KFENCE objects.
* Revert and use fixed obj_to_index() in __check_heap_object().

v3:
* Rewrite patch description to clarify need for 'orig_size'
  [reported by Christopher Lameter].
---
 include/linux/slab_def.h |  3 +++
 mm/kfence/core.c         |  2 ++
 mm/slab.c                | 38 +++++++++++++++++++++++++++++---------
 mm/slab_common.c         |  5 ++++-
 4 files changed, 38 insertions(+), 10 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 9eb430c163c2..3aa5e1e73ab6 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -2,6 +2,7 @@
 #ifndef _LINUX_SLAB_DEF_H
 #define _LINUX_SLAB_DEF_H
 
+#include <linux/kfence.h>
 #include <linux/reciprocal_div.h>
 
 /*
@@ -114,6 +115,8 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
 static inline int objs_per_slab_page(const struct kmem_cache *cache,
 				     const struct page *page)
 {
+	if (is_kfence_address(page_address(page)))
+		return 1;
 	return cache->num;
 }
 
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 64f33b93223b..721fd6318c91 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -313,6 +313,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	/* Set required struct page fields. */
 	page = virt_to_page(meta->addr);
 	page->slab_cache = cache;
+	if (IS_ENABLED(CONFIG_SLAB))
+		page->s_mem = addr;
 
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
diff --git a/mm/slab.c b/mm/slab.c
index b1113561b98b..a1c2809731c6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -100,6 +100,7 @@
 #include
 #include
 #include
+#include <linux/kfence.h>
 #include
 #include
 #include
@@ -3208,7 +3209,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 }
 
 static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
+slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
 		unsigned long caller)
 {
 	unsigned long save_flags;
@@ -3221,6 +3222,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	if (unlikely(!cachep))
 		return NULL;
 
+	ptr = kfence_alloc(cachep, orig_size, flags);
+	if (unlikely(ptr))
+		goto out_hooks;
+
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
@@ -3253,6 +3258,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
 		memset(ptr, 0, cachep->object_size);
 
+out_hooks:
 	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
 	return ptr;
 }
@@ -3290,7 +3296,7 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 #endif /* CONFIG_NUMA */
 
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
+slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
 {
 	unsigned long save_flags;
 	void *objp;
@@ -3301,6 +3307,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	if (unlikely(!cachep))
 		return NULL;
 
+	objp = kfence_alloc(cachep, orig_size, flags);
+	if (unlikely(objp))
+		goto out;
+
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
 	objp = __do_cache_alloc(cachep, flags);
@@ -3311,6 +3321,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
 		memset(objp, 0, cachep->object_size);
 
+out:
 	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
 	return objp;
 }
@@ -3416,6 +3427,12 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 					 unsigned long caller)
 {
+	if (is_kfence_address(objp)) {
+		kmemleak_free_recursive(objp, cachep->flags);
+		__kfence_free(objp);
+		return;
+	}
+
 	/* Put the object into the quarantine, don't touch it for now. */
 	if (kasan_slab_free(cachep, objp, _RET_IP_))
 		return;
 
@@ -3481,7 +3498,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
  */
 void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
-	void *ret = slab_alloc(cachep, flags, _RET_IP_);
+	void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc(_RET_IP_, ret,
 			       cachep->object_size, cachep->size, flags);
@@ -3514,7 +3531,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
-		void *objp = __do_cache_alloc(s, flags);
+		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
 
 		if (unlikely(!objp))
 			goto error;
@@ -3547,7 +3564,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 {
 	void *ret;
 
-	ret = slab_alloc(cachep, flags, _RET_IP_);
+	ret = slab_alloc(cachep, flags, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(_RET_IP_, ret,
@@ -3573,7 +3590,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
@@ -3591,7 +3608,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
-	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
+	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret,
@@ -3652,7 +3669,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
-	ret = slab_alloc(cachep, flags, caller);
+	ret = slab_alloc(cachep, flags, size, caller);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(caller, ret,
@@ -4151,7 +4168,10 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	BUG_ON(objnr >= cachep->num);
 
 	/* Find offset within object. */
-	offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+	if (is_kfence_address(ptr))
+		offset = ptr - kfence_object_start(ptr);
+	else
+		offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
 
 	/* Allow address range falling entirely within usercopy region. */
 	if (offset >= cachep->useroffset &&
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9ccd5dc13f3..13125773dae2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <linux/kfence.h>
 #include
 #include
 #include
@@ -435,6 +436,7 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 	rcu_barrier();
 
 	list_for_each_entry_safe(s, s2, &to_destroy, list) {
+		kfence_shutdown_cache(s);
 #ifdef SLAB_SUPPORTS_SYSFS
 		sysfs_slab_release(s);
 #else
@@ -460,6 +462,7 @@ static int shutdown_cache(struct kmem_cache *s)
 		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
 		schedule_work(&slab_caches_to_rcu_destroy_work);
 	} else {
+		kfence_shutdown_cache(s);
 #ifdef SLAB_SUPPORTS_SYSFS
 		sysfs_slab_unlink(s);
 		sysfs_slab_release(s);
@@ -1171,7 +1174,7 @@ size_t ksize(const void *objp)
 	if (unlikely(ZERO_OR_NULL_PTR(objp)) || !__kasan_check_read(objp, 1))
 		return 0;
 
-	size = __ksize(objp);
+	size = kfence_ksize(objp) ?: __ksize(objp);
 	/*
 	 * We assume that ksize callers could use whole allocated area,
 	 * so we need to unpoison this area.