From patchwork Tue Feb 21 13:57:20 2023
X-Patchwork-Submitter: Breno Leitao
X-Patchwork-Id: 13147964
From: Breno Leitao
To: axboe@kernel.dk, asml.silence@gmail.com, io-uring@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, gustavold@meta.com, leit@meta.com
Subject: [PATCH 1/2] io_uring: Move from hlist to io_wq_work_node
Date: Tue, 21 Feb 2023 05:57:20 -0800
Message-Id: <20230221135721.3230763-1-leitao@debian.org>
X-Mailer: git-send-email 2.39.0
X-Mailing-List: io-uring@vger.kernel.org

Having the cache entries linked through an hlist brings no benefit and
costs an unnecessary extra pointer per cache entry. Use the internal
io_wq_work_node singly linked list for the internal alloc caches
(async_msghdr and async_poll) instead.

This is a prerequisite for using KASAN on cache entries: with the singly
linked stack, adding new entries never has to touch the unused (and
poisoned) entries already sitting in the list.
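For reference, the cache then behaves as a plain pushdown stack on a
singly linked list. A minimal standalone sketch of the put/get pair
(illustrative only, not the kernel code; the demo_* names are invented):

  /* Push/pop on a singly linked stack, mirroring the new cache layout. */
  #include <stddef.h>

  struct demo_node {
          struct demo_node *next;         /* plays the role of io_wq_work_node */
  };

  struct demo_cache {
          struct demo_node *head;         /* plays the role of io_alloc_cache.list */
          unsigned int nr_cached;
  };

  /* Counterpart of io_alloc_cache_put()/wq_stack_add_head(): push on top. */
  static void demo_cache_put(struct demo_cache *cache, struct demo_node *node)
  {
          node->next = cache->head;
          cache->head = node;
          cache->nr_cached++;
  }

  /* Counterpart of io_alloc_cache_get(): pop the top entry, NULL when empty. */
  static struct demo_node *demo_cache_get(struct demo_cache *cache)
  {
          struct demo_node *node = cache->head;

          if (!node)
                  return NULL;
          cache->head = node->next;
          return node;
  }

Only the list head and the entry being pushed or popped are ever
dereferenced, which is what later allows the remaining cached entries to
stay poisoned under KASAN.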
Suggested-by: Pavel Begunkov
Signed-off-by: Breno Leitao
---
 include/linux/io_uring_types.h |  2 +-
 io_uring/alloc_cache.h         | 27 +++++++++++++++------------
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 0efe4d784358..efa66b6c32c9 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -188,7 +188,7 @@ struct io_ev_fd {
 };
 
 struct io_alloc_cache {
-	struct hlist_head	list;
+	struct io_wq_work_node	list;
 	unsigned int		nr_cached;
 };
 
diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index 729793ae9712..0d9ff9402a37 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -7,7 +7,7 @@
 #define IO_ALLOC_CACHE_MAX	512
 
 struct io_cache_entry {
-	struct hlist_node	node;
+	struct io_wq_work_node	node;
 };
 
 static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
@@ -15,7 +15,7 @@ static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
 {
 	if (cache->nr_cached < IO_ALLOC_CACHE_MAX) {
 		cache->nr_cached++;
-		hlist_add_head(&entry->node, &cache->list);
+		wq_stack_add_head(&entry->node, &cache->list);
 		return true;
 	}
 	return false;
@@ -23,11 +23,14 @@ static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
 
 static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *cache)
 {
-	if (!hlist_empty(&cache->list)) {
-		struct hlist_node *node = cache->list.first;
-
-		hlist_del(node);
-		return container_of(node, struct io_cache_entry, node);
+	struct io_wq_work_node *node;
+	struct io_cache_entry *entry;
+
+	if (cache->list.next) {
+		node = cache->list.next;
+		entry = container_of(node, struct io_cache_entry, node);
+		cache->list.next = node->next;
+		return entry;
 	}
 
 	return NULL;
@@ -35,19 +38,19 @@ static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *c
 
 static inline void io_alloc_cache_init(struct io_alloc_cache *cache)
 {
-	INIT_HLIST_HEAD(&cache->list);
+	cache->list.next = NULL;
 	cache->nr_cached = 0;
 }
 
 static inline void io_alloc_cache_free(struct io_alloc_cache *cache,
 					void (*free)(struct io_cache_entry *))
 {
-	while (!hlist_empty(&cache->list)) {
-		struct hlist_node *node = cache->list.first;
+	struct io_cache_entry *entry;
 
-		hlist_del(node);
-		free(container_of(node, struct io_cache_entry, node));
+	while ((entry = io_alloc_cache_get(cache))) {
+		free(entry);
 	}
+	cache->nr_cached = 0;
 }
 
 #endif

From patchwork Tue Feb 21 13:57:21 2023
X-Patchwork-Submitter: Breno Leitao
X-Patchwork-Id: 13147965
From: Breno Leitao
To: axboe@kernel.dk, asml.silence@gmail.com, io-uring@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, gustavold@meta.com, leit@meta.com
Subject: [PATCH 2/2] io_uring: Add KASAN support for alloc_caches
Date: Tue, 21 Feb 2023 05:57:21 -0800
Message-Id: <20230221135721.3230763-2-leitao@debian.org>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230221135721.3230763-1-leitao@debian.org>
References: <20230221135721.3230763-1-leitao@debian.org>
X-Mailing-List: io-uring@vger.kernel.org

Add support for KASAN in the alloc caches (apoll and netmsg_cache), so
that anything touching an unused cache entry raises a KASAN
warning/exception. An object is poisoned when it is put into the cache,
and unpoisoned when it is taken back out of the cache, either for reuse
or to be freed.
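To make the poison/unpoison lifecycle concrete, here is a small
userspace analogue (illustrative only, not the kernel code: it uses
AddressSanitizer's manual poisoning macros where the patch uses
kasan_slab_free_mempool() and kasan_unpoison_range(), and every demo_*
name is invented):

  /* Build with: cc -fsanitize=address demo.c */
  #include <sanitizer/asan_interface.h>
  #include <stdlib.h>

  struct demo_entry {
          struct demo_entry *next;        /* freelist link */
          char payload[64];               /* stand-in for the cached object */
  };

  static struct demo_entry *demo_freelist;

  static void demo_put(struct demo_entry *e)
  {
          e->next = demo_freelist;
          demo_freelist = e;
          /* The whole object is off limits while it sits in the cache. */
          ASAN_POISON_MEMORY_REGION(e, sizeof(*e));
  }

  static struct demo_entry *demo_get(void)
  {
          struct demo_entry *e = demo_freelist;

          if (!e)
                  return NULL;
          /* Unpoison before touching any field, as the patch does on get. */
          ASAN_UNPOISON_MEMORY_REGION(e, sizeof(*e));
          demo_freelist = e->next;
          return e;
  }

  int main(void)
  {
          struct demo_entry *e = malloc(sizeof(*e));

          if (!e)
                  return 1;
          demo_put(e);
          /* Writing e->payload[0] here would be reported by ASan. */
          e = demo_get();
          e->payload[0] = 1;      /* fine again after unpoisoning */
          free(e);
          return 0;
  }

When built with -fsanitize=address, any access to an entry parked in the
freelist is reported immediately; without the sanitizer the macros
compile away to no-ops.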
Signed-off-by: Breno Leitao
---
 io_uring/alloc_cache.h | 11 ++++++++---
 io_uring/io_uring.c    | 12 ++++++++++--
 io_uring/net.c         |  2 +-
 io_uring/poll.c        |  2 +-
 4 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index 0d9ff9402a37..0d5cd2c0a0ba 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -16,12 +16,15 @@ static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
 	if (cache->nr_cached < IO_ALLOC_CACHE_MAX) {
 		cache->nr_cached++;
 		wq_stack_add_head(&entry->node, &cache->list);
+		/* KASAN poisons object */
+		kasan_slab_free_mempool(entry);
 		return true;
 	}
 	return false;
 }
 
-static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *cache)
+static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *cache,
+							 size_t size)
 {
 	struct io_wq_work_node *node;
 	struct io_cache_entry *entry;
@@ -29,6 +32,7 @@ static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *c
 	if (cache->list.next) {
 		node = cache->list.next;
 		entry = container_of(node, struct io_cache_entry, node);
+		kasan_unpoison_range(entry, size);
 		cache->list.next = node->next;
 		return entry;
 	}
@@ -43,11 +47,12 @@ static inline void io_alloc_cache_init(struct io_alloc_cache *cache)
 }
 
 static inline void io_alloc_cache_free(struct io_alloc_cache *cache,
-					void (*free)(struct io_cache_entry *))
+					void (*free)(struct io_cache_entry *),
+					size_t size)
 {
 	struct io_cache_entry *entry;
 
-	while ((entry = io_alloc_cache_get(cache))) {
+	while ((entry = io_alloc_cache_get(cache, size))) {
 		free(entry);
 	}
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 80b6204769e8..6a98902b8f62 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -2766,6 +2766,15 @@ static void io_req_caches_free(struct io_ring_ctx *ctx)
 	mutex_unlock(&ctx->uring_lock);
 }
 
+static __cold void io_uring_acache_free(struct io_ring_ctx *ctx)
+{
+
+	io_alloc_cache_free(&ctx->apoll_cache, io_apoll_cache_free,
+			    sizeof(struct async_poll));
+	io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free,
+			    sizeof(struct io_async_msghdr));
+}
+
 static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 {
 	io_sq_thread_finish(ctx);
@@ -2781,8 +2790,7 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 	__io_sqe_files_unregister(ctx);
 	io_cqring_overflow_kill(ctx);
 	io_eventfd_unregister(ctx);
-	io_alloc_cache_free(&ctx->apoll_cache, io_apoll_cache_free);
-	io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
+	io_uring_acache_free(ctx);
 	mutex_unlock(&ctx->uring_lock);
 	io_destroy_buffers(ctx);
 	if (ctx->sq_creds)
diff --git a/io_uring/net.c b/io_uring/net.c
index fbc34a7c2743..8dc67b23b030 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -139,7 +139,7 @@ static struct io_async_msghdr *io_msg_alloc_async(struct io_kiocb *req,
 	struct io_async_msghdr *hdr;
 
 	if (!(issue_flags & IO_URING_F_UNLOCKED)) {
-		entry = io_alloc_cache_get(&ctx->netmsg_cache);
+		entry = io_alloc_cache_get(&ctx->netmsg_cache, sizeof(struct io_async_msghdr));
 		if (entry) {
 			hdr = container_of(entry, struct io_async_msghdr, cache);
 			hdr->free_iov = NULL;
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 8339a92b4510..295d59875f00 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -661,7 +661,7 @@ static struct async_poll *io_req_alloc_apoll(struct io_kiocb *req,
 		apoll = req->apoll;
 		kfree(apoll->double_poll);
 	} else if (!(issue_flags & IO_URING_F_UNLOCKED)) {
-		entry = io_alloc_cache_get(&ctx->apoll_cache);
+		entry = io_alloc_cache_get(&ctx->apoll_cache, sizeof(struct async_poll));
 		if (entry == NULL)
 			goto alloc_apoll;
 		apoll = container_of(entry, struct async_poll, cache);