From patchwork Tue Aug 1 22:07:11 2023
X-Patchwork-Submitter: Andrew Kanner
X-Patchwork-Id: 13337309
X-Patchwork-Delegate: kuba@kernel.org
From: Andrew Kanner
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, jasowang@redhat.com, netdev@vger.kernel.org,
    brouer@redhat.com, dsahern@gmail.com, jbrouer@redhat.com,
    john.fastabend@gmail.com, linux-kernel@vger.kernel.org
Cc: linux-kernel-mentees@lists.linuxfoundation.org,
    syzbot+f817490f5bd20541b90a@syzkaller.appspotmail.com,
    Andrew Kanner
Subject: [PATCH v4 1/2] drivers: net: prevent tun_build_skb() from
 exceeding the packet size limit
Date: Wed, 2 Aug 2023 01:07:11 +0300
Message-Id: <20230801220710.464-1-andrew.kanner@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Using the syzkaller repro with a reduced packet size it was discovered that
XDP_PACKET_HEADROOM is not checked in tun_can_build_skb(), although the
pad may be incremented by it in tun_build_skb(). This may end up
exceeding the PAGE_SIZE limit in tun_build_skb().

Fixes: 7df13219d757 ("tun: reserve extra headroom only when XDP is set")
Link: https://syzkaller.appspot.com/bug?extid=f817490f5bd20541b90a
Signed-off-by: Andrew Kanner
---
Notes:
    v3 -> v4:
    * fall back to v1, fixing only the missing XDP_PACKET_HEADROOM in pad
      and removing the bpf_xdp_adjust_tail() check for frame_sz
    * added an RCU read lock, noted by Jason Wang in v1
    * decided to leave the packet length check in tun_can_build_skb()
      instead of moving it to tun_build_skb(), as suggested by Jason Wang.
      Otherwise extra packets would be dropped without falling back to
      tun_alloc_skb(). And in the discussion of v3 Jesper Dangaard Brouer
      noted that XDP is fine with higher-order pages if the allocation is
      contiguous physical memory, so falling back to
      tun_alloc_skb() -> do_xdp_generic() should be OK.

    v2 -> v3:
    * attached the forgotten changelog

    v1 -> v2:
    * merged 2 patches into 1, fixing both issues: the WARN_ON_ONCE hit
      by the syzkaller repro and the missing XDP_PACKET_HEADROOM in pad
    * changed the title and the description of the execution path,
      suggested by Jason Wang
    * moved the limit check from tun_can_build_skb() to tun_build_skb()
      to remove duplication and a locking issue, and also dropped the
      packet in case of a failed check - noted by Jason Wang

 drivers/net/tun.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index d75456adc62a..a1d04bc9485f 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1582,6 +1582,9 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
 static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
 			      int len, int noblock, bool zerocopy)
 {
+	struct bpf_prog *xdp_prog;
+	int pad = TUN_RX_PAD;
+
 	if ((tun->flags & TUN_TYPE_MASK) != IFF_TAP)
 		return false;
 
@@ -1594,7 +1597,13 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
 	if (zerocopy)
 		return false;
 
-	if (SKB_DATA_ALIGN(len + TUN_RX_PAD) +
+	rcu_read_lock();
+	xdp_prog = rcu_dereference(tun->xdp_prog);
+	if (xdp_prog)
+		pad += XDP_PACKET_HEADROOM;
+	rcu_read_unlock();
+
+	if (SKB_DATA_ALIGN(len + pad) +
 	    SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) > PAGE_SIZE)
 		return false;

From patchwork Tue Aug 1 22:07:13 2023
X-Patchwork-Submitter: Andrew Kanner
X-Patchwork-Id: 13337310
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrew Kanner
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, jasowang@redhat.com, netdev@vger.kernel.org,
    brouer@redhat.com, dsahern@gmail.com, jbrouer@redhat.com,
    john.fastabend@gmail.com, linux-kernel@vger.kernel.org
Cc: linux-kernel-mentees@lists.linuxfoundation.org,
    syzbot+f817490f5bd20541b90a@syzkaller.appspotmail.com,
    Andrew Kanner
Subject: [PATCH v4 2/2] net: core: remove unnecessary frame_sz check in
 bpf_xdp_adjust_tail()
Date: Wed, 2 Aug 2023 01:07:13 +0300
Message-Id: <20230801220710.464-2-andrew.kanner@gmail.com>
In-Reply-To: <20230801220710.464-1-andrew.kanner@gmail.com>
References: <20230801220710.464-1-andrew.kanner@gmail.com>
X-Mailing-List: netdev@vger.kernel.org
Syzkaller reported the following issue:
=======================================
Too BIG xdp->frame_sz = 131072
WARNING: CPU: 0 PID: 5020 at net/core/filter.c:4121
 ____bpf_xdp_adjust_tail net/core/filter.c:4121 [inline]
WARNING: CPU: 0 PID: 5020 at net/core/filter.c:4121
 bpf_xdp_adjust_tail+0x466/0xa10 net/core/filter.c:4103
...
Call Trace:
 bpf_prog_4add87e5301a4105+0x1a/0x1c
 __bpf_prog_run include/linux/filter.h:600 [inline]
 bpf_prog_run_xdp include/linux/filter.h:775 [inline]
 bpf_prog_run_generic_xdp+0x57e/0x11e0 net/core/dev.c:4721
 netif_receive_generic_xdp net/core/dev.c:4807 [inline]
 do_xdp_generic+0x35c/0x770 net/core/dev.c:4866
 tun_get_user+0x2340/0x3ca0 drivers/net/tun.c:1919
 tun_chr_write_iter+0xe8/0x210 drivers/net/tun.c:2043
 call_write_iter include/linux/fs.h:1871 [inline]
 new_sync_write fs/read_write.c:491 [inline]
 vfs_write+0x650/0xe40 fs/read_write.c:584
 ksys_write+0x12f/0x250 fs/read_write.c:637
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

The xdp->frame_sz > PAGE_SIZE check was introduced in commit c8741e2bfe87
("xdp: Allow bpf_xdp_adjust_tail() to grow packet size"). But Jesper
Dangaard Brouer noted that after the introduction of xdp_init_buff(),
which all XDP drivers use, it is safe to remove this check. The original
intent was to catch cases where XDP drivers had not been updated to use
xdp.frame_sz, but that is no longer a concern (since xdp_init_buff).
Running the initial syzkaller repro it was discovered that a contiguous
physical memory allocation is used for both XDP paths in tun_get_user(),
i.e. tun_build_skb() and tun_alloc_skb(). It was also stated by Jesper
Dangaard Brouer that XDP can work on higher-order pages, as long as this
is contiguous physical memory (e.g. a page).

Reported-and-tested-by: syzbot+f817490f5bd20541b90a@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/000000000000774b9205f1d8a80d@google.com/T/
Link: https://syzkaller.appspot.com/bug?extid=f817490f5bd20541b90a
Link: https://lore.kernel.org/all/20230725155403.796-1-andrew.kanner@gmail.com/T/
Fixes: 43b5169d8355 ("net, xdp: Introduce xdp_init_buff utility routine")
Signed-off-by: Andrew Kanner
Acked-by: Jesper Dangaard Brouer
Acked-by: Jason Wang
---
 net/core/filter.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 06ba0e56e369..28a59596987a 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4116,12 +4116,6 @@ BPF_CALL_2(bpf_xdp_adjust_tail, struct xdp_buff *, xdp, int, offset)
 	if (unlikely(data_end > data_hard_end))
 		return -EINVAL;
 
-	/* ALL drivers MUST init xdp->frame_sz, chicken check below */
-	if (unlikely(xdp->frame_sz > PAGE_SIZE)) {
-		WARN_ONCE(1, "Too BIG xdp->frame_sz = %d\n", xdp->frame_sz);
-		return -EINVAL;
-	}
-
 	if (unlikely(data_end < xdp->data + ETH_HLEN))
 		return -EINVAL;