From patchwork Fri Aug 16 19:12:11 2024
X-Patchwork-Submitter: Kui-Feng Lee
X-Patchwork-Id: 13766782
From: Kui-Feng Lee
To: bpf@vger.kernel.org, ast@kernel.org, martin.lau@linux.dev, andrii@kernel.org
Cc: sinquersw@gmail.com, kuifeng@meta.com, Kui-Feng Lee, linux-mm@kvack.org
Subject: [RFC bpf-next v4 4/6] bpf: pin, translate, and unpin __uptr from syscalls.
Date: Fri, 16 Aug 2024 12:12:11 -0700
Message-Id: <20240816191213.35573-5-thinker.li@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240816191213.35573-1-thinker.li@gmail.com>
References: <20240816191213.35573-1-thinker.li@gmail.com>
MIME-Version: 1.0
When a user program updates a map value, every uptr will be pinned and
translated to an address in the kernel. This process is initiated by
calling bpf_map_update_elem() from user programs.

To access uptrs in BPF programs, they are pinned using
pin_user_pages_fast(), while the conversion to kernel addresses is done
by page_address(). The uptrs can be unpinned using unpin_user_pages().
Currently, the memory block pointed to by a uptr must reside in a single
memory page; crossing multiple pages is not supported.

uptr is supported only by task storage maps and can be set only by user
programs through syscalls.

When the value of an uptr is overwritten or destroyed, the memory pointed
to by the old value must be unpinned. This is ensured by calling
bpf_obj_uptrcpy() and copy_map_uptr_locked() when updating a map value,
and by bpf_obj_free_fields() when destroying one.

Cc: linux-mm@kvack.org
Signed-off-by: Kui-Feng Lee
---
 include/linux/bpf.h            |  30 ++++++
 kernel/bpf/bpf_local_storage.c |  23 ++++-
 kernel/bpf/helpers.c           |  20 ++++
 kernel/bpf/syscall.c           | 172 ++++++++++++++++++++++++++++++++-
 4 files changed, 237 insertions(+), 8 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 954e476b5605..886c818ff555 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -477,6 +477,8 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
 		data_race(*ldst++ = *lsrc++);
 }
 
+void bpf_obj_unpin_uptr(const struct btf_field *field, void *addr);
+
 /* copy everything but bpf_spin_lock, bpf_timer, and kptrs. There could be one of each.
  */
 static inline void bpf_obj_memcpy(struct btf_record *rec,
				   void *dst, void *src, u32 size,
@@ -503,6 +505,34 @@ static inline void bpf_obj_memcpy(struct btf_record *rec,
 	memcpy(dst + curr_off, src + curr_off, size - curr_off);
 }
 
+static inline void bpf_obj_uptrcpy(struct btf_record *rec,
+				   void *dst, void *src)
+{
+	int i;
+
+	if (IS_ERR_OR_NULL(rec))
+		return;
+
+	for (i = 0; i < rec->cnt; i++) {
+		u32 next_off = rec->fields[i].offset;
+		void *addr;
+
+		if (rec->fields[i].type == BPF_UPTR) {
+			/* Unpin the old address.
+			 *
+			 * Alignments are guaranteed by btf_find_field_one().
+			 */
+			addr = *(void **)(dst + next_off);
+			if (addr)
+				bpf_obj_unpin_uptr(&rec->fields[i], addr);
+
+			*(void **)(dst + next_off) = *(void **)(src + next_off);
+		}
+	}
+}
+
+void copy_map_uptr_locked(struct bpf_map *map, void *dst, void *src, bool lock_src);
+
 static inline void copy_map_value(struct bpf_map *map, void *dst, void *src)
 {
 	bpf_obj_memcpy(map->record, dst, src, map->value_size, false);
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index c938dea5ddbf..2fafad53b9d9 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -99,8 +99,11 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner,
 	}
 
 	if (selem) {
-		if (value)
+		if (value) {
 			copy_map_value(&smap->map, SDATA(selem)->data, value);
+			if (smap->map.map_type == BPF_MAP_TYPE_TASK_STORAGE)
+				bpf_obj_uptrcpy(smap->map.record, SDATA(selem)->data, value);
+		}
 		/* No need to call check_and_init_map_value as memory is zero init */
 		return selem;
 	}
@@ -575,8 +578,13 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 		if (err)
 			return ERR_PTR(err);
 		if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
-			copy_map_value_locked(&smap->map, old_sdata->data,
-					      value, false);
+			if (smap->map.map_type == BPF_MAP_TYPE_TASK_STORAGE &&
+			    btf_record_has_field(smap->map.record, BPF_UPTR))
+				copy_map_uptr_locked(&smap->map, old_sdata->data,
+						     value, false);
+			else
+				copy_map_value_locked(&smap->map, old_sdata->data,
+						      value, false);
 			return old_sdata;
 		}
 	}
@@ -607,8 +615,13 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 		goto unlock;
 
 	if (old_sdata && (map_flags & BPF_F_LOCK)) {
-		copy_map_value_locked(&smap->map, old_sdata->data, value,
-				      false);
+		if (smap->map.map_type == BPF_MAP_TYPE_TASK_STORAGE &&
+		    btf_record_has_field(smap->map.record, BPF_UPTR))
+			copy_map_uptr_locked(&smap->map, old_sdata->data,
+					     value, false);
+		else
+			copy_map_value_locked(&smap->map, old_sdata->data,
+					      value, false);
 		selem = SELEM(old_sdata);
 		goto unlock;
 	}
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index d02ae323996b..d588b52605b9 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -388,6 +388,26 @@ void copy_map_value_locked(struct bpf_map *map, void *dst, void *src,
 	preempt_enable();
 }
 
+/* Copy map value and uptrs from src to dst, with lock_src indicating
+ * whether src or dst is locked.
+ */
+void copy_map_uptr_locked(struct bpf_map *map, void *dst, void *src,
+			  bool lock_src)
+{
+	struct bpf_spin_lock *lock;
+
+	if (lock_src)
+		lock = src + map->record->spin_lock_off;
+	else
+		lock = dst + map->record->spin_lock_off;
+	preempt_disable();
+	__bpf_spin_lock_irqsave(lock);
+	copy_map_value(map, dst, src);
+	bpf_obj_uptrcpy(map->record, dst, src);
+	__bpf_spin_unlock_irqrestore(lock);
+	preempt_enable();
+}
+
 BPF_CALL_0(bpf_jiffies64)
 {
 	return get_jiffies_64();
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index fed4a2145f81..1854aeb13ff7 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -155,8 +155,140 @@ static void maybe_wait_bpf_programs(struct bpf_map *map)
 		synchronize_rcu();
 }
 
-static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
-				void *key, void *value, __u64 flags)
+void bpf_obj_unpin_uptr(const struct btf_field *field, void *addr)
+{
+	struct page *pages[1];
+	u32 size, type_id;
+	int npages;
+	void *ptr;
+
+	type_id = field->kptr.btf_id;
+	btf_type_id_size(field->kptr.btf, &type_id, &size);
+	if (size == 0)
+		return;
+
+	ptr = (void *)((intptr_t)addr & PAGE_MASK);
+
+	npages = (((intptr_t)addr + size + ~PAGE_MASK) - (intptr_t)ptr) >> PAGE_SHIFT;
+	if (WARN_ON_ONCE(npages > 1))
+		return;
+
+	pages[0] = virt_to_page(ptr);
+	unpin_user_pages(pages, 1);
+}
+
+/* Unpin uptr fields in the record up to cnt */
+static void bpf_obj_unpin_uptrs_cnt(struct btf_record *rec, int cnt, void *src)
+{
+	u32 next_off;
+	void **kaddr_ptr;
+	int i;
+
+	for (i = 0; i < cnt; i++) {
+		if (rec->fields[i].type != BPF_UPTR)
+			continue;
+
+		next_off = rec->fields[i].offset;
+		kaddr_ptr = src + next_off;
+		if (*kaddr_ptr) {
+			bpf_obj_unpin_uptr(&rec->fields[i], *kaddr_ptr);
+			*kaddr_ptr = NULL;
+		}
+	}
+}
+
+/* Find all BPF_UPTR fields in the record, pin the user memory, map it
+ * to kernel space, and update the addresses in the source memory.
+ *
+ * The map value passed from userspace may contain user kptrs pointing to
+ * user memory. This function pins the user memory and maps it to kernel
+ * memory so that BPF programs can access it.
+ */
+static int bpf_obj_trans_pin_uptrs(struct btf_record *rec, void *src, u32 size)
+{
+	u32 type_id, tsz, npages, next_off;
+	void *uaddr, *kaddr, **uaddr_ptr;
+	const struct btf_type *t;
+	struct page *pages[1];
+	int i, err;
+
+	if (IS_ERR_OR_NULL(rec))
+		return 0;
+
+	if (!btf_record_has_field(rec, BPF_UPTR))
+		return 0;
+
+	for (i = 0; i < rec->cnt; i++) {
+		if (rec->fields[i].type != BPF_UPTR)
+			continue;
+
+		next_off = rec->fields[i].offset;
+		if (next_off + sizeof(void *) > size) {
+			err = -EFAULT;
+			goto rollback;
+		}
+		uaddr_ptr = src + next_off;
+		uaddr = *uaddr_ptr;
+		if (!uaddr)
+			continue;
+
+		/* Make sure the user memory takes up at most one page */
+		type_id = rec->fields[i].kptr.btf_id;
+		t = btf_type_id_size(rec->fields[i].kptr.btf, &type_id, &tsz);
+		if (!t) {
+			err = -EFAULT;
+			goto rollback;
+		}
+		if (tsz == 0) {
+			*uaddr_ptr = NULL;
+			continue;
+		}
+		npages = (((intptr_t)uaddr + tsz + ~PAGE_MASK) -
+			  ((intptr_t)uaddr & PAGE_MASK)) >> PAGE_SHIFT;
+		if (npages > 1) {
+			/* Allow only one page */
+			err = -EFAULT;
+			goto rollback;
+		}
+
+		/* Pin the user memory */
+		err = pin_user_pages_fast((intptr_t)uaddr, 1, FOLL_LONGTERM | FOLL_WRITE, pages);
+		if (err < 0)
+			goto rollback;
+
+		/* Map to kernel space */
+		kaddr = page_address(pages[0]);
+		if (unlikely(!kaddr)) {
+			WARN_ON_ONCE(1);
+			unpin_user_pages(pages, 1);
+			err = -EFAULT;
+			goto rollback;
+		}
+		*uaddr_ptr = kaddr + ((intptr_t)uaddr & ~PAGE_MASK);
+	}
+
+	return 0;
+
+rollback:
+	/* Unpin the user memory of earlier fields */
+	bpf_obj_unpin_uptrs_cnt(rec, i, src);
+
+	return err;
+}
+
+static void bpf_obj_unpin_uptrs(struct btf_record *rec, void *src)
+{
+	if (IS_ERR_OR_NULL(rec))
+		return;
+
+	if (!btf_record_has_field(rec, BPF_UPTR))
+		return;
+
+	bpf_obj_unpin_uptrs_cnt(rec, rec->cnt, src);
+}
+
+static int bpf_map_update_value_inner(struct bpf_map *map, struct file *map_file,
+				      void *key, void *value, __u64 flags)
 {
 	int err;
 
@@ -208,6 +340,29 @@ static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
 	return err;
 }
 
+static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
+				void *key, void *value, __u64 flags)
+{
+	int err;
+
+	if (map->map_type == BPF_MAP_TYPE_TASK_STORAGE) {
+		/* Pinning user memory can lead to a context switch, so it
+		 * must be done before any potential RCU lock is taken.
+		 */
+		err = bpf_obj_trans_pin_uptrs(map->record, value,
+					      bpf_map_value_size(map));
+		if (err)
+			return err;
+	}
+
+	err = bpf_map_update_value_inner(map, map_file, key, value, flags);
+
+	if (err && map->map_type == BPF_MAP_TYPE_TASK_STORAGE)
+		bpf_obj_unpin_uptrs(map->record, value);
+
+	return err;
+}
+
 static int bpf_map_copy_value(struct bpf_map *map, void *key, void *value,
 			      __u64 flags)
 {
@@ -714,6 +869,11 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
 				field->kptr.dtor(xchgd_field);
 			}
 			break;
+		case BPF_UPTR:
+			if (*(void **)field_ptr)
+				bpf_obj_unpin_uptr(field, *(void **)field_ptr);
+			*(void **)field_ptr = NULL;
+			break;
 		case BPF_LIST_HEAD:
 			if (WARN_ON_ONCE(rec->spin_lock_off < 0))
 				continue;
@@ -1099,7 +1259,7 @@ static int map_check_btf(struct bpf_map *map, struct bpf_token *token,
 	map->record = btf_parse_fields(btf, value_type,
 				       BPF_SPIN_LOCK | BPF_TIMER | BPF_KPTR | BPF_LIST_HEAD |
-				       BPF_RB_ROOT | BPF_REFCOUNT | BPF_WORKQUEUE,
+				       BPF_RB_ROOT | BPF_REFCOUNT | BPF_WORKQUEUE | BPF_UPTR,
 				       map->value_size);
 	if (!IS_ERR_OR_NULL(map->record)) {
 		int i;
@@ -1155,6 +1315,12 @@ static int map_check_btf(struct bpf_map *map, struct bpf_token *token,
 				goto free_map_tab;
 			}
 			break;
+		case BPF_UPTR:
+			if (map->map_type != BPF_MAP_TYPE_TASK_STORAGE) {
+				ret = -EOPNOTSUPP;
+				goto free_map_tab;
+			}
+			break;
 		case BPF_LIST_HEAD:
 		case BPF_RB_ROOT:
 			if (map->map_type != BPF_MAP_TYPE_HASH &&