From patchwork Thu Mar 24 23:41:22 2022
X-Patchwork-Submitter: Hao Luo
X-Patchwork-Id: 12791085
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 24 Mar 2022 16:41:22 -0700
In-Reply-To: <20220324234123.1608337-1-haoluo@google.com>
Message-Id: <20220324234123.1608337-2-haoluo@google.com>
References: <20220324234123.1608337-1-haoluo@google.com>
Subject: [PATCH RFC bpf-next 1/2] bpf: Mmapable local storage.
From: Hao Luo
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: yhs@fb.com, KP Singh, Martin KaFai Lau, Song Liu, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, Hao Luo
X-Patchwork-State: RFC

Some map types support the mmap operation, which allows userspace to
communicate with BPF programs directly. Currently only arraymap and
ringbuf implement mmap. However, in some use cases, when multiple
instances of a program can run concurrently, globally mmapable memory
can cause races. In that case, userspace has to provide the
synchronization needed to coordinate use of the shared mapped data,
which can become a bottleneck. It would be great to have a mmapable
local storage in that case.
This patch adds that. Mmap isn't a BPF syscall, so unprivileged users
can also use it to interact with maps.

Currently the only way of allocating a mmapable map area is vmalloc(),
and it is only used at map allocation time. Vmalloc() may sleep, so it
isn't suitable for maps that may allocate memory in an atomic context,
such as local storage. Local storage uses kmalloc() with GFP_ATOMIC,
which doesn't sleep. This patch therefore uses kmalloc() with
GFP_ATOMIC for the mmapable map area as well.

Mmapable memory must be page aligned, so we deliberately allocate more
memory than necessary to obtain an address at which sdata->data is
aligned on a page boundary. The size calculation for the mmapable
allocation and the actual allocation/deallocation are packaged in three
functions:

 - bpf_map_mmapable_alloc_size()
 - bpf_map_mmapable_kzalloc()
 - bpf_map_mmapable_kfree()

BPF local storage uses them to provide a generic mmap API:

 - bpf_local_storage_mmap()

and task local storage adds the mmap callback:

 - task_storage_map_mmap()

When an application calls mmap on a task local storage map, it gets its
own local storage.

Overall, mmapable local storage trades memory for flexibility and
efficiency. It introduces memory fragmentation but can make programs
stateless, which is useful in some cases.
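The over-allocate-and-align scheme described above can be sketched as a small userspace mock (illustrative only: the `mock_` names, the fixed 4096-byte page size, and calloc() stand in for the kernel's bpf_map_kzalloc() and PAGE_ALIGN(); this is not the kernel API itself):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MOCK_PAGE_SIZE 4096UL
#define MOCK_PAGE_ALIGN(x) (((x) + MOCK_PAGE_SIZE - 1) & ~(MOCK_PAGE_SIZE - 1))

/* Minimal allocation size that guarantees an address A inside the buffer
 * with A + offset page aligned, A + offset + PAGE_ALIGN(size) still in
 * bounds, and room to stash the raw pointer just below A.
 */
static size_t mock_mmapable_alloc_size(size_t offset, size_t size)
{
	return offset + sizeof(void *) + MOCK_PAGE_ALIGN(size) + MOCK_PAGE_SIZE;
}

static void *mock_mmapable_zalloc(size_t offset, size_t size)
{
	void *raw;
	uintptr_t ret;

	/* offset must be pointer aligned so the stash slot is addressable */
	if (offset % sizeof(void *))
		return NULL;

	raw = calloc(1, mock_mmapable_alloc_size(offset, size));
	if (!raw)
		return NULL;

	/* Skip past offset plus one pointer, round up to a page boundary,
	 * then step back by offset: ret + offset is now page aligned.
	 */
	ret = MOCK_PAGE_ALIGN((uintptr_t)raw + offset + sizeof(void *)) - offset;

	/* Save the raw allocation address just below the returned address
	 * so the free path can recover it.
	 */
	((void **)ret)[-1] = raw;
	return (void *)ret;
}

static void mock_mmapable_free(void *ptr)
{
	if (ptr)
		free(((void **)ptr)[-1]);
}
```

With offset set to offsetof(struct bpf_local_storage_elem, sdata) + offsetof(struct bpf_local_storage_data, data), the element's value area lands exactly on a page boundary, which is the precondition bpf_local_storage_mmap() checks before handing the pages to remap_pfn_range().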
Cc: Song Liu
Signed-off-by: Hao Luo
---
 include/linux/bpf.h               |  4 ++
 include/linux/bpf_local_storage.h |  5 ++-
 kernel/bpf/bpf_local_storage.c    | 73 ++++++++++++++++++++++++++++---
 kernel/bpf/bpf_task_storage.c     | 40 +++++++++++++++++
 kernel/bpf/syscall.c              | 67 ++++++++++++++++++++++++++++
 5 files changed, 181 insertions(+), 8 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index bdb5298735ce..d76b8d6f91d2 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1549,6 +1549,10 @@ bpf_map_alloc_percpu(const struct bpf_map *map, size_t size, size_t align,
 	return __alloc_percpu_gfp(size, align, flags);
 }
 #endif
 
+size_t bpf_map_mmapable_alloc_size(size_t offset, size_t size);
+void *bpf_map_mmapable_kzalloc(const struct bpf_map *map, size_t offset,
+			       size_t size, gfp_t flags);
+void bpf_map_mmapable_kfree(void *ptr);
 
 extern int sysctl_unprivileged_bpf_disabled;

diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
index 493e63258497..36dc1102ec48 100644
--- a/include/linux/bpf_local_storage.h
+++ b/include/linux/bpf_local_storage.h
@@ -74,7 +74,8 @@ struct bpf_local_storage_elem {
 	struct hlist_node snode;	/* Linked to bpf_local_storage */
 	struct bpf_local_storage __rcu *local_storage;
 	struct rcu_head rcu;
-	/* 8 bytes hole */
+	u32 map_flags;
+	/* 4 bytes hole */
 	/* The data is stored in another cacheline to minimize
 	 * the number of cachelines access during a cache hit.
	 */
@@ -168,4 +169,6 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 
 void bpf_local_storage_free_rcu(struct rcu_head *rcu);
 
+int bpf_local_storage_mmap(struct bpf_local_storage_map *smap, void *data,
+			   struct vm_area_struct *vma);
 #endif /* _BPF_LOCAL_STORAGE_H */

diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index 01aa2b51ec4d..4dd1d7af4451 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -15,7 +15,7 @@
 #include
 #include
 
-#define BPF_LOCAL_STORAGE_CREATE_FLAG_MASK (BPF_F_NO_PREALLOC | BPF_F_CLONE)
+#define BPF_LOCAL_STORAGE_CREATE_FLAG_MASK (BPF_F_NO_PREALLOC | BPF_F_CLONE | BPF_F_MMAPABLE)
 
 static struct bpf_local_storage_map_bucket *
 select_bucket(struct bpf_local_storage_map *smap,
@@ -66,13 +66,26 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner,
		void *value, bool charge_mem, gfp_t gfp_flags)
 {
 	struct bpf_local_storage_elem *selem;
+	struct bpf_map *map = &smap->map;
 
 	if (charge_mem && mem_charge(smap, owner, smap->elem_size))
 		return NULL;
 
-	selem = bpf_map_kzalloc(&smap->map, smap->elem_size,
-				gfp_flags | __GFP_NOWARN);
+	if (map->map_flags & BPF_F_MMAPABLE) {
+		size_t offset;
+
+		offset = offsetof(struct bpf_local_storage_elem, sdata) +
+			 offsetof(struct bpf_local_storage_data, data);
+		selem = bpf_map_mmapable_kzalloc(&smap->map, offset,
+						 map->value_size,
+						 gfp_flags | __GFP_NOWARN);
+	} else {
+		selem = bpf_map_kzalloc(&smap->map, smap->elem_size,
+					gfp_flags | __GFP_NOWARN);
+	}
+
 	if (selem) {
+		selem->map_flags = map->map_flags;
 		if (value)
 			memcpy(SDATA(selem)->data, value, smap->map.value_size);
 		return selem;
@@ -92,12 +105,24 @@ void bpf_local_storage_free_rcu(struct rcu_head *rcu)
 	kfree_rcu(local_storage, rcu);
 }
 
+static void selem_mmapable_free_rcu(struct rcu_head *rcu)
+{
+	struct bpf_local_storage_elem *selem;
+
+	selem = container_of(rcu, struct bpf_local_storage_elem, rcu);
+	bpf_map_mmapable_kfree(selem);
+}
+
 static void
 bpf_selem_free_rcu(struct rcu_head *rcu)
 {
 	struct bpf_local_storage_elem *selem;
 
 	selem = container_of(rcu, struct bpf_local_storage_elem, rcu);
-	kfree_rcu(selem, rcu);
+	if (selem->map_flags & BPF_F_MMAPABLE) {
+		call_rcu(rcu, selem_mmapable_free_rcu);
+	} else {
+		kfree_rcu(selem, rcu);
+	}
 }
 
 /* local_storage->lock must be held and selem->local_storage == local_storage.
@@ -383,7 +408,10 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
 		err = bpf_local_storage_alloc(owner, smap, selem, gfp_flags);
 		if (err) {
-			kfree(selem);
+			if (map_flags & BPF_F_MMAPABLE)
+				bpf_map_mmapable_kfree(selem);
+			else
+				kfree(selem);
 			mem_uncharge(smap, owner, smap->elem_size);
 			return ERR_PTR(err);
 		}
@@ -623,8 +651,17 @@ struct bpf_local_storage_map *bpf_local_storage_map_alloc(union bpf_attr *attr)
 		raw_spin_lock_init(&smap->buckets[i].lock);
 	}
 
-	smap->elem_size =
-		sizeof(struct bpf_local_storage_elem) + attr->value_size;
+	if (attr->map_flags & BPF_F_MMAPABLE) {
+		size_t offset;
+
+		offset = offsetof(struct bpf_local_storage_elem, sdata) +
+			 offsetof(struct bpf_local_storage_data, data);
+		smap->elem_size = bpf_map_mmapable_alloc_size(offset,
+							      attr->value_size);
+	} else {
+		smap->elem_size =
+			sizeof(struct bpf_local_storage_elem) + attr->value_size;
+	}
 
 	return smap;
 }
@@ -645,3 +682,25 @@ int bpf_local_storage_map_check_btf(const struct bpf_map *map,
 
 	return 0;
 }
+
+int bpf_local_storage_mmap(struct bpf_local_storage_map *smap, void *data,
+			   struct vm_area_struct *vma)
+{
+	struct bpf_map *map;
+	unsigned long pfn;
+	unsigned long count;
+	unsigned long size;
+
+	map = &smap->map;
+	size = PAGE_ALIGN(map->value_size);
+	if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > size)
+		return -EINVAL;
+
+	if (!IS_ALIGNED((unsigned long)data, PAGE_SIZE))
+		return -EINVAL;
+
+	pfn = virt_to_phys(data) >> PAGE_SHIFT;
+	count = size >> PAGE_SHIFT;
+	return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
+			       count << PAGE_SHIFT, vma->vm_page_prot);
+}

diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
index 6638a0ecc3d2..9552b84f96db 100644
--- a/kernel/bpf/bpf_task_storage.c
+++ b/kernel/bpf/bpf_task_storage.c
@@ -307,6 +307,45 @@ static void task_storage_map_free(struct bpf_map *map)
 	bpf_local_storage_map_free(smap, &bpf_task_storage_busy);
 }
 
+static int task_storage_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+{
+	struct bpf_local_storage_map *smap;
+	struct bpf_local_storage_data *sdata;
+	int err;
+
+	if (!(map->map_flags & BPF_F_MMAPABLE))
+		return -EINVAL;
+
+	rcu_read_lock();
+	if (!bpf_task_storage_trylock()) {
+		rcu_read_unlock();
+		return -EBUSY;
+	}
+
+	smap = (struct bpf_local_storage_map *)map;
+	sdata = task_storage_lookup(current, map, true);
+	if (sdata) {
+		err = bpf_local_storage_mmap(smap, sdata->data, vma);
+		goto unlock;
+	}
+
+	/* only allocate new storage, when the task is refcounted */
+	if (refcount_read(&current->usage)) {
+		sdata = bpf_local_storage_update(current, smap, NULL,
+						 BPF_NOEXIST, GFP_ATOMIC);
+		if (IS_ERR(sdata)) {
+			err = PTR_ERR(sdata);
+			goto unlock;
+		}
+	}
+
+	err = bpf_local_storage_mmap(smap, sdata->data, vma);
+unlock:
+	bpf_task_storage_unlock();
+	rcu_read_unlock();
+	return err;
+}
+
 static int task_storage_map_btf_id;
 const struct bpf_map_ops task_storage_map_ops = {
 	.map_meta_equal = bpf_map_meta_equal,
@@ -321,6 +360,7 @@ const struct bpf_map_ops task_storage_map_ops = {
 	.map_btf_name = "bpf_local_storage_map",
 	.map_btf_id = &task_storage_map_btf_id,
 	.map_owner_storage_ptr = task_storage_ptr,
+	.map_mmap = &task_storage_map_mmap,
 };
 
 const struct bpf_func_proto bpf_task_storage_get_proto = {

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index cdaa1152436a..facd6918698d 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -473,6 +473,73 @@ static void bpf_map_release_memcg(struct bpf_map *map)
 }
 #endif
 
+/* Given an address 'addr', return an address A such that A + offset is
+ * page aligned. The distance between 'addr' and that page boundary
+ * (i.e. A + offset) must be >= offset + sizeof(ptr).
+ */
+static unsigned long mmapable_alloc_ret_addr(void *addr, size_t offset)
+{
+	const size_t ptr_size = sizeof(void *);
+
+	return PAGE_ALIGN((unsigned long)addr + offset + ptr_size) - offset;
+}
+
+/* Given an offset and size, return the minimal allocation size, such that it's
+ * guaranteed to contain an address where address + offset is page aligned and
+ * [address + offset, address + offset + size] is covered in the allocated area
+ */
+size_t bpf_map_mmapable_alloc_size(size_t offset, size_t size)
+{
+	const size_t ptr_size = sizeof(void *);
+
+	return offset + ptr_size + PAGE_ALIGN(size) + PAGE_SIZE;
+}
+
+/* Allocate a chunk of memory and return an address in the allocated area, such
+ * that address + offset is page aligned and address + offset + PAGE_ALIGN(size)
+ * is within the allocated area.
+ */
+void *bpf_map_mmapable_kzalloc(const struct bpf_map *map, size_t offset,
+			       size_t size, gfp_t flags)
+{
+	const size_t ptr_size = sizeof(void *);
+	size_t alloc_size;
+	void *alloc_ptr;
+	unsigned long addr, ret_addr;
+
+	if (!IS_ALIGNED(offset, ptr_size)) {
+		pr_warn("bpf_map_mmapable_kzalloc: offset (%lx) is not aligned with ptr_size (%lu)\n",
+			offset, ptr_size);
+		return NULL;
+	}
+
+	alloc_size = bpf_map_mmapable_alloc_size(offset, size);
+	alloc_ptr = bpf_map_kzalloc(map, alloc_size, flags);
+	if (!alloc_ptr)
+		return NULL;
+
+	ret_addr = mmapable_alloc_ret_addr(alloc_ptr, offset);
+
+	/* Save the raw allocation address just below the address to be
+	 * returned.
+	 */
+	addr = ret_addr - ptr_size;
+	*(void **)addr = alloc_ptr;
+
+	return (void *)ret_addr;
+}
+
+/* Free the memory allocated from bpf_map_mmapable_kzalloc() */
+void bpf_map_mmapable_kfree(void *ptr)
+{
+	const size_t ptr_size = sizeof(void *);
+	unsigned long addr;
+
+	if (!IS_ALIGNED((unsigned long)ptr, ptr_size))
+		return;
+
+	addr = (unsigned long)ptr - ptr_size;
+	kfree(*(void **)addr);
+}
+
 /* called from workqueue */
 static void bpf_map_free_deferred(struct work_struct *work)
 {

From patchwork Thu Mar 24 23:41:23 2022
X-Patchwork-Submitter: Hao Luo
X-Patchwork-Id: 12791083
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 24 Mar 2022 16:41:23 -0700
In-Reply-To: <20220324234123.1608337-1-haoluo@google.com>
Message-Id: <20220324234123.1608337-3-haoluo@google.com>
References: <20220324234123.1608337-1-haoluo@google.com>
Subject: [PATCH RFC bpf-next 2/2] selftests/bpf: Test mmapable task local storage.
From: Hao Luo
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann
Cc: yhs@fb.com, KP Singh, Martin KaFai Lau, Song Liu, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, Hao Luo
X-Patchwork-State: RFC

Tests mmapable task local storage.

Signed-off-by: Hao Luo
---
 .../bpf/prog_tests/task_local_storage.c       | 38 +++++++++++++++++++
 .../bpf/progs/task_local_storage_mmapable.c   | 38 +++++++++++++++++++
 2 files changed, 76 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/task_local_storage_mmapable.c

diff --git a/tools/testing/selftests/bpf/prog_tests/task_local_storage.c b/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
index 035c263aab1b..24e6edd32a78 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
@@ -6,8 +6,10 @@
 #include /* For SYS_xxx definitions */
 #include
 #include
+#include
 #include "task_local_storage.skel.h"
 #include "task_local_storage_exit_creds.skel.h"
+#include "task_local_storage_mmapable.skel.h"
 #include "task_ls_recursion.skel.h"
 
 static void test_sys_enter_exit(void)
@@ -81,6 +83,40 @@ static void test_recursion(void)
 	task_ls_recursion__destroy(skel);
 }
 
+#define MAGIC_VALUE 0xabcd1234
+
+static void test_mmapable(void)
+{
+	struct task_local_storage_mmapable *skel;
+	const long page_size = sysconf(_SC_PAGE_SIZE);
+	int fd, err;
+	void *ptr;
+
+	skel = task_local_storage_mmapable__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+		return;
+
+	fd = bpf_map__fd(skel->maps.mmapable_map);
+	ptr = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	if (!ASSERT_NEQ(ptr, MAP_FAILED, "mmap"))
+		goto out;
+
+	skel->bss->target_pid = syscall(SYS_gettid);
+
+	err = task_local_storage_mmapable__attach(skel);
+	if (!ASSERT_OK(err, "skel_attach"))
+		goto unmap;
+
+	syscall(SYS_gettid);
+
+	ASSERT_EQ(*(u64 *)ptr, MAGIC_VALUE, "value");
+
+unmap:
+	munmap(ptr, page_size);
+out:
+	task_local_storage_mmapable__destroy(skel);
+}
+
 void test_task_local_storage(void)
 {
 	if (test__start_subtest("sys_enter_exit"))
@@ -89,4 +125,6 @@ void test_task_local_storage(void)
 		test_exit_creds();
 	if (test__start_subtest("recursion"))
 		test_recursion();
+	if (test__start_subtest("mmapable"))
+		test_mmapable();
 }

diff --git a/tools/testing/selftests/bpf/progs/task_local_storage_mmapable.c b/tools/testing/selftests/bpf/progs/task_local_storage_mmapable.c
new file mode 100644
index 000000000000..8cd82bb7336a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_local_storage_mmapable.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Google */
+
+#include "vmlinux.h"
+#include
+#include
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC | BPF_F_MMAPABLE);
+	__type(key, int);
+	__type(value, long);
+} mmapable_map SEC(".maps");
+
+#define MAGIC_VALUE 0xabcd1234
+
+pid_t target_pid = 0;
+
+SEC("tp_btf/sys_enter")
+int BPF_PROG(on_enter, struct pt_regs *regs, long id)
+{
+	struct task_struct *task;
+	long *ptr;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	ptr = bpf_task_storage_get(&mmapable_map, task, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!ptr)
+		return 0;
+
+	*ptr = MAGIC_VALUE;
+	return 0;
+}