From patchwork Wed Jul 24 22:52:01 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13741418
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrii Nakryiko
To: bpf@vger.kernel.org
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko
Subject: [PATCH v2 bpf-next 01/10] lib/buildid: add single page-based file reader abstraction
Date: Wed, 24 Jul 2024 15:52:01 -0700
Message-ID: <20240724225210.545423-2-andrii@kernel.org>
In-Reply-To: <20240724225210.545423-1-andrii@kernel.org>
References: <20240724225210.545423-1-andrii@kernel.org>

Add a freader abstraction that transparently manages fetching and locally mapping the underlying file page(s) and provides a simple, direct data access interface. freader_fetch() is the single interface necessary. It accepts a file offset and the desired number of bytes to be accessed, and returns a kernel-mapped pointer that the caller can use to dereference data up to the requested size.
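As a usage illustration, here is a minimal sketch of the calling convention (not part of the patch; the caller context and the 64-byte scratch buffer size are assumptions, but the freader_* calls match the API this patch adds):

	char buf[64];                       /* scratch buffer for page-crossing reads */
	struct freader r;
	const Elf64_Ehdr *ehdr;
	int err = 0;

	freader_init_from_file(&r, buf, sizeof(buf), vma->vm_file->f_mapping);

	ehdr = freader_fetch(&r, 0, sizeof(*ehdr));
	if (!ehdr)
		err = r.err;                /* detailed error code is reported via r.err */
	/* otherwise use ehdr; note that a later freader_fetch() invalidates the pointer */

	freader_cleanup(&r);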
Requested size can't be bigger than the size of the extra buffer provided during initialization (because, worst case, all requested data has to be copied into it, so it's better to flag wrongly sized buffer unconditionally, regardless if requested data range is crossing page boundaries or not). If page is not paged in, or some of the conditions are not satisfied, NULL is returned and more detailed error code can be accessed through freader->err field. This approach makes the usage of freader_fetch() cleaner. To accommodate accessing file data that crosses page boundaries, user has to provide an extra buffer that will be used to make a local copy, if necessary. This is done to maintain a simple linear pointer data access interface. We switch existing build ID parsing logic to it, without changing or lifting any of the existing constraints, yet. This will be done separately. Given existing code was written with the assumption that it's always working with a single (first) page of the underlying ELF file, logic passes direct pointers around, which doesn't really work well with freader approach and would be limiting when removing the single page limitation. So we adjust all the logic to work in terms of file offsets. There is also a memory buffer-based version (freader_init_from_mem()) for cases when desired data is already available in kernel memory. This is used for parsing vmlinux's own build ID note. In this mode assumption is that provided data starts at "file offset" zero, which works great when parsing ELF notes sections, as all the parsing logic is relative to note section's start. Signed-off-by: Andrii Nakryiko --- lib/buildid.c | 278 +++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 217 insertions(+), 61 deletions(-) diff --git a/lib/buildid.c b/lib/buildid.c index 7954dd92e36c..1442a2483a8b 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -8,38 +8,174 @@ #define BUILD_ID 3 +struct freader { + void *buf; + u32 buf_sz; + int err; + union { + struct { + struct address_space *mapping; + struct page *page; + void *page_addr; + u64 file_off; + }; + struct { + const char *data; + u64 data_sz; + }; + }; +}; + +static void freader_init_from_file(struct freader *r, void *buf, u32 buf_sz, + struct address_space *mapping) +{ + memset(r, 0, sizeof(*r)); + r->buf = buf; + r->buf_sz = buf_sz; + r->mapping = mapping; +} + +static void freader_init_from_mem(struct freader *r, const char *data, u64 data_sz) +{ + memset(r, 0, sizeof(*r)); + r->data = data; + r->data_sz = data_sz; +} + +static void freader_put_page(struct freader *r) +{ + if (!r->page) + return; + kunmap_local(r->page_addr); + put_page(r->page); + r->page = NULL; +} + +static int freader_get_page(struct freader *r, u64 file_off) +{ + pgoff_t pg_off = file_off >> PAGE_SHIFT; + + freader_put_page(r); + + r->page = find_get_page(r->mapping, pg_off); + if (!r->page) + return -EFAULT; /* page not mapped */ + + r->page_addr = kmap_local_page(r->page); + r->file_off = file_off & PAGE_MASK; + + return 0; +} + +static const void *freader_fetch(struct freader *r, u64 file_off, size_t sz) +{ + int err; + + /* provided internal temporary buffer should be sized correctly */ + if (WARN_ON(r->buf && sz > r->buf_sz)) { + r->err = -E2BIG; + return NULL; + } + + if (unlikely(file_off + sz < file_off)) { + r->err = -EOVERFLOW; + return NULL; + } + + /* working with memory buffer is much more straightforward */ + if (!r->buf) { + if (file_off + sz > r->data_sz) { + r->err = -ERANGE; + return NULL; + } + return r->data + file_off; + } + + 
/* check if we need to fetch a different page first */ + if (!r->page || file_off < r->file_off || file_off >= r->file_off + PAGE_SIZE) { + err = freader_get_page(r, file_off); + if (err) { + r->err = err; + return NULL; + } + } + + /* if requested data is crossing page boundaries, we have to copy + * everything into our local buffer to keep a simple linear memory + * access interface + */ + if (file_off + sz > r->file_off + PAGE_SIZE) { + int part_sz = r->file_off + PAGE_SIZE - file_off; + + /* copy the part that resides in the current page */ + memcpy(r->buf, r->page_addr + (file_off - r->file_off), part_sz); + + /* fetch next page */ + err = freader_get_page(r, r->file_off + PAGE_SIZE); + if (err) { + r->err = err; + return NULL; + } + + /* copy the rest of requested data */ + memcpy(r->buf + part_sz, r->page_addr, sz - part_sz); + + return r->buf; + } + + /* if data fits in a single page, just return direct pointer */ + return r->page_addr + (file_off - r->file_off); +} + +static void freader_cleanup(struct freader *r) +{ + freader_put_page(r); +} + /* * Parse build id from the note segment. This logic can be shared between * 32-bit and 64-bit system, because Elf32_Nhdr and Elf64_Nhdr are * identical. */ -static int parse_build_id_buf(unsigned char *build_id, - __u32 *size, - const void *note_start, - Elf32_Word note_size) +static int parse_build_id_buf(struct freader *r, + unsigned char *build_id, __u32 *size, + u64 note_offs, Elf32_Word note_size) { - Elf32_Word note_offs = 0, new_offs; + const char note_name[] = "GNU"; + const size_t note_name_sz = sizeof(note_name); + u64 build_id_off, new_offs, note_end = note_offs + note_size; + u32 build_id_sz; + const Elf32_Nhdr *nhdr; + const char *data; - while (note_offs + sizeof(Elf32_Nhdr) < note_size) { - Elf32_Nhdr *nhdr = (Elf32_Nhdr *)(note_start + note_offs); + while (note_offs + sizeof(Elf32_Nhdr) < note_end) { + nhdr = freader_fetch(r, note_offs, sizeof(Elf32_Nhdr) + note_name_sz); + if (!nhdr) + return r->err; if (nhdr->n_type == BUILD_ID && - nhdr->n_namesz == sizeof("GNU") && - !strcmp((char *)(nhdr + 1), "GNU") && + nhdr->n_namesz == note_name_sz && + !strcmp((char *)(nhdr + 1), note_name) && nhdr->n_descsz > 0 && nhdr->n_descsz <= BUILD_ID_SIZE_MAX) { - memcpy(build_id, - note_start + note_offs + - ALIGN(sizeof("GNU"), 4) + sizeof(Elf32_Nhdr), - nhdr->n_descsz); - memset(build_id + nhdr->n_descsz, 0, - BUILD_ID_SIZE_MAX - nhdr->n_descsz); + + build_id_off = note_offs + sizeof(Elf32_Nhdr) + ALIGN(note_name_sz, 4); + build_id_sz = nhdr->n_descsz; + + /* freader_fetch() will invalidate nhdr pointer */ + data = freader_fetch(r, build_id_off, build_id_sz); + if (!data) + return r->err; + + memcpy(build_id, data, build_id_sz); + memset(build_id + build_id_sz, 0, BUILD_ID_SIZE_MAX - build_id_sz); if (size) - *size = nhdr->n_descsz; + *size = build_id_sz; return 0; } + new_offs = note_offs + sizeof(Elf32_Nhdr) + - ALIGN(nhdr->n_namesz, 4) + ALIGN(nhdr->n_descsz, 4); + ALIGN(nhdr->n_namesz, 4) + ALIGN(nhdr->n_descsz, 4); if (new_offs <= note_offs) /* overflow */ break; note_offs = new_offs; @@ -48,73 +184,87 @@ static int parse_build_id_buf(unsigned char *build_id, return -EINVAL; } -static inline int parse_build_id(const void *page_addr, +static inline int parse_build_id(struct freader *r, unsigned char *build_id, __u32 *size, - const void *note_start, + u64 note_start_off, Elf32_Word note_size) { /* check for overflow */ - if (note_start < page_addr || note_start + note_size < note_start) + if (note_start_off + note_size < 
note_start_off) return -EINVAL; /* only supports note that fits in the first page */ - if (note_start + note_size > page_addr + PAGE_SIZE) + if (note_start_off + note_size > PAGE_SIZE) return -EINVAL; - return parse_build_id_buf(build_id, size, note_start, note_size); + return parse_build_id_buf(r, build_id, size, note_start_off, note_size); } /* Parse build ID from 32-bit ELF */ -static int get_build_id_32(const void *page_addr, unsigned char *build_id, - __u32 *size) +static int get_build_id_32(struct freader *r, unsigned char *build_id, __u32 *size) { - Elf32_Ehdr *ehdr = (Elf32_Ehdr *)page_addr; - Elf32_Phdr *phdr; - int i; + const Elf32_Ehdr *ehdr; + const Elf32_Phdr *phdr; + __u32 phnum, i; + + ehdr = freader_fetch(r, 0, sizeof(Elf32_Ehdr)); + if (!ehdr) + return r->err; + + /* subsequent freader_fetch() calls invalidate pointers, so remember locally */ + phnum = ehdr->e_phnum; /* only supports phdr that fits in one page */ - if (ehdr->e_phnum > - (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr)) + if (phnum > (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr)) return -EINVAL; - phdr = (Elf32_Phdr *)(page_addr + sizeof(Elf32_Ehdr)); + for (i = 0; i < phnum; ++i) { + phdr = freader_fetch(r, i * sizeof(Elf32_Phdr), sizeof(Elf32_Phdr)); + if (!phdr) + return r->err; - for (i = 0; i < ehdr->e_phnum; ++i) { - if (phdr[i].p_type == PT_NOTE && - !parse_build_id(page_addr, build_id, size, - page_addr + phdr[i].p_offset, - phdr[i].p_filesz)) + if (phdr->p_type == PT_NOTE && + !parse_build_id(r, build_id, size, phdr->p_offset, phdr->p_filesz)) return 0; } return -EINVAL; } /* Parse build ID from 64-bit ELF */ -static int get_build_id_64(const void *page_addr, unsigned char *build_id, - __u32 *size) +static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *size) { - Elf64_Ehdr *ehdr = (Elf64_Ehdr *)page_addr; - Elf64_Phdr *phdr; - int i; + const Elf64_Ehdr *ehdr; + const Elf64_Phdr *phdr; + __u32 phnum, i; + + ehdr = freader_fetch(r, 0, sizeof(Elf64_Ehdr)); + if (!ehdr) + return r->err; + + /* subsequent freader_fetch() calls invalidate pointers, so remember locally */ + phnum = ehdr->e_phnum; /* only supports phdr that fits in one page */ - if (ehdr->e_phnum > - (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr)) + if (phnum > (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr)) return -EINVAL; - phdr = (Elf64_Phdr *)(page_addr + sizeof(Elf64_Ehdr)); + for (i = 0; i < phnum; ++i) { + phdr = freader_fetch(r, i * sizeof(Elf64_Phdr), sizeof(Elf64_Phdr)); + if (!phdr) + return r->err; - for (i = 0; i < ehdr->e_phnum; ++i) { - if (phdr[i].p_type == PT_NOTE && - !parse_build_id(page_addr, build_id, size, - page_addr + phdr[i].p_offset, - phdr[i].p_filesz)) + if (phdr->p_type == PT_NOTE && + !parse_build_id(r, build_id, size, phdr->p_offset, phdr->p_filesz)) return 0; } + return -EINVAL; } +/* enough for Elf64_Ehdr, Elf64_Phdr, and all the smaller requests */ +#define MAX_FREADER_BUF_SZ 64 + /* * Parse build ID of ELF file mapped to vma * @vma: vma object @@ -126,22 +276,25 @@ static int get_build_id_64(const void *page_addr, unsigned char *build_id, int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) { - Elf32_Ehdr *ehdr; - struct page *page; - void *page_addr; + const Elf32_Ehdr *ehdr; + struct freader r; + char buf[MAX_FREADER_BUF_SZ]; int ret; /* only works for page backed storage */ if (!vma->vm_file) return -EINVAL; - page = find_get_page(vma->vm_file->f_mapping, 0); - if (!page) - return -EFAULT; /* page not mapped */ + 
freader_init_from_file(&r, buf, sizeof(buf), vma->vm_file->f_mapping); + + /* fetch first 18 bytes of ELF header for checks */ + ehdr = freader_fetch(&r, 0, offsetofend(Elf32_Ehdr, e_type)); + if (!ehdr) { + ret = r.err; + goto out; + } ret = -EINVAL; - page_addr = kmap_local_page(page); - ehdr = (Elf32_Ehdr *)page_addr; /* compare magic x7f "ELF" */ if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) @@ -152,12 +305,11 @@ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, goto out; if (ehdr->e_ident[EI_CLASS] == ELFCLASS32) - ret = get_build_id_32(page_addr, build_id, size); + ret = get_build_id_32(&r, build_id, size); else if (ehdr->e_ident[EI_CLASS] == ELFCLASS64) - ret = get_build_id_64(page_addr, build_id, size); + ret = get_build_id_64(&r, build_id, size); out: - kunmap_local(page_addr); - put_page(page); + freader_cleanup(&r); return ret; } @@ -171,7 +323,11 @@ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, */ int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size) { - return parse_build_id_buf(build_id, NULL, buf, buf_size); + struct freader r; + + freader_init_from_mem(&r, buf, buf_size); + + return parse_build_id_buf(&r, build_id, NULL, 0, buf_size); } #if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID) || IS_ENABLED(CONFIG_VMCORE_INFO) From patchwork Wed Jul 24 22:52:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741419 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D02BF6F068 for ; Wed, 24 Jul 2024 22:52:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861540; cv=none; b=of4HLgDTIcNRxmlgDMPopuCa4ZR6QzJQD5NK7gX2YvrdzF6OwlpulxHrr7pjfERgQqlozQRKHXuMlCTgkNDW8s0id3ZUNEJU/8Lx2q8oZU53Qow4bD+j1wYuE/cJIVaI5QZ5+VPBA89yWASqGzXMRuEdCd3Q7AnUcOqvxnUZXy8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861540; c=relaxed/simple; bh=C6pZlI4b95ZXtBjMHdytNmZo2a2DbuYqAFHnzJFYuwM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Ja1gpEWsSvye2HnkLCodkAqJcoVRVWnMesiAQB2K3MlGfqvgAzQXBENi79NRPnQ3oioX+/v5FbxEG0zUTRSzqUUnTCPMt/1BBZyRHaE4Lzdtj/+3A/L2N2207mYihzXKkfGBJGJ5q5+yhDr3AZvLl8F1xuH1XLp46ejHc09M5bc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=TQ5vO9qh; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="TQ5vO9qh" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3ABE4C32781; Wed, 24 Jul 2024 22:52:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861540; bh=C6pZlI4b95ZXtBjMHdytNmZo2a2DbuYqAFHnzJFYuwM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TQ5vO9qhbM2GJE56vZsS/lOuShzvkYaxdZObXx7h0SBq5zPT+TFk3JKaQtNMmPm/b Zt/UnPeecmXrpRtskvooa1y4ovCwvyNHYlX4tohVHjDNJsRCuNZXQkXj8xjHj7BCCj Hb9RSTDC86TG4steWnHHflvUG2Ct63Fub0wiHe3Q+4kJmd7M/Jkm/+VoVXZG4ESrGj 2XJmGVlhE4wEnkf2FfU4pg+p3VnxQ4d3afOk4exQvPBXk0JfkNYXEUNz0Iqo/btYbJ 
bhAtrJ0zu3zaRayCKdmpE1OaU33VqG5DMp+b1mkKaVJ4yw/zuIMGHO6QHvuP/d+seT pQoXN6KfioxaA== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH v2 bpf-next 02/10] lib/buildid: take into account e_phoff when fetching program headers Date: Wed, 24 Jul 2024 15:52:02 -0700 Message-ID: <20240724225210.545423-3-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Current code assumption is that program (segment) headers are following ELF header immediately. This is a common case, but is not guaranteed. So take into account e_phoff field of the ELF header when accessing program headers. Reported-by: Alexey Dobriyan Signed-off-by: Andrii Nakryiko --- lib/buildid.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/lib/buildid.c b/lib/buildid.c index 1442a2483a8b..ce48ffab4111 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -206,7 +206,7 @@ static int get_build_id_32(struct freader *r, unsigned char *build_id, __u32 *si { const Elf32_Ehdr *ehdr; const Elf32_Phdr *phdr; - __u32 phnum, i; + __u32 phnum, phoff, i; ehdr = freader_fetch(r, 0, sizeof(Elf32_Ehdr)); if (!ehdr) @@ -214,13 +214,14 @@ static int get_build_id_32(struct freader *r, unsigned char *build_id, __u32 *si /* subsequent freader_fetch() calls invalidate pointers, so remember locally */ phnum = ehdr->e_phnum; + phoff = READ_ONCE(ehdr->e_phoff); /* only supports phdr that fits in one page */ if (phnum > (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr)) return -EINVAL; for (i = 0; i < phnum; ++i) { - phdr = freader_fetch(r, i * sizeof(Elf32_Phdr), sizeof(Elf32_Phdr)); + phdr = freader_fetch(r, phoff + i * sizeof(Elf32_Phdr), sizeof(Elf32_Phdr)); if (!phdr) return r->err; @@ -237,6 +238,7 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si const Elf64_Ehdr *ehdr; const Elf64_Phdr *phdr; __u32 phnum, i; + __u64 phoff; ehdr = freader_fetch(r, 0, sizeof(Elf64_Ehdr)); if (!ehdr) @@ -244,13 +246,14 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si /* subsequent freader_fetch() calls invalidate pointers, so remember locally */ phnum = ehdr->e_phnum; + phoff = READ_ONCE(ehdr->e_phoff); /* only supports phdr that fits in one page */ if (phnum > (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr)) return -EINVAL; for (i = 0; i < phnum; ++i) { - phdr = freader_fetch(r, i * sizeof(Elf64_Phdr), sizeof(Elf64_Phdr)); + phdr = freader_fetch(r, phoff + i * sizeof(Elf64_Phdr), sizeof(Elf64_Phdr)); if (!phdr) return r->err; From patchwork Wed Jul 24 22:52:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741420 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C7A0513C683 for ; Wed, 24 Jul 2024 22:52:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; 
arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861543; cv=none; b=NM3JjjB0bSftr12emCF4fBxlGVt56gOnhM/ONDIN4bcqDx83iMvq3NfU+6eImwOQUu5HZ7UDQFNMS0WdPxJ+UcK0HwCDIY3H6/YM3KHhmwmafeAMg+bIHZFf64gJXCtSVD5CrgmNw36nwdGH98C+/V/Ot7QT2P9NrYioxOSi51s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861543; c=relaxed/simple; bh=azpAbr+6SOKGRgY4Eh1WfByGTMRsHGb35RAVJRMiCV8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VQpY9JhQSPdNe/rKRJcAZug0q9OU/wpUjotO9fhvz4Q2VV1RdNVMihWg+OoIk013f5gqWoGPlEDpdz/1kLW9iHH7XMBGlJqvpymNTlsXKrXw2/31GzEAbAEJounHBTX3iC2IwC/s4hZTEznKDyG4vsxKWZgiDhox22x1t/CG3fc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=UjGKrOEr; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="UjGKrOEr" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 826EAC32781; Wed, 24 Jul 2024 22:52:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861543; bh=azpAbr+6SOKGRgY4Eh1WfByGTMRsHGb35RAVJRMiCV8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UjGKrOEr8SUACvJMfJ7sDCw9seYAgaMF3ZMxmmZpq4vj4qMYa9xiqeAmuuKoohnBo 7+duYW6MfOp2gbqM7OlYLUwOZZASzT+SMf6Ho3xL2Pi2VfeWd1EAN9VDOf7BXja1lc mjCGPgBTNySMRJpVrFk0vOteGjmcj5j3cRTooKuOIj5POsidfNKxtbMTM+Jf0ZOcGK dq7xrFoHuU8iDX8rQmcWNNDNH7M7TsroTDYTF5aHNuwhOLmlgW/kXUn8YE2RQAG4sh m957If7WczlcdFQOpAXnRrOnl7eSXxJbNVc0s21u2tDNru3WMztTenxeZuJSV+UdDp RQQWSQ+iWStvA== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH v2 bpf-next 03/10] lib/buildid: remove single-page limit for PHDR search Date: Wed, 24 Jul 2024 15:52:03 -0700 Message-ID: <20240724225210.545423-4-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Now that freader allows to access multiple pages transparently, there is no need to limit program headers to the very first ELF file page. Remove this limitation, but still put some sane limit on amount of program headers that we are willing to iterate over (set arbitrarily to 256). 
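For readability, this is how the 64-bit program header walk looks with this patch applied, reconstructed in one place from the diff below (the 32-bit variant is analogous):

	phnum = ehdr->e_phnum;
	phoff = READ_ONCE(ehdr->e_phoff);

	/* set upper bound on amount of segments (phdrs) we iterate */
	if (phnum > MAX_PHDR_CNT)
		phnum = MAX_PHDR_CNT;

	for (i = 0; i < phnum; ++i) {
		phdr = freader_fetch(r, phoff + i * sizeof(Elf64_Phdr), sizeof(Elf64_Phdr));
		if (!phdr)
			return r->err;

		if (phdr->p_type == PT_NOTE &&
		    !parse_build_id(r, build_id, size, phdr->p_offset, phdr->p_filesz))
			return 0;
	}
	return -EINVAL;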
Signed-off-by: Andrii Nakryiko --- lib/buildid.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/lib/buildid.c b/lib/buildid.c index ce48ffab4111..49fcb9a549bf 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -8,6 +8,8 @@ #define BUILD_ID 3 +#define MAX_PHDR_CNT 256 + struct freader { void *buf; u32 buf_sz; @@ -216,9 +218,9 @@ static int get_build_id_32(struct freader *r, unsigned char *build_id, __u32 *si phnum = ehdr->e_phnum; phoff = READ_ONCE(ehdr->e_phoff); - /* only supports phdr that fits in one page */ - if (phnum > (PAGE_SIZE - sizeof(Elf32_Ehdr)) / sizeof(Elf32_Phdr)) - return -EINVAL; + /* set upper bound on amount of segments (phdrs) we iterate */ + if (phnum > MAX_PHDR_CNT) + phnum = MAX_PHDR_CNT; for (i = 0; i < phnum; ++i) { phdr = freader_fetch(r, phoff + i * sizeof(Elf32_Phdr), sizeof(Elf32_Phdr)); @@ -248,9 +250,9 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si phnum = ehdr->e_phnum; phoff = READ_ONCE(ehdr->e_phoff); - /* only supports phdr that fits in one page */ - if (phnum > (PAGE_SIZE - sizeof(Elf64_Ehdr)) / sizeof(Elf64_Phdr)) - return -EINVAL; + /* set upper bound on amount of segments (phdrs) we iterate */ + if (phnum > MAX_PHDR_CNT) + phnum = MAX_PHDR_CNT; for (i = 0; i < phnum; ++i) { phdr = freader_fetch(r, phoff + i * sizeof(Elf64_Phdr), sizeof(Elf64_Phdr)); From patchwork Wed Jul 24 22:52:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741421 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E5B213C9D3 for ; Wed, 24 Jul 2024 22:52:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861547; cv=none; b=eR1l+f4uDiWRV4FPWrnue9qhEV/kLmC9yV/9HAvEscaT0DxtZuD9qNso5ccLcovTx0pUNCxIMAiwKUROQBuZTgo/4JlPK2hAJfwGFpedZR8b+TKVl826uvMyiDW1EeraTE74v0Uao6MognjqpR4L67jFRs6vuepBPX5uEIsWDq0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861547; c=relaxed/simple; bh=ttN3FJZUIXMIoxb3+/1Zwk+6ong/Mn3pgESyR8mPPTQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=IG0ZedqHJXvUZvbtKaufhZsyhy74um6fDKwiz0pMWs7Yhucw9cI4bl+Io+wj9mHRk8DJCtUalT6v0mLKWuFpWAfOlqzmGmAkvZ9ChidjmE6MTJuzdicie6+n+mqqwxL1ZZlL4r54t3MaAzaW13mfXQcl2eGtz+nWQ7geU1CW1Ng= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=esih5uxC; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="esih5uxC" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C4159C4AF0F; Wed, 24 Jul 2024 22:52:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861547; bh=ttN3FJZUIXMIoxb3+/1Zwk+6ong/Mn3pgESyR8mPPTQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=esih5uxCybKcEIXW7Uqc3DL51opICdz6HHVG1J6Vfx7wCI6BfOD481/t5nwtlbRzT k/279fhp3fHxpaIW4ahunTN3A2seclhLFmwJeHSW1g8bOO5bBn7LJ94p1JR2vJMjHR ujSSDoyFVwY39DftebQIdmhqOzqe1JpOyu++tQ+yAXxtX2r7SZFFyJH1anQ0eaVCQ/ 
vhwIvddChY20RKQ9hqKhkNff6xiHNvln9C0KjhbP43WBOvnVRQ2SqikP0Mrum8EZ1X Ilqv1G+KJH8MHKwPSUEdmdEZjTZoCjRAgiT8evGe0tQp0c7BTXqpV/R4SzCScYyJkB msrEBspoAclmw== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH v2 bpf-next 04/10] lib/buildid: rename build_id_parse() into build_id_parse_nofault() Date: Wed, 24 Jul 2024 15:52:04 -0700 Message-ID: <20240724225210.545423-5-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Make it clear that build_id_parse() assumes that it can take no page fault by renaming it and current few users to build_id_parse_nofault(). Also add build_id_parse() stub, which will be implemented in subsequent patches, just to preserve succesful kernel compilation if another upcoming user of build_id_parse() (PROCMAP_QUERY ioctl() for /proc//maps, see [0]) gets merged with bpf-next tree. That ioctl() users of build_id_parse() doesn't have no-page-fault restriction, so it will automatically benefit from sleepable implementation. [0] https://lore.kernel.org/linux-mm/20240627170900.1672542-4-andrii@kernel.org/ Signed-off-by: Andrii Nakryiko --- include/linux/buildid.h | 4 ++-- kernel/bpf/stackmap.c | 2 +- kernel/events/core.c | 2 +- lib/buildid.c | 24 +++++++++++++++++++++--- 4 files changed, 25 insertions(+), 7 deletions(-) diff --git a/include/linux/buildid.h b/include/linux/buildid.h index 20aa3c2d89f7..014a88c41073 100644 --- a/include/linux/buildid.h +++ b/include/linux/buildid.h @@ -7,8 +7,8 @@ #define BUILD_ID_SIZE_MAX 20 struct vm_area_struct; -int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, - __u32 *size); +int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size); +int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size); int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size); #if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID) || IS_ENABLED(CONFIG_VMCORE_INFO) diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index c99f8e5234ac..770ae8e88016 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -156,7 +156,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs, goto build_id_valid; } vma = find_vma(current->mm, ips[i]); - if (!vma || build_id_parse(vma, id_offs[i].build_id, NULL)) { + if (!vma || build_id_parse_nofault(vma, id_offs[i].build_id, NULL)) { /* per entry fall back to ips */ id_offs[i].status = BPF_STACK_BUILD_ID_IP; id_offs[i].ip = ips[i]; diff --git a/kernel/events/core.c b/kernel/events/core.c index ab6c4c942f79..c2079e25f211 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -8850,7 +8850,7 @@ static void perf_event_mmap_event(struct perf_mmap_event *mmap_event) mmap_event->event_id.header.size = sizeof(mmap_event->event_id) + size; if (atomic_read(&nr_build_id_events)) - build_id_parse(vma, mmap_event->build_id, &mmap_event->build_id_size); + build_id_parse_nofault(vma, mmap_event->build_id, &mmap_event->build_id_size); perf_iterate_sb(perf_event_mmap_output, mmap_event, diff --git a/lib/buildid.c b/lib/buildid.c index 
49fcb9a549bf..5f898fee43d7 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -276,10 +276,12 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si * @build_id: buffer to store build id, at least BUILD_ID_SIZE long * @size: returns actual build id size in case of success * - * Return: 0 on success, -EINVAL otherwise + * Assumes no page fault can be taken, so if relevant portions of ELF file are + * not already paged in, fetching of build ID fails. + * + * Return: 0 on success; negative error, otherwise */ -int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, - __u32 *size) +int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) { const Elf32_Ehdr *ehdr; struct freader r; @@ -318,6 +320,22 @@ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, return ret; } +/* + * Parse build ID of ELF file mapped to VMA + * @vma: vma object + * @build_id: buffer to store build id, at least BUILD_ID_SIZE long + * @size: returns actual build id size in case of success + * + * Assumes faultable context and can cause page faults to bring in file data + * into page cache. + * + * Return: 0 on success; negative error, otherwise + */ +int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) +{ + return -EOPNOTSUPP; +} + /** * build_id_parse_buf - Get build ID from a buffer * @buf: ELF note section(s) to parse From patchwork Wed Jul 24 22:52:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741422 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6717213C918 for ; Wed, 24 Jul 2024 22:52:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861550; cv=none; b=ORiLSryUQ2KqpMK8oVaVEpbkxXdG5IMDfSi0IR6+LTHxsbd5+W53KxvfawvxSLrWeg7a5b2uFqNfLRzLk7fCRLyKFKiT8paegq3eshaeRJM14rWY/QQ6KfHYgAFFlIcR4ePO7OShjT3Ky3lC6q7lnKwrBsffSmGZ1dhODMBOBvU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861550; c=relaxed/simple; bh=+9jpZjOCZAyOPgnFUaMIL0c75rF54bF41tQH1aEgnRg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=nwAp6byba2Lq2mpz3sKYLhw4fdFIBCb9BAwiltUtSg/1gKSgG1c/5P84ORmd3W7MO6lMwELU2QnduNwQvv+nNWDfPIHNnzzmlJ4kc4CHUHXkTu4Mfh/uZ7EA5WCQUJWqhEQRQb0YajXtoOJaZRSLGVh5+/ShKFQqjtA4c0QrnQQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=uG6+2et4; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="uG6+2et4" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 19CC1C32781; Wed, 24 Jul 2024 22:52:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861550; bh=+9jpZjOCZAyOPgnFUaMIL0c75rF54bF41tQH1aEgnRg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uG6+2et43OBP3FNk1iAcFBteE3sbEyD50GG4olgea0zPxXigaux0PlOVnBi9DW6Uq 2v9IJ37KZRRMt0IftZlUrokKuS58syUGRkf5jRRD5CPF485YrJBDMNsPgsUKhiwBu5 
S2N8fpL547zmvJPqVlCkToQPmuIh8hFSIRyfsKKjsnbI88zGAbiDcEsFu8HzQMXu9D a1659IeuFCW5FMJj4bDS4HAwBLvkQefpnYzsOgXloykMnDilPaN7PwR9r6hjboiOqR h+EHRyjOBCfUBsfLUssy8gs+82RQRHZYjuWZRi6Lo3N6iSbK/0DYeTkZq8G3bAIlMm C69F0VwDpOZZQ== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko , Omar Sandoval Subject: [PATCH v2 bpf-next 05/10] lib/buildid: implement sleepable build_id_parse() API Date: Wed, 24 Jul 2024 15:52:05 -0700 Message-ID: <20240724225210.545423-6-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Extend freader with a flag specifying whether it's OK to cause page fault to fetch file data that is not already physically present in memory. With this, it's now easy to wait for data if the caller is running in sleepable (faultable) context. We utilize read_cache_folio() to bring the desired file page into page cache, after which the rest of the logic works just the same at page level. Suggested-by: Omar Sandoval Cc: Shakeel Butt Cc: Johannes Weiner Signed-off-by: Andrii Nakryiko --- lib/buildid.c | 49 ++++++++++++++++++++++++++++++++++--------------- 1 file changed, 34 insertions(+), 15 deletions(-) diff --git a/lib/buildid.c b/lib/buildid.c index 5f898fee43d7..23bfc811981a 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -20,6 +20,7 @@ struct freader { struct page *page; void *page_addr; u64 file_off; + bool may_fault; }; struct { const char *data; @@ -29,12 +30,13 @@ struct freader { }; static void freader_init_from_file(struct freader *r, void *buf, u32 buf_sz, - struct address_space *mapping) + struct address_space *mapping, bool may_fault) { memset(r, 0, sizeof(*r)); r->buf = buf; r->buf_sz = buf_sz; r->mapping = mapping; + r->may_fault = may_fault; } static void freader_init_from_mem(struct freader *r, const char *data, u64 data_sz) @@ -60,6 +62,17 @@ static int freader_get_page(struct freader *r, u64 file_off) freader_put_page(r); r->page = find_get_page(r->mapping, pg_off); + + if (!r->page && r->may_fault) { + struct folio *folio; + + folio = read_cache_folio(r->mapping, pg_off, NULL, NULL); + if (IS_ERR(folio)) + return PTR_ERR(folio); + + r->page = folio_file_page(folio, pg_off); + } + if (!r->page) return -EFAULT; /* page not mapped */ @@ -270,18 +283,8 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si /* enough for Elf64_Ehdr, Elf64_Phdr, and all the smaller requests */ #define MAX_FREADER_BUF_SZ 64 -/* - * Parse build ID of ELF file mapped to vma - * @vma: vma object - * @build_id: buffer to store build id, at least BUILD_ID_SIZE long - * @size: returns actual build id size in case of success - * - * Assumes no page fault can be taken, so if relevant portions of ELF file are - * not already paged in, fetching of build ID fails. 
- * - * Return: 0 on success; negative error, otherwise - */ -int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) +static int __build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, + __u32 *size, bool may_fault) { const Elf32_Ehdr *ehdr; struct freader r; @@ -292,7 +295,7 @@ int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, if (!vma->vm_file) return -EINVAL; - freader_init_from_file(&r, buf, sizeof(buf), vma->vm_file->f_mapping); + freader_init_from_file(&r, buf, sizeof(buf), vma->vm_file->f_mapping, may_fault); /* fetch first 18 bytes of ELF header for checks */ ehdr = freader_fetch(&r, 0, offsetofend(Elf32_Ehdr, e_type)); @@ -320,6 +323,22 @@ int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, return ret; } +/* + * Parse build ID of ELF file mapped to vma + * @vma: vma object + * @build_id: buffer to store build id, at least BUILD_ID_SIZE long + * @size: returns actual build id size in case of success + * + * Assumes no page fault can be taken, so if relevant portions of ELF file are + * not already paged in, fetching of build ID fails. + * + * Return: 0 on success; negative error, otherwise + */ +int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) +{ + return __build_id_parse(vma, build_id, size, false /* !may_fault */); +} + /* * Parse build ID of ELF file mapped to VMA * @vma: vma object @@ -333,7 +352,7 @@ int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, */ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size) { - return -EOPNOTSUPP; + return __build_id_parse(vma, build_id, size, true /* may_fault */); } /** From patchwork Wed Jul 24 22:52:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741423 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1806B13D610 for ; Wed, 24 Jul 2024 22:52:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861554; cv=none; b=CpuQlriia7VLcH0hVhVYrJSMG74wrG/mOK3vskwmnJEvGslYwwoGUayLQqG0Nj92yoj0F7qQyQzopoPRSZAYRYdYx4UfJaf8JGd9tlHetFQPhdSqNRtt2UPGtd6uCtRaNo+bkIAByjTN7LdWlqtn0OkcXo66tdz7T6H2lX3vO38= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861554; c=relaxed/simple; bh=z2GiV9WjwgN2ZnoTOoBHdWopafhvx9MuPC6sEC6Crd8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LOhTIsSps+mNV0HU/IPdRv6hMC/vDrXMbXWPKqLKXetpDaDTA7I/aB86GchZQ8C4dR0QxOlflxj1RFFExzL1WkWkVbyRXlCP/K0ovfQgMGfskLHXR/y/cd19NJ5o54nCdS3HbjbDhm4w7S83rB8OCgQb+6cEu8nuXWpmqZGV2+w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=iHUH9AVU; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="iHUH9AVU" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 788D3C32781; Wed, 24 Jul 2024 22:52:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=kernel.org; s=k20201202; t=1721861553; bh=z2GiV9WjwgN2ZnoTOoBHdWopafhvx9MuPC6sEC6Crd8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iHUH9AVUcEK7NLYHGspPRaMdC8HeusUjt3LTsHRjp9gWWo4MPW4MgLl2MGu3YDiIs xVyIwtqGyMCbLdQyVaoad66Nd0AA9SZ+DyPte034LGtLR7fMFcHhOKt8yV5Pi1zHCV 7NIF+N1Jgy1otd8xqT8YO1nsRgn33w2pwuxtxCiWzVy/AyvV5kCs5GbNH9l7iEojsQ 0mBTpOZrMJlWbOXIk690QI2WW2TOrsWw9b4/lRTZgE4+q/bMAR3EsUPnO8vFWkD80U 3kWPgDLeUtEvxJnjxMcMrqxANgHr5nvlHIwutDewpDBIRh5kCeXTbwsPo0/iF5oCME rrEf7fiznF4RQ== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH v2 bpf-next 06/10] lib/buildid: don't limit .note.gnu.build-id to the first page in ELF Date: Wed, 24 Jul 2024 15:52:06 -0700 Message-ID: <20240724225210.545423-7-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net With freader we don't need to restrict ourselves to a single page, so let's allow ELF notes to be at any valid position with the file. We also merge parse_build_id() and parse_build_id_buf() as now the only difference between them is note offset overflow, which makes sense to check in all situations. Signed-off-by: Andrii Nakryiko --- lib/buildid.c | 28 +++++++--------------------- 1 file changed, 7 insertions(+), 21 deletions(-) diff --git a/lib/buildid.c b/lib/buildid.c index 23bfc811981a..419966d88cd5 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -152,9 +152,8 @@ static void freader_cleanup(struct freader *r) * 32-bit and 64-bit system, because Elf32_Nhdr and Elf64_Nhdr are * identical. 
*/ -static int parse_build_id_buf(struct freader *r, - unsigned char *build_id, __u32 *size, - u64 note_offs, Elf32_Word note_size) +static int parse_build_id(struct freader *r, unsigned char *build_id, __u32 *size, + u64 note_offs, Elf32_Word note_size) { const char note_name[] = "GNU"; const size_t note_name_sz = sizeof(note_name); @@ -163,6 +162,10 @@ static int parse_build_id_buf(struct freader *r, const Elf32_Nhdr *nhdr; const char *data; + /* check for overflow */ + if (note_offs + note_size < note_offs) + return -EINVAL; + while (note_offs + sizeof(Elf32_Nhdr) < note_end) { nhdr = freader_fetch(r, note_offs, sizeof(Elf32_Nhdr) + note_name_sz); if (!nhdr) @@ -199,23 +202,6 @@ static int parse_build_id_buf(struct freader *r, return -EINVAL; } -static inline int parse_build_id(struct freader *r, - unsigned char *build_id, - __u32 *size, - u64 note_start_off, - Elf32_Word note_size) -{ - /* check for overflow */ - if (note_start_off + note_size < note_start_off) - return -EINVAL; - - /* only supports note that fits in the first page */ - if (note_start_off + note_size > PAGE_SIZE) - return -EINVAL; - - return parse_build_id_buf(r, build_id, size, note_start_off, note_size); -} - /* Parse build ID from 32-bit ELF */ static int get_build_id_32(struct freader *r, unsigned char *build_id, __u32 *size) { @@ -369,7 +355,7 @@ int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size) freader_init_from_mem(&r, buf, buf_size); - return parse_build_id_buf(&r, build_id, NULL, 0, buf_size); + return parse_build_id(&r, build_id, NULL, 0, buf_size); } #if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID) || IS_ENABLED(CONFIG_VMCORE_INFO) From patchwork Wed Jul 24 22:52:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741424 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ED0F54D8C6 for ; Wed, 24 Jul 2024 22:52:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861557; cv=none; b=MuejeI0J4LBF73QK5+h2AyckPNH/wpY3LVNGWSs3dwAf51HCh0V0w5mIAbQ5zQj/+Wmw2tOU8h6eMcL3KoshdBs8uxag7nuDE3q1ozJFptYUTxqNXTZUnNva0kZLIuicbjKRLZ1W/GIIrGZsAMq1kHGJHLy7NN+xljbaSK/x4+M= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861557; c=relaxed/simple; bh=ntgIIpn+tzkBcO08o2+RKeRyEZVireJUIPte90Dh4v8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=kfeueVRljcKLZNEOit4EXqncwD220bZpU6SG3EuI+Z1OiftgHSv4qyC2NT+nHHT/bqH7G5XfPmq1FFi/Zs5OZn+7HDdgLBtNj16QA1YWT1PJaT3JdebirVjg6ucCZgEOjDp3ZSMx6vwWQWGrJXtSXqUywHo1EVR+SJhQT1os+8M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=DQwpSIMi; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="DQwpSIMi" Received: by smtp.kernel.org (Postfix) with ESMTPSA id AA922C32781; Wed, 24 Jul 2024 22:52:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861556; bh=ntgIIpn+tzkBcO08o2+RKeRyEZVireJUIPte90Dh4v8=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DQwpSIMimhn7FkBAz2vr8LNKFVUQN83+hiSazx2Qt70eqWzSTOLdzhmnRy2wauKDC XprAOeoDwPz6pMgua9jNTDdtoZJnpt50jO3m0QcetBXcmIA8QAdtdsI5KaOKIjZ1AC w9vkqW0aq5enF+8I1XGYtiyOYNBVO/F+V2o/u7oBAXDrbIay1V+rNz9TI4T8pfamGA 0/u2trA0alBnBrJLjiK+Ydm9nA3nQzoZlHqXaCoZWbNDKkQOkoDFHiL2USGDbGWZRQ 9YorGVDfMlYftfv7v+SXMpkIo61qyCkdZza2jvCfu6uIaes9fzSh9ctcz/3Phbjfze Utnqd2yaZxa9A== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH v2 bpf-next 07/10] lib/buildid: harden build ID parsing logic some more Date: Wed, 24 Jul 2024 15:52:07 -0700 Message-ID: <20240724225210.545423-8-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Harden build ID parsing logic some more, adding explicit READ_ONCE() when fetching values that we then use to check correctness and various note iteration invariants. Suggested-by: Andi Kleen Signed-off-by: Andrii Nakryiko --- lib/buildid.c | 32 +++++++++++++++++--------------- 1 file changed, 17 insertions(+), 15 deletions(-) diff --git a/lib/buildid.c b/lib/buildid.c index 419966d88cd5..7e36a32fbb90 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -158,7 +158,7 @@ static int parse_build_id(struct freader *r, unsigned char *build_id, __u32 *siz const char note_name[] = "GNU"; const size_t note_name_sz = sizeof(note_name); u64 build_id_off, new_offs, note_end = note_offs + note_size; - u32 build_id_sz; + u32 build_id_sz, name_sz, desc_sz; const Elf32_Nhdr *nhdr; const char *data; @@ -171,14 +171,15 @@ static int parse_build_id(struct freader *r, unsigned char *build_id, __u32 *siz if (!nhdr) return r->err; - if (nhdr->n_type == BUILD_ID && - nhdr->n_namesz == note_name_sz && - !strcmp((char *)(nhdr + 1), note_name) && - nhdr->n_descsz > 0 && - nhdr->n_descsz <= BUILD_ID_SIZE_MAX) { + name_sz = READ_ONCE(nhdr->n_namesz); + desc_sz = READ_ONCE(nhdr->n_descsz); + if (READ_ONCE(nhdr->n_type) == BUILD_ID && + name_sz == note_name_sz && + !strncmp((char *)(nhdr + 1), note_name, note_name_sz) && + desc_sz > 0 && desc_sz <= BUILD_ID_SIZE_MAX) { build_id_off = note_offs + sizeof(Elf32_Nhdr) + ALIGN(note_name_sz, 4); - build_id_sz = nhdr->n_descsz; + build_id_sz = desc_sz; /* freader_fetch() will invalidate nhdr pointer */ data = freader_fetch(r, build_id_off, build_id_sz); @@ -192,8 +193,7 @@ static int parse_build_id(struct freader *r, unsigned char *build_id, __u32 *siz return 0; } - new_offs = note_offs + sizeof(Elf32_Nhdr) + - ALIGN(nhdr->n_namesz, 4) + ALIGN(nhdr->n_descsz, 4); + new_offs = note_offs + sizeof(Elf32_Nhdr) + ALIGN(name_sz, 4) + ALIGN(desc_sz, 4); if (new_offs <= note_offs) /* overflow */ break; note_offs = new_offs; @@ -214,7 +214,7 @@ static int get_build_id_32(struct freader *r, unsigned char *build_id, __u32 *si return r->err; /* subsequent freader_fetch() calls invalidate pointers, so remember locally */ - phnum = ehdr->e_phnum; + phnum = READ_ONCE(ehdr->e_phnum); phoff = READ_ONCE(ehdr->e_phoff); /* set upper bound on amount of segments (phdrs) we iterate */ @@ -226,8 +226,9 @@ static int get_build_id_32(struct freader *r, unsigned char *build_id, __u32 *si if (!phdr) 
return r->err; - if (phdr->p_type == PT_NOTE && - !parse_build_id(r, build_id, size, phdr->p_offset, phdr->p_filesz)) + if (READ_ONCE(phdr->p_type) == PT_NOTE && + !parse_build_id(r, build_id, size, + READ_ONCE(phdr->p_offset), READ_ONCE(phdr->p_filesz))) return 0; } return -EINVAL; @@ -246,7 +247,7 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si return r->err; /* subsequent freader_fetch() calls invalidate pointers, so remember locally */ - phnum = ehdr->e_phnum; + phnum = READ_ONCE(ehdr->e_phnum); phoff = READ_ONCE(ehdr->e_phoff); /* set upper bound on amount of segments (phdrs) we iterate */ @@ -258,8 +259,9 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si if (!phdr) return r->err; - if (phdr->p_type == PT_NOTE && - !parse_build_id(r, build_id, size, phdr->p_offset, phdr->p_filesz)) + if (READ_ONCE(phdr->p_type) == PT_NOTE && + !parse_build_id(r, build_id, size, + READ_ONCE(phdr->p_offset), READ_ONCE(phdr->p_filesz))) return 0; } From patchwork Wed Jul 24 22:52:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741425 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 43C2013C683 for ; Wed, 24 Jul 2024 22:52:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861560; cv=none; b=G9LptMyoDgkMw8j9gqqoCB3sMhbmKag0PuISVS3GBhKw2gGnXpyPsIm4RpZKs+5ohav7rhKW9zt0kkb/O5ka5oXQ7gBOm/F+s5aYtKsrUFU1Zu1AFH+9IqGc8xNub5K6jOleilUH7PiQ8veln9/gSB8QFN11m9prESGgwWt1l+k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861560; c=relaxed/simple; bh=kcQeVuVwnIJJkBHc/ynMBCpPQQtY5kY6QBTEhCYYFCI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GH5Z5KzV1FuytIL+MwEsl6N1atl6LV4VJS41kzc02S4LcUYqdVZxhAzPiZkS7Ej81aCSP37PldDsdkV/cARuoVO+aWeNRvtIEcyG26U0kSpj3Sz5QGPO9sRbR4O6bSeHOTICiO3Pkl0F0cNNHVLqZCvcTl0sIIr3hDz18WPJ0Mc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=BGDkWmUr; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="BGDkWmUr" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E9582C4AF0B; Wed, 24 Jul 2024 22:52:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861560; bh=kcQeVuVwnIJJkBHc/ynMBCpPQQtY5kY6QBTEhCYYFCI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BGDkWmUrnZuRcU338NjBL7jISkv1hnIqu7EmRZtJYWqjfuNTDFVhavENhOlvKQQSC htiPnRUx56FVGLy6y9FmD0Mg9cXTodSpvRBXVhd9VbCcimJEMqZgr8XsaMut99P4EW 4O4k1eFKlMHBPhHkhHGx0nzwefPziyBydQetHHkbo5UjUelJlYxKHkwAK76DqlRH/A d7jK6YSLVcsQ37iktrEAyy9ZPcna3kEuhe9qqjN7Pw+EQtJF1vH5ZEE7sUzEKu0q8C l7GXLqDK7VXlh/YLNPFlW9fsLME0ruLo2ZVLvLoQy8qtAiJNmK4Sp92LI88/xdPMFc USYEDfusZiTQQ== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH 
v2 bpf-next 08/10] bpf: decouple stack_map_get_build_id_offset() from perf_callchain_entry Date: Wed, 24 Jul 2024 15:52:08 -0700 Message-ID: <20240724225210.545423-9-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Change stack_map_get_build_id_offset() which is used to convert stack trace IP addresses into build ID+offset pairs. Right now this function accepts an array of u64s as an input, and uses array of struct bpf_stack_build_id as an output. This is problematic because u64 array is coming from perf_callchain_entry, which is (non-sleepable) RCU protected, so once we allows sleepable build ID fetching, this all breaks down. But its actually pretty easy to make stack_map_get_build_id_offset() works with array of struct bpf_stack_build_id as both input and output. Which is what this patch is doing, eliminating the dependency on perf_callchain_entry. We require caller to fill out bpf_stack_build_id.ip fields (all other can be left uninitialized), and update in place as we do build ID resolution. We make sure to READ_ONCE() and cache locally current IP value as we used it in a few places to find matching VMA and so on. Given this data is directly accessible and modifiable by user's BPF code, we should make sure to have a consistent view of it. Signed-off-by: Andrii Nakryiko --- kernel/bpf/stackmap.c | 49 +++++++++++++++++++++++++++++-------------- 1 file changed, 33 insertions(+), 16 deletions(-) diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index 770ae8e88016..6457222b0b46 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -124,8 +124,18 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr) return ERR_PTR(err); } +/* + * Expects all id_offs[i].ip values to be set to correct initial IPs. + * They will be subsequently: + * - either adjusted in place to a file offset, if build ID fetching + * succeeds; in this case id_offs[i].build_id is set to correct build ID, + * and id_offs[i].status is set to BPF_STACK_BUILD_ID_VALID; + * - or IP will be kept intact, if build ID fetching failed; in this case + * id_offs[i].build_id is zeroed out and id_offs[i].status is set to + * BPF_STACK_BUILD_ID_IP. 
+ */ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs, - u64 *ips, u32 trace_nr, bool user) + u32 trace_nr, bool user) { int i; struct mmap_unlock_irq_work *work = NULL; @@ -142,30 +152,28 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs, /* cannot access current->mm, fall back to ips */ for (i = 0; i < trace_nr; i++) { id_offs[i].status = BPF_STACK_BUILD_ID_IP; - id_offs[i].ip = ips[i]; memset(id_offs[i].build_id, 0, BUILD_ID_SIZE_MAX); } return; } for (i = 0; i < trace_nr; i++) { - if (range_in_vma(prev_vma, ips[i], ips[i])) { + u64 ip = READ_ONCE(id_offs[i].ip); + + if (range_in_vma(prev_vma, ip, ip)) { vma = prev_vma; - memcpy(id_offs[i].build_id, prev_build_id, - BUILD_ID_SIZE_MAX); + memcpy(id_offs[i].build_id, prev_build_id, BUILD_ID_SIZE_MAX); goto build_id_valid; } - vma = find_vma(current->mm, ips[i]); + vma = find_vma(current->mm, ip); if (!vma || build_id_parse_nofault(vma, id_offs[i].build_id, NULL)) { /* per entry fall back to ips */ id_offs[i].status = BPF_STACK_BUILD_ID_IP; - id_offs[i].ip = ips[i]; memset(id_offs[i].build_id, 0, BUILD_ID_SIZE_MAX); continue; } build_id_valid: - id_offs[i].offset = (vma->vm_pgoff << PAGE_SHIFT) + ips[i] - - vma->vm_start; + id_offs[i].offset = (vma->vm_pgoff << PAGE_SHIFT) + ip - vma->vm_start; id_offs[i].status = BPF_STACK_BUILD_ID_VALID; prev_vma = vma; prev_build_id = id_offs[i].build_id; @@ -216,7 +224,7 @@ static long __bpf_get_stackid(struct bpf_map *map, struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map); struct stack_map_bucket *bucket, *new_bucket, *old_bucket; u32 skip = flags & BPF_F_SKIP_FIELD_MASK; - u32 hash, id, trace_nr, trace_len; + u32 hash, id, trace_nr, trace_len, i; bool user = flags & BPF_F_USER_STACK; u64 *ips; bool hash_matches; @@ -238,15 +246,18 @@ static long __bpf_get_stackid(struct bpf_map *map, return id; if (stack_map_use_build_id(map)) { + struct bpf_stack_build_id *id_offs; + /* for build_id+offset, pop a bucket before slow cmp */ new_bucket = (struct stack_map_bucket *) pcpu_freelist_pop(&smap->freelist); if (unlikely(!new_bucket)) return -ENOMEM; new_bucket->nr = trace_nr; - stack_map_get_build_id_offset( - (struct bpf_stack_build_id *)new_bucket->data, - ips, trace_nr, user); + id_offs = (struct bpf_stack_build_id *)new_bucket->data; + for (i = 0; i < trace_nr; i++) + id_offs[i].ip = ips[i]; + stack_map_get_build_id_offset(id_offs, trace_nr, user); trace_len = trace_nr * sizeof(struct bpf_stack_build_id); if (hash_matches && bucket->nr == trace_nr && memcmp(bucket->data, new_bucket->data, trace_len) == 0) { @@ -445,10 +456,16 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task, copy_len = trace_nr * elem_size; ips = trace->ip + skip; - if (user && user_build_id) - stack_map_get_build_id_offset(buf, ips, trace_nr, user); - else + if (user && user_build_id) { + struct bpf_stack_build_id *id_offs = buf; + u32 i; + + for (i = 0; i < trace_nr; i++) + id_offs[i].ip = ips[i]; + stack_map_get_build_id_offset(buf, trace_nr, user); + } else { memcpy(buf, ips, copy_len); + } if (size > copy_len) memset(buf + copy_len, 0, size - copy_len); From patchwork Wed Jul 24 22:52:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741426 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D5AD34D8C6 for ; Wed, 24 Jul 2024 22:52:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861563; cv=none; b=Vc0m23VuPonWhE8EWMJ3EcW5og1LtcEBGr6iewG3d7yfFyzfclyckzjvMY2lkdm4EslipIjkhuwxQswGNknXLEHvpFEWYdvCOUsq6NmPnkLHqLfNGy89R0hdrv3ediesOYbVQJJd288Kf8IZNzI13wA87VOv4aUHESdmu02gV4o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861563; c=relaxed/simple; bh=B7j4vEXsH2ch5co68Pm2tINZDzZ6FR433unf55TyQcU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=DNpmbqOLtkssxhU7awqsrElVUhITAWQ30zXPDmaxWf943GD+GzalZUWv4r0EkQnzTOd5vDZf3/jAJn70QcEimH22ni+BcuM6uPQZMqbbVHMt4rR+oXgtd+/gvChkqf1keFwip3cusB+YQutWnE6fTxiAs7sBi1J7U5+jzpsVrVc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=XmkCnKFT; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="XmkCnKFT" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3F06CC32781; Wed, 24 Jul 2024 22:52:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861563; bh=B7j4vEXsH2ch5co68Pm2tINZDzZ6FR433unf55TyQcU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=XmkCnKFThfLhslgvNaBpkxdi+HcyqMmSdoJ7vOi99k2xlFFvwXTTxu2eMxOvbgcdE V8+YYulLSRsNlttrfLABWg8+Fcq1T3D1s0k5dSLdxIacwyUKaa5TRGnDuj8BK0N5Bx DqXjjeP2h34WWW1Q8WfvnoqKlaECJcxTCf5dWsCmSb2gkEwlT+6IydnOCmUz+XVYYL xrbO/OfraE9HZ6NRur1LZl5wkkMrSVeNMhWwSOlwcLeWFJTduL8MmBIrD2UDUo/ffp aiAoVZJlaS+fLgl7zdC+LLLC2u3RJz3iOs02jkgpQC46CJu/aWnvxnmMKlXM4jRLii JRBPTqUx6OVLA== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH v2 bpf-next 09/10] bpf: wire up sleepable bpf_get_stack() and bpf_get_task_stack() helpers Date: Wed, 24 Jul 2024 15:52:09 -0700 Message-ID: <20240724225210.545423-10-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Add sleepable implementations of the bpf_get_stack() and bpf_get_task_stack() helpers and allow them to be used from sleepable BPF programs (e.g., sleepable uprobes). Note, capturing the stack trace IPs itself is not sleepable (that would have to be a separate project); only build ID fetching is sleepable and thus more reliable, as it will wait for data to be paged in, if necessary. For that we make use of the sleepable build_id_parse() implementation. Now that the build ID related internals in kernel/bpf/stackmap.c can be used in both sleepable and non-sleepable contexts, we need to add additional rcu_read_lock()/rcu_read_unlock() protection around fetching perf_callchain_entry, but with the refactoring in the previous commit it's now pretty straightforward. We make sure to do rcu_read_unlock() (in sleepable mode only) right before the stack_map_get_build_id_offset() call, which can sleep.
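In condensed form, the resulting ordering inside __bpf_get_stack() looks roughly like the sketch below (this is not the literal diff that follows; the names are taken from it, and error handling and the callchain-selection branches are elided):

	if (may_fault)
		rcu_read_lock();	/* perf callchain data is RCU-protected */

	trace = get_perf_callchain(regs, 0, kernel, user, max_depth, crosstask, false);

	if (user_build_id) {
		/* copy IPs out of the RCU-protected callchain into the output buffer */
		for (i = 0; i < trace_nr; i++)
			id_offs[i].ip = ips[i];
	} else {
		memcpy(buf, ips, copy_len);
	}

	if (may_fault)
		rcu_read_unlock();	/* trace/ips must not be used past this point */

	/* only now do build ID resolution, which may sleep when may_fault is set */
	if (user_build_id)
		stack_map_get_build_id_offset(buf, trace_nr, user, may_fault);
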
By that time we don't have any more use for perf_callchain_entry. Note, bpf_get_task_stack() will fail for user mode if task != current. And for kernel mode, build IDs are irrelevant. So in that sense adding a sleepable bpf_get_task_stack() implementation is a no-op. It feels right to wire this up for symmetry and completeness, but I'm open to just dropping it until we support the `user && crosstask` condition. Signed-off-by: Andrii Nakryiko --- include/linux/bpf.h | 2 + kernel/bpf/stackmap.c | 90 ++++++++++++++++++++++++++++++++-------- kernel/trace/bpf_trace.c | 5 ++- 3 files changed, 77 insertions(+), 20 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 7ad37cbdc815..8e7a9f5ccecf 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -3194,7 +3194,9 @@ extern const struct bpf_func_proto bpf_get_current_uid_gid_proto; extern const struct bpf_func_proto bpf_get_current_comm_proto; extern const struct bpf_func_proto bpf_get_stackid_proto; extern const struct bpf_func_proto bpf_get_stack_proto; +extern const struct bpf_func_proto bpf_get_stack_sleepable_proto; extern const struct bpf_func_proto bpf_get_task_stack_proto; +extern const struct bpf_func_proto bpf_get_task_stack_sleepable_proto; extern const struct bpf_func_proto bpf_get_stackid_proto_pe; extern const struct bpf_func_proto bpf_get_stack_proto_pe; extern const struct bpf_func_proto bpf_sock_map_update_proto; diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index 6457222b0b46..3615c06b7dfa 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -124,6 +124,12 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr) return ERR_PTR(err); } +static int fetch_build_id(struct vm_area_struct *vma, unsigned char *build_id, bool may_fault) +{ + return may_fault ? build_id_parse(vma, build_id, NULL) + : build_id_parse_nofault(vma, build_id, NULL); +} + /* * Expects all id_offs[i].ip values to be set to correct initial IPs. * They will be subsequently: @@ -135,7 +141,7 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr) * BPF_STACK_BUILD_ID_IP.
*/ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs, - u32 trace_nr, bool user) + u32 trace_nr, bool user, bool may_fault) { int i; struct mmap_unlock_irq_work *work = NULL; @@ -166,7 +172,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs, goto build_id_valid; } vma = find_vma(current->mm, ip); - if (!vma || build_id_parse_nofault(vma, id_offs[i].build_id, NULL)) { + if (!vma || fetch_build_id(vma, id_offs[i].build_id, may_fault)) { /* per entry fall back to ips */ id_offs[i].status = BPF_STACK_BUILD_ID_IP; memset(id_offs[i].build_id, 0, BUILD_ID_SIZE_MAX); @@ -257,7 +263,7 @@ static long __bpf_get_stackid(struct bpf_map *map, id_offs = (struct bpf_stack_build_id *)new_bucket->data; for (i = 0; i < trace_nr; i++) id_offs[i].ip = ips[i]; - stack_map_get_build_id_offset(id_offs, trace_nr, user); + stack_map_get_build_id_offset(id_offs, trace_nr, user, false /* !may_fault */); trace_len = trace_nr * sizeof(struct bpf_stack_build_id); if (hash_matches && bucket->nr == trace_nr && memcmp(bucket->data, new_bucket->data, trace_len) == 0) { @@ -398,7 +404,7 @@ const struct bpf_func_proto bpf_get_stackid_proto_pe = { static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task, struct perf_callchain_entry *trace_in, - void *buf, u32 size, u64 flags) + void *buf, u32 size, u64 flags, bool may_fault) { u32 trace_nr, copy_len, elem_size, num_elem, max_depth; bool user_build_id = flags & BPF_F_USER_BUILD_ID; @@ -416,8 +422,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task, if (kernel && user_build_id) goto clear; - elem_size = (user && user_build_id) ? sizeof(struct bpf_stack_build_id) - : sizeof(u64); + elem_size = user_build_id ? sizeof(struct bpf_stack_build_id) : sizeof(u64); if (unlikely(size % elem_size)) goto clear; @@ -438,6 +443,9 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task, if (sysctl_perf_event_max_stack < max_depth) max_depth = sysctl_perf_event_max_stack; + if (may_fault) + rcu_read_lock(); /* need RCU for perf's callchain below */ + if (trace_in) trace = trace_in; else if (kernel && task) @@ -445,28 +453,35 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task, else trace = get_perf_callchain(regs, 0, kernel, user, max_depth, crosstask, false); - if (unlikely(!trace)) - goto err_fault; - if (trace->nr < skip) + if (unlikely(!trace) || trace->nr < skip) { + if (may_fault) + rcu_read_unlock(); goto err_fault; + } trace_nr = trace->nr - skip; trace_nr = (trace_nr <= num_elem) ? 
trace_nr : num_elem; copy_len = trace_nr * elem_size; ips = trace->ip + skip; - if (user && user_build_id) { + if (user_build_id) { struct bpf_stack_build_id *id_offs = buf; u32 i; for (i = 0; i < trace_nr; i++) id_offs[i].ip = ips[i]; - stack_map_get_build_id_offset(buf, trace_nr, user); } else { memcpy(buf, ips, copy_len); } + /* trace/ips should not be dereferenced after this point */ + if (may_fault) + rcu_read_unlock(); + + if (user_build_id) + stack_map_get_build_id_offset(buf, trace_nr, user, may_fault); + if (size > copy_len) memset(buf + copy_len, 0, size - copy_len); return copy_len; @@ -481,7 +496,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task, BPF_CALL_4(bpf_get_stack, struct pt_regs *, regs, void *, buf, u32, size, u64, flags) { - return __bpf_get_stack(regs, NULL, NULL, buf, size, flags); + return __bpf_get_stack(regs, NULL, NULL, buf, size, flags, false /* !may_fault */); } const struct bpf_func_proto bpf_get_stack_proto = { @@ -494,8 +509,24 @@ const struct bpf_func_proto bpf_get_stack_proto = { .arg4_type = ARG_ANYTHING, }; -BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf, - u32, size, u64, flags) +BPF_CALL_4(bpf_get_stack_sleepable, struct pt_regs *, regs, void *, buf, u32, size, + u64, flags) +{ + return __bpf_get_stack(regs, NULL, NULL, buf, size, flags, true /* may_fault */); +} + +const struct bpf_func_proto bpf_get_stack_sleepable_proto = { + .func = bpf_get_stack_sleepable, + .gpl_only = true, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_CTX, + .arg2_type = ARG_PTR_TO_UNINIT_MEM, + .arg3_type = ARG_CONST_SIZE_OR_ZERO, + .arg4_type = ARG_ANYTHING, +}; + +static long __bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, + u64 flags, bool may_fault) { struct pt_regs *regs; long res = -EINVAL; @@ -505,12 +536,18 @@ BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf, regs = task_pt_regs(task); if (regs) - res = __bpf_get_stack(regs, task, NULL, buf, size, flags); + res = __bpf_get_stack(regs, task, NULL, buf, size, flags, may_fault); put_task_stack(task); return res; } +BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf, + u32, size, u64, flags) +{ + return __bpf_get_task_stack(task, buf, size, flags, false /* !may_fault */); +} + const struct bpf_func_proto bpf_get_task_stack_proto = { .func = bpf_get_task_stack, .gpl_only = false, @@ -522,6 +559,23 @@ const struct bpf_func_proto bpf_get_task_stack_proto = { .arg4_type = ARG_ANYTHING, }; +BPF_CALL_4(bpf_get_task_stack_sleepable, struct task_struct *, task, void *, buf, + u32, size, u64, flags) +{ + return __bpf_get_task_stack(task, buf, size, flags, true /* may_fault */); +} + +const struct bpf_func_proto bpf_get_task_stack_sleepable_proto = { + .func = bpf_get_task_stack_sleepable, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_BTF_ID, + .arg1_btf_id = &btf_tracing_ids[BTF_TRACING_TYPE_TASK], + .arg2_type = ARG_PTR_TO_UNINIT_MEM, + .arg3_type = ARG_CONST_SIZE_OR_ZERO, + .arg4_type = ARG_ANYTHING, +}; + BPF_CALL_4(bpf_get_stack_pe, struct bpf_perf_event_data_kern *, ctx, void *, buf, u32, size, u64, flags) { @@ -533,7 +587,7 @@ BPF_CALL_4(bpf_get_stack_pe, struct bpf_perf_event_data_kern *, ctx, __u64 nr_kernel; if (!(event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)) - return __bpf_get_stack(regs, NULL, NULL, buf, size, flags); + return __bpf_get_stack(regs, NULL, NULL, buf, size, flags, false /* !may_fault */); if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
BPF_F_USER_BUILD_ID))) @@ -553,7 +607,7 @@ BPF_CALL_4(bpf_get_stack_pe, struct bpf_perf_event_data_kern *, ctx, __u64 nr = trace->nr; trace->nr = nr_kernel; - err = __bpf_get_stack(regs, NULL, trace, buf, size, flags); + err = __bpf_get_stack(regs, NULL, trace, buf, size, flags, false /* !may_fault */); /* restore nr */ trace->nr = nr; @@ -565,7 +619,7 @@ BPF_CALL_4(bpf_get_stack_pe, struct bpf_perf_event_data_kern *, ctx, goto clear; flags = (flags & ~BPF_F_SKIP_FIELD_MASK) | skip; - err = __bpf_get_stack(regs, NULL, trace, buf, size, flags); + err = __bpf_get_stack(regs, NULL, trace, buf, size, flags, false /* !may_fault */); } return err; diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index cd098846e251..c3845470f56d 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1598,7 +1598,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_jiffies64: return &bpf_jiffies64_proto; case BPF_FUNC_get_task_stack: - return &bpf_get_task_stack_proto; + return prog->sleepable ? &bpf_get_task_stack_sleepable_proto + : &bpf_get_task_stack_proto; case BPF_FUNC_copy_from_user: return &bpf_copy_from_user_proto; case BPF_FUNC_copy_from_user_task: @@ -1654,7 +1655,7 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_get_stackid: return &bpf_get_stackid_proto; case BPF_FUNC_get_stack: - return &bpf_get_stack_proto; + return prog->sleepable ? &bpf_get_stack_sleepable_proto : &bpf_get_stack_proto; #ifdef CONFIG_BPF_KPROBE_OVERRIDE case BPF_FUNC_override_return: return &bpf_override_return_proto; From patchwork Wed Jul 24 22:52:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13741427 X-Patchwork-Delegate: bpf@iogearbox.net Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 337BF13D610 for ; Wed, 24 Jul 2024 22:52:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861567; cv=none; b=cW9U5ryXjDM4ae+tWARrK+bL95ob8QeXqUFyblmrWopkB9CxNIAvfW2L9l2ZQvHOG+MUP8QfNlvjH58hq1Fu4ok+cprLDL8p0c6NAfigShDzQ9MV3nw88bfH0vB0G0Cg7yJjdcmKzC3eJRYwCv5PsT+NrqWQtEWB4GGxLlBq8x0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721861567; c=relaxed/simple; bh=xrKF6BEn7xM/lFZAUWkAXCxHoR7Z8xeRExFe75nCrl8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=o5dKf5exLsOU3IoNx6MBHzqxNB15Pn6Wja8CHmP4cWKv/YG0OAgX/oV2QbpsUSDlSvC868c2FuklHvGTVYo6jFPiAnjLC8wyHv55a/q8A181hhsY24btFcozqVOJq9zJBJCAMx0L08uP2wRUIrTMNJ/7vdm2ixj+oKdt0Z5jv7w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=uV+A/Re2; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="uV+A/Re2" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 83978C32781; Wed, 24 Jul 2024 22:52:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1721861566; bh=xrKF6BEn7xM/lFZAUWkAXCxHoR7Z8xeRExFe75nCrl8=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uV+A/Re2GIQb4j2gTs/TZE5wf0bQp8rT3fcqecC970ONYKPLuTF/ACsI57L9L3MHw /RXK5vt6P7p4GHM5UkhzErVvC39w+c/jkmckOk2STb1pfO2QNNterkYoW9N0WmdxPz sFpFV0JLyzPBgsKx52RUpepA98Gz/aJMZEG26WFgIILR1TmaTgVgF3qBoRdSHNEsYb 7cBXKvBkl2Pr0i5kOStJjMCj2Z16+IzcqGEbxcWnMCie1WABVAcSLmN+F42RUGgdkF 4bLEWgkNSIFhqYBsp+HL0sSNb1GrNjzXnD6yHbxrfbgog8R43tJHtBKBzHnvxRSW6b jgAD1R2lbNPEw== From: Andrii Nakryiko To: bpf@vger.kernel.org Cc: linux-mm@kvack.org, akpm@linux-foundation.org, adobriyan@gmail.com, shakeel.butt@linux.dev, hannes@cmpxchg.org, ak@linux.intel.com, osandov@osandov.com, song@kernel.org, Andrii Nakryiko Subject: [PATCH v2 bpf-next 10/10] selftests/bpf: add build ID tests Date: Wed, 24 Jul 2024 15:52:10 -0700 Message-ID: <20240724225210.545423-11-andrii@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240724225210.545423-1-andrii@kernel.org> References: <20240724225210.545423-1-andrii@kernel.org> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Add a new set of tests validating the behavior of capturing stack traces with build IDs. We extend the uprobe_multi target binary with the ability to trigger a uprobe (so that we can capture stack traces from it), and we also allow forcing the build ID data to be either resident or non-resident in memory (see also the comment about the quirks of MADV_PAGEOUT). That way we can validate that in a non-sleepable context we won't get the build ID (as expected), but with sleepable uprobes we will get that build ID regardless of whether it is physically present in memory. Also, we add a small add-on linker script which reorders the .note.gnu.build-id section and puts it after the (big) .text section, placing the build ID data outside of the very first page of the ELF file. This tests all the relaxations we did to the build ID parsing logic in the kernel thanks to the freader abstraction. Signed-off-by: Andrii Nakryiko --- tools/testing/selftests/bpf/Makefile | 5 +- .../selftests/bpf/prog_tests/build_id.c | 118 ++++++++++++++++++ .../selftests/bpf/progs/test_build_id.c | 31 +++++ tools/testing/selftests/bpf/uprobe_multi.c | 41 ++++++ tools/testing/selftests/bpf/uprobe_multi.ld | 11 ++ 5 files changed, 204 insertions(+), 2 deletions(-) create mode 100644 tools/testing/selftests/bpf/prog_tests/build_id.c create mode 100644 tools/testing/selftests/bpf/progs/test_build_id.c create mode 100644 tools/testing/selftests/bpf/uprobe_multi.ld diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 888ba68e6592..fe4bca113c78 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -790,9 +790,10 @@ $(OUTPUT)/veristat: $(OUTPUT)/veristat.o # Linking uprobe_multi can fail due to relocation overflows on mips.
$(OUTPUT)/uprobe_multi: CFLAGS += $(if $(filter mips, $(ARCH)),-mxgot) -$(OUTPUT)/uprobe_multi: uprobe_multi.c +$(OUTPUT)/uprobe_multi: uprobe_multi.c uprobe_multi.ld $(call msg,BINARY,,$@) - $(Q)$(CC) $(CFLAGS) -O0 $(LDFLAGS) $^ $(LDLIBS) -o $@ + $(Q)$(CC) $(CFLAGS) -Wl,-T,uprobe_multi.ld -O0 $(LDFLAGS) \ + $(filter-out %.ld,$^) $(LDLIBS) -o $@ EXTRA_CLEAN := $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \ prog_tests/tests.h map_tests/tests.h verifier/tests.h \ diff --git a/tools/testing/selftests/bpf/prog_tests/build_id.c b/tools/testing/selftests/bpf/prog_tests/build_id.c new file mode 100644 index 000000000000..8e6d3603be61 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/build_id.c @@ -0,0 +1,118 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */ +#include + +#include "test_build_id.skel.h" + +static char build_id[BPF_BUILD_ID_SIZE]; +static int build_id_sz; + +static void print_stack(struct bpf_stack_build_id *stack, int frame_cnt) +{ + int i, j; + + for (i = 0; i < frame_cnt; i++) { + printf("FRAME #%02d: ", i); + switch (stack[i].status) { + case BPF_STACK_BUILD_ID_EMPTY: + printf("\n"); + break; + case BPF_STACK_BUILD_ID_VALID: + printf("BUILD ID = "); + for (j = 0; j < BPF_BUILD_ID_SIZE; j++) + printf("%02hhx", (unsigned)stack[i].build_id[j]); + printf(" OFFSET = %llx", (unsigned long long)stack[i].offset); + break; + case BPF_STACK_BUILD_ID_IP: + printf("IP = %llx", (unsigned long long)stack[i].ip); + break; + default: + printf("UNEXPECTED STATUS %d ", stack[i].status); + break; + } + printf("\n"); + } +} + +static void subtest_nofault(bool build_id_resident) +{ + struct test_build_id *skel; + struct bpf_stack_build_id *stack; + int frame_cnt; + + skel = test_build_id__open_and_load(); + if (!ASSERT_OK_PTR(skel, "skel_open")) + return; + + skel->links.uprobe_nofault = bpf_program__attach(skel->progs.uprobe_nofault); + if (!ASSERT_OK_PTR(skel->links.uprobe_nofault, "link")) + goto cleanup; + + if (build_id_resident) + ASSERT_OK(system("./uprobe_multi uprobe-paged-in"), "trigger_uprobe"); + else + ASSERT_OK(system("./uprobe_multi uprobe-paged-out"), "trigger_uprobe"); + + if (!ASSERT_GT(skel->bss->res_nofault, 0, "res")) + goto cleanup; + + stack = skel->bss->stack_nofault; + frame_cnt = skel->bss->res_nofault / sizeof(struct bpf_stack_build_id); + if (env.verbosity >= VERBOSE_NORMAL) + print_stack(stack, frame_cnt); + + if (build_id_resident) { + ASSERT_EQ(stack[0].status, BPF_STACK_BUILD_ID_VALID, "build_id_status"); + ASSERT_EQ(memcmp(stack[0].build_id, build_id, build_id_sz), 0, "build_id_match"); + } else { + ASSERT_EQ(stack[0].status, BPF_STACK_BUILD_ID_IP, "build_id_status"); + } + +cleanup: + test_build_id__destroy(skel); +} + +static void subtest_sleepable(void) +{ + struct test_build_id *skel; + struct bpf_stack_build_id *stack; + int frame_cnt; + + skel = test_build_id__open_and_load(); + if (!ASSERT_OK_PTR(skel, "skel_open")) + return; + + skel->links.uprobe_sleepable = bpf_program__attach(skel->progs.uprobe_sleepable); + if (!ASSERT_OK_PTR(skel->links.uprobe_sleepable, "link")) + goto cleanup; + + /* force build ID to not be paged in */ + ASSERT_OK(system("./uprobe_multi uprobe-paged-out"), "trigger_uprobe"); + + if (!ASSERT_GT(skel->bss->res_sleepable, 0, "res")) + goto cleanup; + + stack = skel->bss->stack_sleepable; + frame_cnt = skel->bss->res_sleepable / sizeof(struct bpf_stack_build_id); + if (env.verbosity >= VERBOSE_NORMAL) + print_stack(stack, frame_cnt); + + ASSERT_EQ(stack[0].status, 
BPF_STACK_BUILD_ID_VALID, "build_id_status"); + ASSERT_EQ(memcmp(stack[0].build_id, build_id, build_id_sz), 0, "build_id_match"); + +cleanup: + test_build_id__destroy(skel); +} + +void test_build_id(void) +{ + build_id_sz = read_build_id("uprobe_multi", build_id, sizeof(build_id)); + ASSERT_EQ(build_id_sz, BPF_BUILD_ID_SIZE, "parse_build_id"); + + if (test__start_subtest("nofault-paged-out")) + subtest_nofault(false /* not resident */); + if (test__start_subtest("nofault-paged-in")) + subtest_nofault(true /* resident */); + if (test__start_subtest("sleepable")) + subtest_sleepable(); +} diff --git a/tools/testing/selftests/bpf/progs/test_build_id.c b/tools/testing/selftests/bpf/progs/test_build_id.c new file mode 100644 index 000000000000..32ce59f9aa27 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_build_id.c @@ -0,0 +1,31 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */ + +#include "vmlinux.h" +#include + +struct bpf_stack_build_id stack_sleepable[128]; +int res_sleepable; + +struct bpf_stack_build_id stack_nofault[128]; +int res_nofault; + +SEC("uprobe.multi/./uprobe_multi:uprobe") +int uprobe_nofault(struct pt_regs *ctx) +{ + res_nofault = bpf_get_stack(ctx, stack_nofault, sizeof(stack_nofault), + BPF_F_USER_STACK | BPF_F_USER_BUILD_ID); + + return 0; +} + +SEC("uprobe.multi.s/./uprobe_multi:uprobe") +int uprobe_sleepable(struct pt_regs *ctx) +{ + res_sleepable = bpf_get_stack(ctx, stack_sleepable, sizeof(stack_sleepable), + BPF_F_USER_STACK | BPF_F_USER_BUILD_ID); + + return 0; +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/uprobe_multi.c b/tools/testing/selftests/bpf/uprobe_multi.c index 7ffa563ffeba..c7828b13e5ff 100644 --- a/tools/testing/selftests/bpf/uprobe_multi.c +++ b/tools/testing/selftests/bpf/uprobe_multi.c @@ -2,8 +2,21 @@ #include #include +#include +#include +#include +#include #include +#ifndef MADV_POPULATE_READ +#define MADV_POPULATE_READ 22 +#endif + +int __attribute__((weak)) uprobe(void) +{ + return 0; +} + #define __PASTE(a, b) a##b #define PASTE(a, b) __PASTE(a, b) @@ -75,6 +88,30 @@ static int usdt(void) return 0; } +extern char build_id_start[]; +extern char build_id_end[]; + +int __attribute__((weak)) trigger_uprobe(bool build_id_resident) +{ + int page_sz = sysconf(_SC_PAGESIZE); + void *addr; + + /* page-align build ID start */ + addr = (void *)((uintptr_t)&build_id_start & ~(page_sz - 1)); + + /* to guarantee MADV_PAGEOUT work reliably, we need to ensure that + * memory range is mapped into current process, so we unconditionally + * do MADV_POPULATE_READ, and then MADV_PAGEOUT, if necessary + */ + madvise(addr, page_sz, MADV_POPULATE_READ); + if (!build_id_resident) + madvise(addr, page_sz, MADV_PAGEOUT); + + (void)uprobe(); + + return 0; +} + int main(int argc, char **argv) { if (argc != 2) @@ -84,6 +121,10 @@ int main(int argc, char **argv) return bench(); if (!strcmp("usdt", argv[1])) return usdt(); + if (!strcmp("uprobe-paged-out", argv[1])) + return trigger_uprobe(false /* page-out build ID */); + if (!strcmp("uprobe-paged-in", argv[1])) + return trigger_uprobe(true /* page-in build ID */); error: fprintf(stderr, "usage: %s \n", argv[0]); diff --git a/tools/testing/selftests/bpf/uprobe_multi.ld b/tools/testing/selftests/bpf/uprobe_multi.ld new file mode 100644 index 000000000000..a2e94828bc8c --- /dev/null +++ b/tools/testing/selftests/bpf/uprobe_multi.ld @@ -0,0 +1,11 @@ +SECTIONS +{ + . 
= ALIGN(4096); + .note.gnu.build-id : { *(.note.gnu.build-id) } + . = ALIGN(4096); +} +INSERT AFTER .text; + +build_id_start = ADDR(.note.gnu.build-id); +build_id_end = ADDR(.note.gnu.build-id) + SIZEOF(.note.gnu.build-id); +
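
As a quick sanity check when building the selftests locally (not something the patch itself adds), the effect of this linker script can be confirmed with readelf: .note.gnu.build-id should end up at a file offset of at least 0x1000, i.e. outside of the ELF file's first page, while the note itself must still parse correctly:

	$ readelf -SW uprobe_multi | grep note.gnu.build-id
	$ readelf -n uprobe_multi

Exact section index and offset will vary by toolchain; what matters is that the section's file offset reported by the first command lands past the first page, which is precisely the case the relaxed kernel-side build ID parsing is meant to handle.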