| Message ID | 20201207210649.19194-2-borisp@mellanox.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to: | Netdev Maintainers |
| Series | nvme-tcp receive offloads |
| Context | Check | Description |
|---|---|---|
| netdev/apply | fail | Patch does not apply to net-next |
| netdev/tree_selection | success | Clearly marked for net-next |
On 12/7/20 2:06 PM, Boris Pismenny wrote:

> When using direct data placement the NIC writes some of the payload
> directly to the destination buffer, and constructs the SKB such that it
> points to this data. As a result, the skb_copy_datagram_iter call will
> attempt to copy data when it is not necessary.
>
> This patch adds a check to avoid this copy, and a static_key to enable
> it when TCP direct data placement is possible.

Why not mark the skb as ZEROCOPY -- an Rx version of the existing
SKBTX_DEV_ZEROCOPY and skb_shared_info->tx_flags? Use that as a generic
way of indicating to the stack what is happening.
On 08/12/2020 2:39, David Ahern wrote:

> On 12/7/20 2:06 PM, Boris Pismenny wrote:
>> When using direct data placement the NIC writes some of the payload
>> directly to the destination buffer, and constructs the SKB such that it
>> points to this data. As a result, the skb_copy_datagram_iter call will
>> attempt to copy data when it is not necessary.
>>
>> This patch adds a check to avoid this copy, and a static_key to enable
>> it when TCP direct data placement is possible.
>>
> Why not mark the skb as ZEROCOPY -- an Rx version of the existing
> SKBTX_DEV_ZEROCOPY and skb_shared_info->tx_flags? Use that as a generic
> way of indicating to the stack what is happening.
>
> [Re-sending as the previous one didn't hit the mailing list]

Interesting idea. But, unlike SKBTX_DEV_ZEROCOPY, this SKB can be
inspected/modified by the stack without the need to copy things out.
Additionally, the SKB may contain both data that is already placed in
its final destination buffer (PDU data) and data that isn't (PDU
header); it doesn't matter. Therefore, labeling the entire SKB as
zerocopy doesn't convey the desired information. Moreover, skipping
copies in the stack to receive zerocopy SKBs would require more invasive
changes. Our goal in this approach was to provide the smallest change
that enables the desired functionality while preserving the performance
of existing flows that do not care for it.

An alternative approach, which doesn't affect existing flows at all and
which we considered, was to make a special version of memcpy_to_page to
be used by DDP providers (nvme-tcp). That alternative would require
creating corresponding special versions for users of this function,
such as skb_copy_datagram_iter. This is more invasive, so in this
patchset we decided to avoid it.
```diff
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 72d88566694e..05573d848ff5 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -282,4 +282,6 @@ int iov_iter_for_each_range(struct iov_iter *i, size_t bytes,
 			    int (*f)(struct kvec *vec, void *context),
 			    void *context);
 
+extern struct static_key_false skip_copy_enabled;
+
 #endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 1635111c5bd2..206edb051135 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -15,6 +15,9 @@
 
 #define PIPE_PARANOIA /* for now */
 
+DEFINE_STATIC_KEY_FALSE(skip_copy_enabled);
+EXPORT_SYMBOL_GPL(skip_copy_enabled);
+
 #define iterate_iovec(i, n, __v, __p, skip, STEP) {	\
 	size_t left;					\
 	size_t wanted = n;				\
@@ -476,7 +479,13 @@ static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t
 
 static void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
 {
 	char *to = kmap_atomic(page);
-	memcpy(to + offset, from, len);
+
+	if (static_branch_unlikely(&skip_copy_enabled)) {
+		if (to + offset != from)
+			memcpy(to + offset, from, len);
+	} else {
+		memcpy(to + offset, from, len);
+	}
 	kunmap_atomic(to);
 }
```
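To illustrate the semantics of the change outside the kernel, here is a minimal userspace model of the patched memcpy_to_page(). It is a sketch, not kernel code: a plain bool stands in for the static key, a byte buffer stands in for the page, and kmap_atomic()/kunmap_atomic() are elided. The function and variable names are illustrative only.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for DEFINE_STATIC_KEY_FALSE(skip_copy_enabled). */
static bool skip_copy_enabled = true;

/*
 * Model of the patched memcpy_to_page(): when the skip-copy key is on
 * and the source already aliases the destination (the NIC placed the
 * payload there via direct data placement), the copy is skipped.
 * Otherwise the behavior is an ordinary memcpy into the "page".
 */
static void memcpy_to_page_model(char *page, size_t offset,
                                 const char *from, size_t len)
{
    char *to = page; /* kmap_atomic(page) elided in this model */

    if (skip_copy_enabled) {
        if (to + offset != from)
            memcpy(to + offset, from, len);
        /* else: data is already in its final buffer, nothing to do */
    } else {
        memcpy(to + offset, from, len);
    }
    /* kunmap_atomic(to) elided */
}
```

The point of the pointer-equality check is that DDP-placed PDU data hits the skip branch, while data the NIC did not place (e.g. a PDU header arriving from a separate source buffer) is still copied normally.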