Message ID: 20231009142005.21338-2-quic_kriskura@quicinc.com (mailing list archive)
State: New, archived
Series: [1/2] Documentation: usb: Update NCM configfs parameters
On Mon, Oct 09, 2023 at 07:50:05PM +0530, Krishna Kurapati wrote:
> Currently the NCM driver restricts wMaxSegmentSize that indicates
> the datagram size coming from network layer to 1514.

I don't see that restriction in the existing driver, where does that
happen?

> However the spec doesn't have any limitation.

What spec?

> For P2P connections over NCM, increasing MTU helps increasing
> throughput.

While increasing latency, right?

> Add support to configure this value before configfs symlink is
> created. Also since the NTB Out/In buffer sizes are fixed at 16384
> bytes, limit the segment size to an upper cap of 15014. Set the
> default MTU size for the ncm interface during function bind before
> network interface is registered allowing MTU to be set in parity
> with wMaxSegmentSize.

Where does 15014 come from?

>
> Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com>
> ---
>  drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++
>  drivers/usb/gadget/function/u_ncm.h |  2 ++
>  2 files changed, 53 insertions(+)
>
> diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
> index feccf4c8cc4f..eab297b22200 100644
> --- a/drivers/usb/gadget/function/f_ncm.c
> +++ b/drivers/usb/gadget/function/f_ncm.c
> @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f)
>  /* Delay for the transmit to wait before sending an unfilled NTB frame. */
>  #define TX_TIMEOUT_NSECS        300000
>
> +#define MAX_DATAGRAM_SIZE       15014

Where does this magic value come from?  Please document it really really
well.

> +
>  #define FORMATS_SUPPORTED       (USB_CDC_NCM_NTB16_SUPPORTED |  \
>                                   USB_CDC_NCM_NTB32_SUPPORTED)
>
> @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
>         ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst);
>
>         if (cdev->use_os_string) {
> +               ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN);
>                 f->os_desc_table = kzalloc(sizeof(*f->os_desc_table),
>                                            GFP_KERNEL);
>                 if (!f->os_desc_table)
> @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
>
>         status = -ENODEV;
>
> +       ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size;
> +
>         /* allocate instance-specific endpoints */
>         ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc);
>         if (!ep)
> @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm);
>  /* f_ncm_opts_ifname */
>  USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm);
>
> +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item,
> +                                             char *page)
> +{
> +       struct f_ncm_opts *opts = to_f_ncm_opts(item);
> +       u32 segment_size;
> +
> +       mutex_lock(&opts->lock);
> +       segment_size = opts->max_segment_size;
> +       mutex_unlock(&opts->lock);
> +
> +       return sprintf(page, "%u\n", segment_size);

sysfs_emit()?

thanks,

greg k-h
On 10/9/2023 8:38 PM, Greg Kroah-Hartman wrote:
> On Mon, Oct 09, 2023 at 07:50:05PM +0530, Krishna Kurapati wrote:
>> Currently the NCM driver restricts wMaxSegmentSize that indicates
>> the datagram size coming from network layer to 1514.
>
> I don't see that restriction in the existing driver, where does that
> happen?

Hi Greg,

In the ecm_desc, the following line restricts the value:

        .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN),

>
>> However the spec doesn't have any limitation.
>
> What spec?

The NCM specification. I didn't mention "NCM specification" as I thought
the patch header would imply it is the NCM spec. Will update the wording
here.

>
>> For P2P connections over NCM, increasing MTU helps increasing
>> throughput.
>
> While increasing latency, right?

I used iperf for capturing the data. I was focusing more on throughput.
Here are some results I found. For throughput, the increase is as follows
(HS link with an old iperf):

        MTU Size        Throughput
        1500            145
        2500            204
        3500            208
        4500            223
        5500            227
        6500            299
        7500            324
        8050            335

As for latency, an internal test application I was using didn't show much
difference in latency as the MTU was increased. Then again, it could be
application specific.

>
>> Add support to configure this value before configfs symlink is
>> created. Also since the NTB Out/In buffer sizes are fixed at 16384
>> bytes, limit the segment size to an upper cap of 15014. Set the
>> default MTU size for the ncm interface during function bind before
>> network interface is registered allowing MTU to be set in parity
>> with wMaxSegmentSize.
>
> Where does 15014 come from?
>
>>
>> Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com>
>> ---
>>  drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++
>>  drivers/usb/gadget/function/u_ncm.h |  2 ++
>>  2 files changed, 53 insertions(+)
>>
>> diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
>> index feccf4c8cc4f..eab297b22200 100644
>> --- a/drivers/usb/gadget/function/f_ncm.c
>> +++ b/drivers/usb/gadget/function/f_ncm.c
>> @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f)
>>  /* Delay for the transmit to wait before sending an unfilled NTB frame. */
>>  #define TX_TIMEOUT_NSECS        300000
>>
>> +#define MAX_DATAGRAM_SIZE       15014
>
> Where does this magic value come from?  Please document it really really
> well.
>

Sorry for not being clear. The 14 here meant ETH_HLEN, which gets appended
to the MTU usually. I wanted to limit the MTU to 15000 max (via
wMaxSegmentSize) and mentioned it here as 15014. Will update the comments
here in v2.
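For reference, a hedged sketch of what a better-documented define could
look like in v2 (the comment wording is an assumption based on Krishna's
explanation above, not the actual v2 patch):

        /*
         * Upper limit for the datagram size: a 15000-byte MTU plus ETH_HLEN
         * (14) bytes of Ethernet header, chosen so that a datagram still
         * fits comfortably within the fixed 16384-byte NTB In/Out buffers.
         */
        #define MAX_DATAGRAM_SIZE       15014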
>> +
>>  #define FORMATS_SUPPORTED       (USB_CDC_NCM_NTB16_SUPPORTED |  \
>>                                   USB_CDC_NCM_NTB32_SUPPORTED)
>>
>> @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
>>         ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst);
>>
>>         if (cdev->use_os_string) {
>> +               ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN);
>>                 f->os_desc_table = kzalloc(sizeof(*f->os_desc_table),
>>                                            GFP_KERNEL);
>>                 if (!f->os_desc_table)
>> @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
>>
>>         status = -ENODEV;
>>
>> +       ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size;
>> +
>>         /* allocate instance-specific endpoints */
>>         ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc);
>>         if (!ep)
>> @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm);
>>  /* f_ncm_opts_ifname */
>>  USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm);
>>
>> +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item,
>> +                                             char *page)
>> +{
>> +       struct f_ncm_opts *opts = to_f_ncm_opts(item);
>> +       u32 segment_size;
>> +
>> +       mutex_lock(&opts->lock);
>> +       segment_size = opts->max_segment_size;
>> +       mutex_unlock(&opts->lock);
>> +
>> +       return sprintf(page, "%u\n", segment_size);
>
> sysfs_emit()?
>

Apologies. I followed u_ether_configfs.h, which used sprintf. Will replace
it with sysfs_emit in v2.

Thanks for the review.

Regards,
Krishna,
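For reference, Greg's suggestion only changes the last line of the show
handler; sysfs_emit() takes the same format arguments as sprintf() but
bounds the output to PAGE_SIZE. A minimal sketch of what the handler might
look like after the change (an assumption, not the posted v2):

        static ssize_t ncm_opts_max_segment_size_show(struct config_item *item,
                                                      char *page)
        {
                struct f_ncm_opts *opts = to_f_ncm_opts(item);
                u32 segment_size;

                mutex_lock(&opts->lock);
                segment_size = opts->max_segment_size;
                mutex_unlock(&opts->lock);

                /* sysfs_emit() checks that the output stays within the page */
                return sysfs_emit(page, "%u\n", segment_size);
        }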
On Mon, Oct 09, 2023 at 09:02:32PM +0530, Krishna Kurapati PSSNV wrote:
>
>
> On 10/9/2023 8:38 PM, Greg Kroah-Hartman wrote:
> > On Mon, Oct 09, 2023 at 07:50:05PM +0530, Krishna Kurapati wrote:
> > > Currently the NCM driver restricts wMaxSegmentSize that indicates
> > > the datagram size coming from network layer to 1514.
> >
> > I don't see that restriction in the existing driver, where does that
> > happen?
>
> Hi Greg,
>
> In the ecm_desc, the following line restricts the value:
>
>         .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN),

Ok, so is that 1514? I don't know, as I don't know what ETH_FRAME_LEN is.

So how about saying something to the effect of "the max segment size is
currently limited to the ethernet frame length of the kernel, which happens
to be 1514 at this point in time."

thanks,

greg k-h
On 10/9/2023 11:24 PM, Greg Kroah-Hartman wrote:
> On Mon, Oct 09, 2023 at 09:02:32PM +0530, Krishna Kurapati PSSNV wrote:
>>
>>
>> On 10/9/2023 8:38 PM, Greg Kroah-Hartman wrote:
>>> On Mon, Oct 09, 2023 at 07:50:05PM +0530, Krishna Kurapati wrote:
>>>> Currently the NCM driver restricts wMaxSegmentSize that indicates
>>>> the datagram size coming from network layer to 1514.
>>>
>>> I don't see that restriction in the existing driver, where does that
>>> happen?
>>
>> Hi Greg,
>>
>> In the ecm_desc, the following line restricts the value:
>>
>>         .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN),
>
> Ok, so is that 1514? I don't know, as I don't know what ETH_FRAME_LEN is.
>

Hi Greg,

Yes, that is 1514.

> So how about saying something to the effect of "the max segment size is
> currently limited to the ethernet frame length of the kernel, which happens
> to be 1514 at this point in time."
>

Sure. I will rephrase this in v2 with the suggestion provided.

Regards,
Krishna,
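For readers wondering where 1514 comes from: these are the stock Ethernet
constants from include/uapi/linux/if_ether.h (shown here for reference with
paraphrased comments; they are not part of the patch):

        #define ETH_HLEN        14      /* bytes in the Ethernet header */
        #define ETH_DATA_LEN    1500    /* max payload, i.e. the default MTU */
        #define ETH_FRAME_LEN   1514    /* ETH_DATA_LEN + ETH_HLEN */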
On Mon, Oct 9, 2023 at 7:20 AM Krishna Kurapati <quic_kriskura@quicinc.com> wrote: > > Currently the NCM driver restricts wMaxSegmentSize that indicates > the datagram size coming from network layer to 1514. However the > spec doesn't have any limitation. For P2P connections over NCM, > increasing MTU helps increasing throughput. > > Add support to configure this value before configfs symlink is > created. Also since the NTB Out/In buffer sizes are fixed at 16384 > bytes, limit the segment size to an upper cap of 15014. Set the > default MTU size for the ncm interface during function bind before > network interface is registered allowing MTU to be set in parity > with wMaxSegmentSize. > > Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com> > --- > drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++ > drivers/usb/gadget/function/u_ncm.h | 2 ++ > 2 files changed, 53 insertions(+) > > diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c > index feccf4c8cc4f..eab297b22200 100644 > --- a/drivers/usb/gadget/function/f_ncm.c > +++ b/drivers/usb/gadget/function/f_ncm.c > @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f) > /* Delay for the transmit to wait before sending an unfilled NTB frame. */ > #define TX_TIMEOUT_NSECS 300000 > > +#define MAX_DATAGRAM_SIZE 15014 > + > #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ > USB_CDC_NCM_NTB32_SUPPORTED) > > @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) > ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); > > if (cdev->use_os_string) { > + ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); > f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), > GFP_KERNEL); > if (!f->os_desc_table) > @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) > > status = -ENODEV; > > + ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; I think you need byte swap here. 
> + > /* allocate instance-specific endpoints */ > ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); > if (!ep) > @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); > /* f_ncm_opts_ifname */ > USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); > > +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, > + char *page) > +{ > + struct f_ncm_opts *opts = to_f_ncm_opts(item); > + u32 segment_size; > + > + mutex_lock(&opts->lock); > + segment_size = opts->max_segment_size; > + mutex_unlock(&opts->lock); > + > + return sprintf(page, "%u\n", segment_size); > +} > + > +static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, > + const char *page, size_t len) > +{ > + struct f_ncm_opts *opts = to_f_ncm_opts(item); > + int ret; > + u32 segment_size; > + > + mutex_lock(&opts->lock); > + if (opts->refcnt) { > + ret = -EBUSY; > + goto out; > + } > + > + ret = kstrtou32(page, 0, &segment_size); > + if (ret) > + goto out; > + > + if (segment_size > MAX_DATAGRAM_SIZE) { > + ret = -EINVAL; > + goto out; > + } > + > + opts->max_segment_size = segment_size; > + ret = len; > +out: > + mutex_unlock(&opts->lock); > + return ret; > +} > + > +CONFIGFS_ATTR(ncm_opts_, max_segment_size); > + > static struct configfs_attribute *ncm_attrs[] = { > &ncm_opts_attr_dev_addr, > &ncm_opts_attr_host_addr, > &ncm_opts_attr_qmult, > &ncm_opts_attr_ifname, > + &ncm_opts_attr_max_segment_size, > NULL, > }; > > @@ -1616,6 +1666,7 @@ static struct usb_function_instance *ncm_alloc_inst(void) > kfree(opts); > return ERR_CAST(net); > } > + opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); and not here. ie. max_segment_size should be native endian > INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); > > descs[0] = &opts->ncm_os_desc; > diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h > index 5408854d8407..d3403cf13f17 100644 > --- a/drivers/usb/gadget/function/u_ncm.h > +++ b/drivers/usb/gadget/function/u_ncm.h > @@ -31,6 +31,8 @@ struct f_ncm_opts { > */ > struct mutex lock; > int refcnt; > + > + u32 max_segment_size; > }; > > #endif /* U_NCM_H */ > -- > 2.42.0 > That said, I don't really follow what this is doing... Maciej Żenczykowski, Kernel Networking Developer @ Google
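A minimal sketch of the byte-order handling Maciej is asking for, assuming
(as he suggests) that max_segment_size stays native-endian inside
f_ncm_opts and is converted only when it is written into the USB
descriptor:

        /* ncm_alloc_inst(): keep the configfs value in CPU byte order */
        opts->max_segment_size = ETH_FRAME_LEN;

        /* ncm_bind(): wMaxSegmentSize is little-endian on the wire */
        ecm_desc.wMaxSegmentSize = cpu_to_le16(ncm_opts->max_segment_size);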
On Mon, Oct 9, 2023 at 5:17 PM Maciej Żenczykowski <maze@google.com> wrote: > > On Mon, Oct 9, 2023 at 7:20 AM Krishna Kurapati > <quic_kriskura@quicinc.com> wrote: > > > > Currently the NCM driver restricts wMaxSegmentSize that indicates > > the datagram size coming from network layer to 1514. However the > > spec doesn't have any limitation. For P2P connections over NCM, > > increasing MTU helps increasing throughput. > > > > Add support to configure this value before configfs symlink is > > created. Also since the NTB Out/In buffer sizes are fixed at 16384 > > bytes, limit the segment size to an upper cap of 15014. Set the > > default MTU size for the ncm interface during function bind before > > network interface is registered allowing MTU to be set in parity > > with wMaxSegmentSize. > > > > Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com> > > --- > > drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++ > > drivers/usb/gadget/function/u_ncm.h | 2 ++ > > 2 files changed, 53 insertions(+) > > > > diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c > > index feccf4c8cc4f..eab297b22200 100644 > > --- a/drivers/usb/gadget/function/f_ncm.c > > +++ b/drivers/usb/gadget/function/f_ncm.c > > @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f) > > /* Delay for the transmit to wait before sending an unfilled NTB frame. */ > > #define TX_TIMEOUT_NSECS 300000 > > > > +#define MAX_DATAGRAM_SIZE 15014 > > + > > #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ > > USB_CDC_NCM_NTB32_SUPPORTED) > > > > @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) > > ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); > > > > if (cdev->use_os_string) { > > + ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); > > f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), > > GFP_KERNEL); > > if (!f->os_desc_table) > > @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) > > > > status = -ENODEV; > > > > + ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; > > I think you need byte swap here. 
> > > + > > /* allocate instance-specific endpoints */ > > ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); > > if (!ep) > > @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); > > /* f_ncm_opts_ifname */ > > USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); > > > > +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, > > + char *page) > > +{ > > + struct f_ncm_opts *opts = to_f_ncm_opts(item); > > + u32 segment_size; > > + > > + mutex_lock(&opts->lock); > > + segment_size = opts->max_segment_size; > > + mutex_unlock(&opts->lock); > > + > > + return sprintf(page, "%u\n", segment_size); > > +} > > + > > +static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, > > + const char *page, size_t len) > > +{ > > + struct f_ncm_opts *opts = to_f_ncm_opts(item); > > + int ret; > > + u32 segment_size; > > + > > + mutex_lock(&opts->lock); > > + if (opts->refcnt) { > > + ret = -EBUSY; > > + goto out; > > + } > > + > > + ret = kstrtou32(page, 0, &segment_size); > > + if (ret) > > + goto out; > > + > > + if (segment_size > MAX_DATAGRAM_SIZE) { > > + ret = -EINVAL; > > + goto out; > > + } > > + > > + opts->max_segment_size = segment_size; > > + ret = len; > > +out: > > + mutex_unlock(&opts->lock); > > + return ret; > > +} > > + > > +CONFIGFS_ATTR(ncm_opts_, max_segment_size); > > + > > static struct configfs_attribute *ncm_attrs[] = { > > &ncm_opts_attr_dev_addr, > > &ncm_opts_attr_host_addr, > > &ncm_opts_attr_qmult, > > &ncm_opts_attr_ifname, > > + &ncm_opts_attr_max_segment_size, > > NULL, > > }; > > > > @@ -1616,6 +1666,7 @@ static struct usb_function_instance *ncm_alloc_inst(void) > > kfree(opts); > > return ERR_CAST(net); > > } > > + opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); > > and not here. ie. max_segment_size should be native endian > > > INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); > > > > descs[0] = &opts->ncm_os_desc; > > diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h > > index 5408854d8407..d3403cf13f17 100644 > > --- a/drivers/usb/gadget/function/u_ncm.h > > +++ b/drivers/usb/gadget/function/u_ncm.h > > @@ -31,6 +31,8 @@ struct f_ncm_opts { > > */ > > struct mutex lock; > > int refcnt; > > + > > + u32 max_segment_size; > > }; > > > > #endif /* U_NCM_H */ > > -- > > 2.42.0 > > > > That said, I don't really follow what this is doing... Also static struct usb_cdc_ether_desc ecm_desc = { ... .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN), ^ I think this should be deleted now, right? since it's always overwritten? And if it isn't always overwritten, that would be a bug I think, cause what happens if you bring up 2 ncm devices and only change the setting on the 1st?
On Mon, Oct 9, 2023 at 5:20 PM Maciej Żenczykowski <maze@google.com> wrote: > > On Mon, Oct 9, 2023 at 5:17 PM Maciej Żenczykowski <maze@google.com> wrote: > > > > On Mon, Oct 9, 2023 at 7:20 AM Krishna Kurapati > > <quic_kriskura@quicinc.com> wrote: > > > > > > Currently the NCM driver restricts wMaxSegmentSize that indicates > > > the datagram size coming from network layer to 1514. However the > > > spec doesn't have any limitation. For P2P connections over NCM, > > > increasing MTU helps increasing throughput. > > > > > > Add support to configure this value before configfs symlink is > > > created. Also since the NTB Out/In buffer sizes are fixed at 16384 > > > bytes, limit the segment size to an upper cap of 15014. Set the > > > default MTU size for the ncm interface during function bind before > > > network interface is registered allowing MTU to be set in parity > > > with wMaxSegmentSize. > > > > > > Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com> > > > --- > > > drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++ > > > drivers/usb/gadget/function/u_ncm.h | 2 ++ > > > 2 files changed, 53 insertions(+) > > > > > > diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c > > > index feccf4c8cc4f..eab297b22200 100644 > > > --- a/drivers/usb/gadget/function/f_ncm.c > > > +++ b/drivers/usb/gadget/function/f_ncm.c > > > @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f) > > > /* Delay for the transmit to wait before sending an unfilled NTB frame. */ > > > #define TX_TIMEOUT_NSECS 300000 > > > > > > +#define MAX_DATAGRAM_SIZE 15014 > > > + > > > #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ > > > USB_CDC_NCM_NTB32_SUPPORTED) > > > > > > @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) > > > ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); > > > > > > if (cdev->use_os_string) { > > > + ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); > > > f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), > > > GFP_KERNEL); > > > if (!f->os_desc_table) > > > @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) > > > > > > status = -ENODEV; > > > > > > + ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; > > > > I think you need byte swap here. 
> > > > > + > > > /* allocate instance-specific endpoints */ > > > ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); > > > if (!ep) > > > @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); > > > /* f_ncm_opts_ifname */ > > > USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); > > > > > > +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, > > > + char *page) > > > +{ > > > + struct f_ncm_opts *opts = to_f_ncm_opts(item); > > > + u32 segment_size; > > > + > > > + mutex_lock(&opts->lock); > > > + segment_size = opts->max_segment_size; > > > + mutex_unlock(&opts->lock); > > > + > > > + return sprintf(page, "%u\n", segment_size); > > > +} > > > + > > > +static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, > > > + const char *page, size_t len) > > > +{ > > > + struct f_ncm_opts *opts = to_f_ncm_opts(item); > > > + int ret; > > > + u32 segment_size; > > > + > > > + mutex_lock(&opts->lock); > > > + if (opts->refcnt) { > > > + ret = -EBUSY; > > > + goto out; > > > + } > > > + > > > + ret = kstrtou32(page, 0, &segment_size); > > > + if (ret) > > > + goto out; > > > + > > > + if (segment_size > MAX_DATAGRAM_SIZE) { > > > + ret = -EINVAL; > > > + goto out; > > > + } > > > + > > > + opts->max_segment_size = segment_size; > > > + ret = len; > > > +out: > > > + mutex_unlock(&opts->lock); > > > + return ret; > > > +} > > > + > > > +CONFIGFS_ATTR(ncm_opts_, max_segment_size); > > > + > > > static struct configfs_attribute *ncm_attrs[] = { > > > &ncm_opts_attr_dev_addr, > > > &ncm_opts_attr_host_addr, > > > &ncm_opts_attr_qmult, > > > &ncm_opts_attr_ifname, > > > + &ncm_opts_attr_max_segment_size, > > > NULL, > > > }; > > > > > > @@ -1616,6 +1666,7 @@ static struct usb_function_instance *ncm_alloc_inst(void) > > > kfree(opts); > > > return ERR_CAST(net); > > > } > > > + opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); > > > > and not here. ie. max_segment_size should be native endian > > > > > INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); > > > > > > descs[0] = &opts->ncm_os_desc; > > > diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h > > > index 5408854d8407..d3403cf13f17 100644 > > > --- a/drivers/usb/gadget/function/u_ncm.h > > > +++ b/drivers/usb/gadget/function/u_ncm.h > > > @@ -31,6 +31,8 @@ struct f_ncm_opts { > > > */ > > > struct mutex lock; > > > int refcnt; > > > + > > > + u32 max_segment_size; > > > }; > > > > > > #endif /* U_NCM_H */ > > > -- > > > 2.42.0 > > > > > > > That said, I don't really follow what this is doing... > > Also > > static struct usb_cdc_ether_desc ecm_desc = { > ... > .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN), > > ^ I think this should be deleted now, right? since it's always overwritten? > And if it isn't always overwritten, that would be a bug I think, cause > what happens if you bring up 2 ncm devices and only change the setting > on the 1st? One last thing... static int ncm_unwrap_ntb(struct gether *port, ... unsigned frame_max = le16_to_cpu(ecm_desc.wMaxSegmentSize); ^ is this a problem now if we have >1 gadget? how does it work then?
On 10/10/2023 5:47 AM, Maciej Żenczykowski wrote: > On Mon, Oct 9, 2023 at 7:20 AM Krishna Kurapati > <quic_kriskura@quicinc.com> wrote: >> >> Currently the NCM driver restricts wMaxSegmentSize that indicates >> the datagram size coming from network layer to 1514. However the >> spec doesn't have any limitation. For P2P connections over NCM, >> increasing MTU helps increasing throughput. >> >> Add support to configure this value before configfs symlink is >> created. Also since the NTB Out/In buffer sizes are fixed at 16384 >> bytes, limit the segment size to an upper cap of 15014. Set the >> default MTU size for the ncm interface during function bind before >> network interface is registered allowing MTU to be set in parity >> with wMaxSegmentSize. >> >> Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com> >> --- >> drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++ >> drivers/usb/gadget/function/u_ncm.h | 2 ++ >> 2 files changed, 53 insertions(+) >> >> diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c >> index feccf4c8cc4f..eab297b22200 100644 >> --- a/drivers/usb/gadget/function/f_ncm.c >> +++ b/drivers/usb/gadget/function/f_ncm.c >> @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f) >> /* Delay for the transmit to wait before sending an unfilled NTB frame. */ >> #define TX_TIMEOUT_NSECS 300000 >> >> +#define MAX_DATAGRAM_SIZE 15014 >> + >> #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ >> USB_CDC_NCM_NTB32_SUPPORTED) >> >> @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) >> ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); >> >> if (cdev->use_os_string) { >> + ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); >> f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), >> GFP_KERNEL); >> if (!f->os_desc_table) >> @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) >> >> status = -ENODEV; >> >> + ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; > > I think you need byte swap here. 
> >> + >> /* allocate instance-specific endpoints */ >> ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); >> if (!ep) >> @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); >> /* f_ncm_opts_ifname */ >> USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); >> >> +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, >> + char *page) >> +{ >> + struct f_ncm_opts *opts = to_f_ncm_opts(item); >> + u32 segment_size; >> + >> + mutex_lock(&opts->lock); >> + segment_size = opts->max_segment_size; >> + mutex_unlock(&opts->lock); >> + >> + return sprintf(page, "%u\n", segment_size); >> +} >> + >> +static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, >> + const char *page, size_t len) >> +{ >> + struct f_ncm_opts *opts = to_f_ncm_opts(item); >> + int ret; >> + u32 segment_size; >> + >> + mutex_lock(&opts->lock); >> + if (opts->refcnt) { >> + ret = -EBUSY; >> + goto out; >> + } >> + >> + ret = kstrtou32(page, 0, &segment_size); >> + if (ret) >> + goto out; >> + >> + if (segment_size > MAX_DATAGRAM_SIZE) { >> + ret = -EINVAL; >> + goto out; >> + } >> + >> + opts->max_segment_size = segment_size; >> + ret = len; >> +out: >> + mutex_unlock(&opts->lock); >> + return ret; >> +} >> + >> +CONFIGFS_ATTR(ncm_opts_, max_segment_size); >> + >> static struct configfs_attribute *ncm_attrs[] = { >> &ncm_opts_attr_dev_addr, >> &ncm_opts_attr_host_addr, >> &ncm_opts_attr_qmult, >> &ncm_opts_attr_ifname, >> + &ncm_opts_attr_max_segment_size, >> NULL, >> }; >> >> @@ -1616,6 +1666,7 @@ static struct usb_function_instance *ncm_alloc_inst(void) >> kfree(opts); >> return ERR_CAST(net); >> } >> + opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); > > and not here. ie. max_segment_size should be native endian > >> INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); >> >> descs[0] = &opts->ncm_os_desc; >> diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h >> index 5408854d8407..d3403cf13f17 100644 >> --- a/drivers/usb/gadget/function/u_ncm.h >> +++ b/drivers/usb/gadget/function/u_ncm.h >> @@ -31,6 +31,8 @@ struct f_ncm_opts { >> */ >> struct mutex lock; >> int refcnt; >> + >> + u32 max_segment_size; >> }; >> >> #endif /* U_NCM_H */ >> -- >> 2.42.0 >> > > That said, I don't really follow what this is doing... Hi Maciej, Can you help clarify what you think here is wrong ? Since increasing segment size give better throughputs in P2P connections that don't necessarily exchange internet data, we need to have the felxibility to change the value as per requirement of the application using the interface and hence this attempt at adding the configfs parameter. Regards, Krishna,
On 10/10/2023 5:50 AM, Maciej Żenczykowski wrote: > On Mon, Oct 9, 2023 at 5:17 PM Maciej Żenczykowski <maze@google.com> wrote: >> >> On Mon, Oct 9, 2023 at 7:20 AM Krishna Kurapati >> <quic_kriskura@quicinc.com> wrote: >>> >>> Currently the NCM driver restricts wMaxSegmentSize that indicates >>> the datagram size coming from network layer to 1514. However the >>> spec doesn't have any limitation. For P2P connections over NCM, >>> increasing MTU helps increasing throughput. >>> >>> Add support to configure this value before configfs symlink is >>> created. Also since the NTB Out/In buffer sizes are fixed at 16384 >>> bytes, limit the segment size to an upper cap of 15014. Set the >>> default MTU size for the ncm interface during function bind before >>> network interface is registered allowing MTU to be set in parity >>> with wMaxSegmentSize. >>> >>> Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com> >>> --- >>> drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++ >>> drivers/usb/gadget/function/u_ncm.h | 2 ++ >>> 2 files changed, 53 insertions(+) >>> >>> diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c >>> index feccf4c8cc4f..eab297b22200 100644 >>> --- a/drivers/usb/gadget/function/f_ncm.c >>> +++ b/drivers/usb/gadget/function/f_ncm.c >>> @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f) >>> /* Delay for the transmit to wait before sending an unfilled NTB frame. */ >>> #define TX_TIMEOUT_NSECS 300000 >>> >>> +#define MAX_DATAGRAM_SIZE 15014 >>> + >>> #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ >>> USB_CDC_NCM_NTB32_SUPPORTED) >>> >>> @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) >>> ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); >>> >>> if (cdev->use_os_string) { >>> + ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); >>> f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), >>> GFP_KERNEL); >>> if (!f->os_desc_table) >>> @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) >>> >>> status = -ENODEV; >>> >>> + ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; >> >> I think you need byte swap here. 
>> >>> + >>> /* allocate instance-specific endpoints */ >>> ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); >>> if (!ep) >>> @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); >>> /* f_ncm_opts_ifname */ >>> USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); >>> >>> +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, >>> + char *page) >>> +{ >>> + struct f_ncm_opts *opts = to_f_ncm_opts(item); >>> + u32 segment_size; >>> + >>> + mutex_lock(&opts->lock); >>> + segment_size = opts->max_segment_size; >>> + mutex_unlock(&opts->lock); >>> + >>> + return sprintf(page, "%u\n", segment_size); >>> +} >>> + >>> +static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, >>> + const char *page, size_t len) >>> +{ >>> + struct f_ncm_opts *opts = to_f_ncm_opts(item); >>> + int ret; >>> + u32 segment_size; >>> + >>> + mutex_lock(&opts->lock); >>> + if (opts->refcnt) { >>> + ret = -EBUSY; >>> + goto out; >>> + } >>> + >>> + ret = kstrtou32(page, 0, &segment_size); >>> + if (ret) >>> + goto out; >>> + >>> + if (segment_size > MAX_DATAGRAM_SIZE) { >>> + ret = -EINVAL; >>> + goto out; >>> + } >>> + >>> + opts->max_segment_size = segment_size; >>> + ret = len; >>> +out: >>> + mutex_unlock(&opts->lock); >>> + return ret; >>> +} >>> + >>> +CONFIGFS_ATTR(ncm_opts_, max_segment_size); >>> + >>> static struct configfs_attribute *ncm_attrs[] = { >>> &ncm_opts_attr_dev_addr, >>> &ncm_opts_attr_host_addr, >>> &ncm_opts_attr_qmult, >>> &ncm_opts_attr_ifname, >>> + &ncm_opts_attr_max_segment_size, >>> NULL, >>> }; >>> >>> @@ -1616,6 +1666,7 @@ static struct usb_function_instance *ncm_alloc_inst(void) >>> kfree(opts); >>> return ERR_CAST(net); >>> } >>> + opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); >> >> and not here. ie. max_segment_size should be native endian >> >>> INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); >>> >>> descs[0] = &opts->ncm_os_desc; >>> diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h >>> index 5408854d8407..d3403cf13f17 100644 >>> --- a/drivers/usb/gadget/function/u_ncm.h >>> +++ b/drivers/usb/gadget/function/u_ncm.h >>> @@ -31,6 +31,8 @@ struct f_ncm_opts { >>> */ >>> struct mutex lock; >>> int refcnt; >>> + >>> + u32 max_segment_size; >>> }; >>> >>> #endif /* U_NCM_H */ >>> -- >>> 2.42.0 >>> >> >> That said, I don't really follow what this is doing... > > Also > > static struct usb_cdc_ether_desc ecm_desc = { > ... > .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN), > > ^ I think this should be deleted now, right? since it's always overwritten? > And if it isn't always overwritten, that would be a bug I think, cause > what happens if you bring up 2 ncm devices and only change the setting > on the 1st? Actually, keeping it or removing it doesn't affect because the default value of this will be set in bind call via opts->max_segment_size. And this max_segment_size variable is defaulted to ETH_FRAME_LEN in alloc_inst. After bind, writing to this value doesn't take effect. Before the netdev interface was registered, if this value is updated then MTU is changed automatically for this interface. If there are two NCM devices, then each of them will have their own instance of wMaxSegmentSize property in their usb_gadget/g1/functions/ncm.X directories and both of them are defaulted ETH_FRAME_LEN if the configfs property is not disturbed. In case only one needs to change, we can change via configfs and the other stays as ETH_FRAME_LEN only. Regards, Krishna,
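To illustrate the per-instance layout Krishna describes, a hedged sketch of
the configfs paths involved (the gadget name g1 and the instance names
ncm.0/ncm.1 are examples, and the usual configfs mount point is assumed):

        /sys/kernel/config/usb_gadget/g1/functions/ncm.0/max_segment_size
        /sys/kernel/config/usb_gadget/g1/functions/ncm.1/max_segment_size

Each attribute belongs to its own f_ncm_opts instance, so writing one of
them leaves the other at the ETH_FRAME_LEN default set in ncm_alloc_inst().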
On 10/10/2023 6:07 AM, Maciej Żenczykowski wrote: > On Mon, Oct 9, 2023 at 5:20 PM Maciej Żenczykowski <maze@google.com> wrote: >> >> On Mon, Oct 9, 2023 at 5:17 PM Maciej Żenczykowski <maze@google.com> wrote: >>> >>> On Mon, Oct 9, 2023 at 7:20 AM Krishna Kurapati >>> <quic_kriskura@quicinc.com> wrote: >>>> >>>> Currently the NCM driver restricts wMaxSegmentSize that indicates >>>> the datagram size coming from network layer to 1514. However the >>>> spec doesn't have any limitation. For P2P connections over NCM, >>>> increasing MTU helps increasing throughput. >>>> >>>> Add support to configure this value before configfs symlink is >>>> created. Also since the NTB Out/In buffer sizes are fixed at 16384 >>>> bytes, limit the segment size to an upper cap of 15014. Set the >>>> default MTU size for the ncm interface during function bind before >>>> network interface is registered allowing MTU to be set in parity >>>> with wMaxSegmentSize. >>>> >>>> Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com> >>>> --- >>>> drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++ >>>> drivers/usb/gadget/function/u_ncm.h | 2 ++ >>>> 2 files changed, 53 insertions(+) >>>> >>>> diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c >>>> index feccf4c8cc4f..eab297b22200 100644 >>>> --- a/drivers/usb/gadget/function/f_ncm.c >>>> +++ b/drivers/usb/gadget/function/f_ncm.c >>>> @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f) >>>> /* Delay for the transmit to wait before sending an unfilled NTB frame. */ >>>> #define TX_TIMEOUT_NSECS 300000 >>>> >>>> +#define MAX_DATAGRAM_SIZE 15014 >>>> + >>>> #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ >>>> USB_CDC_NCM_NTB32_SUPPORTED) >>>> >>>> @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) >>>> ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); >>>> >>>> if (cdev->use_os_string) { >>>> + ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); >>>> f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), >>>> GFP_KERNEL); >>>> if (!f->os_desc_table) >>>> @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) >>>> >>>> status = -ENODEV; >>>> >>>> + ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; >>> >>> I think you need byte swap here. 
>>> >>>> + >>>> /* allocate instance-specific endpoints */ >>>> ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); >>>> if (!ep) >>>> @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); >>>> /* f_ncm_opts_ifname */ >>>> USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); >>>> >>>> +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, >>>> + char *page) >>>> +{ >>>> + struct f_ncm_opts *opts = to_f_ncm_opts(item); >>>> + u32 segment_size; >>>> + >>>> + mutex_lock(&opts->lock); >>>> + segment_size = opts->max_segment_size; >>>> + mutex_unlock(&opts->lock); >>>> + >>>> + return sprintf(page, "%u\n", segment_size); >>>> +} >>>> + >>>> +static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, >>>> + const char *page, size_t len) >>>> +{ >>>> + struct f_ncm_opts *opts = to_f_ncm_opts(item); >>>> + int ret; >>>> + u32 segment_size; >>>> + >>>> + mutex_lock(&opts->lock); >>>> + if (opts->refcnt) { >>>> + ret = -EBUSY; >>>> + goto out; >>>> + } >>>> + >>>> + ret = kstrtou32(page, 0, &segment_size); >>>> + if (ret) >>>> + goto out; >>>> + >>>> + if (segment_size > MAX_DATAGRAM_SIZE) { >>>> + ret = -EINVAL; >>>> + goto out; >>>> + } >>>> + >>>> + opts->max_segment_size = segment_size; >>>> + ret = len; >>>> +out: >>>> + mutex_unlock(&opts->lock); >>>> + return ret; >>>> +} >>>> + >>>> +CONFIGFS_ATTR(ncm_opts_, max_segment_size); >>>> + >>>> static struct configfs_attribute *ncm_attrs[] = { >>>> &ncm_opts_attr_dev_addr, >>>> &ncm_opts_attr_host_addr, >>>> &ncm_opts_attr_qmult, >>>> &ncm_opts_attr_ifname, >>>> + &ncm_opts_attr_max_segment_size, >>>> NULL, >>>> }; >>>> >>>> @@ -1616,6 +1666,7 @@ static struct usb_function_instance *ncm_alloc_inst(void) >>>> kfree(opts); >>>> return ERR_CAST(net); >>>> } >>>> + opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); >>> >>> and not here. ie. max_segment_size should be native endian >>> >>>> INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); >>>> >>>> descs[0] = &opts->ncm_os_desc; >>>> diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h >>>> index 5408854d8407..d3403cf13f17 100644 >>>> --- a/drivers/usb/gadget/function/u_ncm.h >>>> +++ b/drivers/usb/gadget/function/u_ncm.h >>>> @@ -31,6 +31,8 @@ struct f_ncm_opts { >>>> */ >>>> struct mutex lock; >>>> int refcnt; >>>> + >>>> + u32 max_segment_size; >>>> }; >>>> >>>> #endif /* U_NCM_H */ >>>> -- >>>> 2.42.0 >>>> >>> >>> That said, I don't really follow what this is doing... >> >> Also >> >> static struct usb_cdc_ether_desc ecm_desc = { >> ... >> .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN), >> >> ^ I think this should be deleted now, right? since it's always overwritten? >> And if it isn't always overwritten, that would be a bug I think, cause >> what happens if you bring up 2 ncm devices and only change the setting >> on the 1st? > > One last thing... > > static int ncm_unwrap_ntb(struct gether *port, > ... > unsigned frame_max = le16_to_cpu(ecm_desc.wMaxSegmentSize); > > ^ is this a problem now if we have >1 gadget? > how does it work then? You are right. This would effect unwrap call and the wMaxSegmentSize is used directly. Thanks for the catch. I didn't test with 2 NCM interfaces and hence I wasn't able to find this bug. Perhaps changing this to opts->max_segment_size would fix the implementation as unwrap would anyways be called after bind. Regards, Krishna,
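A rough sketch of what "changing this to opts->max_segment_size" could look
like inside ncm_unwrap_ntb() (an assumption at this point in the thread; it
is essentially what the diff posted further down does):

        struct f_ncm_opts *ncm_opts;
        unsigned int frame_max;

        /* use the per-instance options instead of the shared ecm_desc */
        ncm_opts = container_of(port->func.fi, struct f_ncm_opts, func_inst);
        frame_max = ncm_opts->max_segment_size;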
Hi Krishna, kernel test robot noticed the following build warnings: [auto build test WARNING on usb/usb-testing] [also build test WARNING on usb/usb-next usb/usb-linus linus/master v6.6-rc5 next-20231010] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Krishna-Kurapati/usb-gadget-ncm-Add-support-to-update-wMaxSegmentSize-via-configfs/20231009-222315 base: https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git usb-testing patch link: https://lore.kernel.org/r/20231009142005.21338-2-quic_kriskura%40quicinc.com patch subject: [PATCH 2/2] usb: gadget: ncm: Add support to update wMaxSegmentSize via configfs config: i386-randconfig-062-20231010 (https://download.01.org/0day-ci/archive/20231011/202310110658.n9yg3tJy-lkp@intel.com/config) compiler: gcc-9 (Debian 9.3.0-22) 9.3.0 reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231011/202310110658.n9yg3tJy-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202310110658.n9yg3tJy-lkp@intel.com/ sparse warnings: (new ones prefixed by >>) >> drivers/usb/gadget/function/f_ncm.c:1475:34: sparse: sparse: incorrect type in assignment (different base types) @@ expected restricted __le16 static [addressable] [assigned] [toplevel] [usertype] wMaxSegmentSize @@ got unsigned int [usertype] max_segment_size @@ drivers/usb/gadget/function/f_ncm.c:1475:34: sparse: expected restricted __le16 static [addressable] [assigned] [toplevel] [usertype] wMaxSegmentSize drivers/usb/gadget/function/f_ncm.c:1475:34: sparse: got unsigned int [usertype] max_segment_size >> drivers/usb/gadget/function/f_ncm.c:1669:32: sparse: sparse: incorrect type in assignment (different base types) @@ expected unsigned int [usertype] max_segment_size @@ got restricted __le16 [usertype] @@ drivers/usb/gadget/function/f_ncm.c:1669:32: sparse: expected unsigned int [usertype] max_segment_size drivers/usb/gadget/function/f_ncm.c:1669:32: sparse: got restricted __le16 [usertype] vim +1475 drivers/usb/gadget/function/f_ncm.c 1397 1398 static int ncm_bind(struct usb_configuration *c, struct usb_function *f) 1399 { 1400 struct usb_composite_dev *cdev = c->cdev; 1401 struct f_ncm *ncm = func_to_ncm(f); 1402 struct usb_string *us; 1403 int status; 1404 struct usb_ep *ep; 1405 struct f_ncm_opts *ncm_opts; 1406 1407 if (!can_support_ecm(cdev->gadget)) 1408 return -EINVAL; 1409 1410 ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); 1411 1412 if (cdev->use_os_string) { 1413 ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); 1414 f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), 1415 GFP_KERNEL); 1416 if (!f->os_desc_table) 1417 return -ENOMEM; 1418 f->os_desc_n = 1; 1419 f->os_desc_table[0].os_desc = &ncm_opts->ncm_os_desc; 1420 } 1421 1422 /* 1423 * in drivers/usb/gadget/configfs.c:configfs_composite_bind() 1424 * configurations are bound in sequence with list_for_each_entry, 1425 * in each configuration its functions are bound in sequence 1426 * with list_for_each_entry, so we assume no race condition 1427 * with regard to ncm_opts->bound access 1428 */ 1429 if (!ncm_opts->bound) { 1430 mutex_lock(&ncm_opts->lock); 1431 
gether_set_gadget(ncm_opts->net, cdev->gadget); 1432 status = gether_register_netdev(ncm_opts->net); 1433 mutex_unlock(&ncm_opts->lock); 1434 if (status) 1435 goto fail; 1436 ncm_opts->bound = true; 1437 } 1438 us = usb_gstrings_attach(cdev, ncm_strings, 1439 ARRAY_SIZE(ncm_string_defs)); 1440 if (IS_ERR(us)) { 1441 status = PTR_ERR(us); 1442 goto fail; 1443 } 1444 ncm_control_intf.iInterface = us[STRING_CTRL_IDX].id; 1445 ncm_data_nop_intf.iInterface = us[STRING_DATA_IDX].id; 1446 ncm_data_intf.iInterface = us[STRING_DATA_IDX].id; 1447 ecm_desc.iMACAddress = us[STRING_MAC_IDX].id; 1448 ncm_iad_desc.iFunction = us[STRING_IAD_IDX].id; 1449 1450 /* allocate instance-specific interface IDs */ 1451 status = usb_interface_id(c, f); 1452 if (status < 0) 1453 goto fail; 1454 ncm->ctrl_id = status; 1455 ncm_iad_desc.bFirstInterface = status; 1456 1457 ncm_control_intf.bInterfaceNumber = status; 1458 ncm_union_desc.bMasterInterface0 = status; 1459 1460 if (cdev->use_os_string) 1461 f->os_desc_table[0].if_id = 1462 ncm_iad_desc.bFirstInterface; 1463 1464 status = usb_interface_id(c, f); 1465 if (status < 0) 1466 goto fail; 1467 ncm->data_id = status; 1468 1469 ncm_data_nop_intf.bInterfaceNumber = status; 1470 ncm_data_intf.bInterfaceNumber = status; 1471 ncm_union_desc.bSlaveInterface0 = status; 1472 1473 status = -ENODEV; 1474 > 1475 ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; 1476 1477 /* allocate instance-specific endpoints */ 1478 ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); 1479 if (!ep) 1480 goto fail; 1481 ncm->port.in_ep = ep; 1482 1483 ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_out_desc); 1484 if (!ep) 1485 goto fail; 1486 ncm->port.out_ep = ep; 1487 1488 ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_notify_desc); 1489 if (!ep) 1490 goto fail; 1491 ncm->notify = ep; 1492 1493 status = -ENOMEM; 1494 1495 /* allocate notification request and buffer */ 1496 ncm->notify_req = usb_ep_alloc_request(ep, GFP_KERNEL); 1497 if (!ncm->notify_req) 1498 goto fail; 1499 ncm->notify_req->buf = kmalloc(NCM_STATUS_BYTECOUNT, GFP_KERNEL); 1500 if (!ncm->notify_req->buf) 1501 goto fail; 1502 ncm->notify_req->context = ncm; 1503 ncm->notify_req->complete = ncm_notify_complete; 1504 1505 /* 1506 * support all relevant hardware speeds... we expect that when 1507 * hardware is dual speed, all bulk-capable endpoints work at 1508 * both speeds 1509 */ 1510 hs_ncm_in_desc.bEndpointAddress = fs_ncm_in_desc.bEndpointAddress; 1511 hs_ncm_out_desc.bEndpointAddress = fs_ncm_out_desc.bEndpointAddress; 1512 hs_ncm_notify_desc.bEndpointAddress = 1513 fs_ncm_notify_desc.bEndpointAddress; 1514 1515 ss_ncm_in_desc.bEndpointAddress = fs_ncm_in_desc.bEndpointAddress; 1516 ss_ncm_out_desc.bEndpointAddress = fs_ncm_out_desc.bEndpointAddress; 1517 ss_ncm_notify_desc.bEndpointAddress = 1518 fs_ncm_notify_desc.bEndpointAddress; 1519 1520 status = usb_assign_descriptors(f, ncm_fs_function, ncm_hs_function, 1521 ncm_ss_function, ncm_ss_function); 1522 if (status) 1523 goto fail; 1524 1525 /* 1526 * NOTE: all that is done without knowing or caring about 1527 * the network link ... which is unavailable to this code 1528 * until we're activated via set_alt(). 
1529 */ 1530 1531 ncm->port.open = ncm_open; 1532 ncm->port.close = ncm_close; 1533 1534 hrtimer_init(&ncm->task_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT); 1535 ncm->task_timer.function = ncm_tx_timeout; 1536 1537 DBG(cdev, "CDC Network: IN/%s OUT/%s NOTIFY/%s\n", 1538 ncm->port.in_ep->name, ncm->port.out_ep->name, 1539 ncm->notify->name); 1540 return 0; 1541 1542 fail: 1543 kfree(f->os_desc_table); 1544 f->os_desc_n = 0; 1545 1546 if (ncm->notify_req) { 1547 kfree(ncm->notify_req->buf); 1548 usb_ep_free_request(ncm->notify, ncm->notify_req); 1549 } 1550 1551 ERROR(cdev, "%s: can't bind, err %d\n", f->name, status); 1552 1553 return status; 1554 } 1555 1556 static inline struct f_ncm_opts *to_f_ncm_opts(struct config_item *item) 1557 { 1558 return container_of(to_config_group(item), struct f_ncm_opts, 1559 func_inst.group); 1560 } 1561 1562 /* f_ncm_item_ops */ 1563 USB_ETHERNET_CONFIGFS_ITEM(ncm); 1564 1565 /* f_ncm_opts_dev_addr */ 1566 USB_ETHERNET_CONFIGFS_ITEM_ATTR_DEV_ADDR(ncm); 1567 1568 /* f_ncm_opts_host_addr */ 1569 USB_ETHERNET_CONFIGFS_ITEM_ATTR_HOST_ADDR(ncm); 1570 1571 /* f_ncm_opts_qmult */ 1572 USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); 1573 1574 /* f_ncm_opts_ifname */ 1575 USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); 1576 1577 static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, 1578 char *page) 1579 { 1580 struct f_ncm_opts *opts = to_f_ncm_opts(item); 1581 u32 segment_size; 1582 1583 mutex_lock(&opts->lock); 1584 segment_size = opts->max_segment_size; 1585 mutex_unlock(&opts->lock); 1586 1587 return sprintf(page, "%u\n", segment_size); 1588 } 1589 1590 static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, 1591 const char *page, size_t len) 1592 { 1593 struct f_ncm_opts *opts = to_f_ncm_opts(item); 1594 int ret; 1595 u32 segment_size; 1596 1597 mutex_lock(&opts->lock); 1598 if (opts->refcnt) { 1599 ret = -EBUSY; 1600 goto out; 1601 } 1602 1603 ret = kstrtou32(page, 0, &segment_size); 1604 if (ret) 1605 goto out; 1606 1607 if (segment_size > MAX_DATAGRAM_SIZE) { 1608 ret = -EINVAL; 1609 goto out; 1610 } 1611 1612 opts->max_segment_size = segment_size; 1613 ret = len; 1614 out: 1615 mutex_unlock(&opts->lock); 1616 return ret; 1617 } 1618 1619 CONFIGFS_ATTR(ncm_opts_, max_segment_size); 1620 1621 static struct configfs_attribute *ncm_attrs[] = { 1622 &ncm_opts_attr_dev_addr, 1623 &ncm_opts_attr_host_addr, 1624 &ncm_opts_attr_qmult, 1625 &ncm_opts_attr_ifname, 1626 &ncm_opts_attr_max_segment_size, 1627 NULL, 1628 }; 1629 1630 static const struct config_item_type ncm_func_type = { 1631 .ct_item_ops = &ncm_item_ops, 1632 .ct_attrs = ncm_attrs, 1633 .ct_owner = THIS_MODULE, 1634 }; 1635 1636 static void ncm_free_inst(struct usb_function_instance *f) 1637 { 1638 struct f_ncm_opts *opts; 1639 1640 opts = container_of(f, struct f_ncm_opts, func_inst); 1641 if (opts->bound) 1642 gether_cleanup(netdev_priv(opts->net)); 1643 else 1644 free_netdev(opts->net); 1645 kfree(opts->ncm_interf_group); 1646 kfree(opts); 1647 } 1648 1649 static struct usb_function_instance *ncm_alloc_inst(void) 1650 { 1651 struct f_ncm_opts *opts; 1652 struct usb_os_desc *descs[1]; 1653 char *names[1]; 1654 struct config_group *ncm_interf_group; 1655 1656 opts = kzalloc(sizeof(*opts), GFP_KERNEL); 1657 if (!opts) 1658 return ERR_PTR(-ENOMEM); 1659 opts->ncm_os_desc.ext_compat_id = opts->ncm_ext_compat_id; 1660 1661 mutex_init(&opts->lock); 1662 opts->func_inst.free_func_inst = ncm_free_inst; 1663 opts->net = gether_setup_default(); 1664 if (IS_ERR(opts->net)) { 1665 
struct net_device *net = opts->net; 1666 kfree(opts); 1667 return ERR_CAST(net); 1668 } > 1669 opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); 1670 INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); 1671 1672 descs[0] = &opts->ncm_os_desc; 1673 names[0] = "ncm"; 1674 1675 config_group_init_type_name(&opts->func_inst.group, "", &ncm_func_type); 1676 ncm_interf_group = 1677 usb_os_desc_prepare_interf_dir(&opts->func_inst.group, 1, descs, 1678 names, THIS_MODULE); 1679 if (IS_ERR(ncm_interf_group)) { 1680 ncm_free_inst(&opts->func_inst); 1681 return ERR_CAST(ncm_interf_group); 1682 } 1683 opts->ncm_interf_group = ncm_interf_group; 1684 1685 return &opts->func_inst; 1686 } 1687
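The two sparse warnings map to the two assignments that mix a plain u32
with the __le16 descriptor field. A possible fix, consistent with Maciej's
earlier review comments (an assumption, not the posted v2), keeps the
configfs value native-endian and converts only at the descriptor:

        /* f_ncm.c:1475 - convert when filling the little-endian descriptor */
        ecm_desc.wMaxSegmentSize = cpu_to_le16(ncm_opts->max_segment_size);

        /* f_ncm.c:1669 - no byte swap; max_segment_size is a plain u32 */
        opts->max_segment_size = ETH_FRAME_LEN;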
On 10/10/2023 10:08 AM, Krishna Kurapati PSSNV wrote:
>
>>
>> ^ is this a problem now if we have >1 gadget?
>> how does it work then?
>
>
> You are right. This would effect unwrap call and the wMaxSegmentSize is
> used directly. Thanks for the catch. I didn't test with 2 NCM interfaces
> and hence I wasn't able to find this bug. Perhaps changing this to
> opts->max_segment_size would fix the implementation as unwrap would
> anyways be called after bind.

Hi Maciej,

How about the below diff:

---------

+/*
+ * Allow max segment size to be in parity with max_mtu possible
+ * for the interface.
+ */
+#define MAX_DATAGRAM_SIZE       GETHER_MAX_ETH_FRAME_LEN
+
 #define FORMATS_SUPPORTED       (USB_CDC_NCM_NTB16_SUPPORTED |  \
                                  USB_CDC_NCM_NTB32_SUPPORTED)

@@ -194,7 +200,6 @@ static struct usb_cdc_ether_desc ecm_desc = {
        /* this descriptor actually adds value, surprise! */
        /* .iMACAddress = DYNAMIC */
        .bmEthernetStatistics = cpu_to_le32(0), /* no statistics */
-       .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN),
        .wNumberMCFilters = cpu_to_le16(0),
        .bNumberPowerFilters = 0,
 };
@@ -1180,10 +1185,15 @@ static int ncm_unwrap_ntb(struct gether *port,
        struct sk_buff  *skb2;
        int             ret = -EINVAL;
        unsigned        ntb_max = le32_to_cpu(ntb_parameters.dwNtbOutMaxSize);
-       unsigned        frame_max = le16_to_cpu(ecm_desc.wMaxSegmentSize);
+       unsigned int    frame_max;
        const struct ndp_parser_opts *opts = ncm->parser_opts;
        unsigned        crc_len = ncm->is_crc ? sizeof(uint32_t) : 0;
        int             dgram_counter;
+       struct f_ncm_opts *ncm_opts;
+       const struct usb_function_instance *fi = port->func.fi;
+
+       ncm_opts = container_of(fi, struct f_ncm_opts, func_inst);
+       frame_max = ncm_opts->max_segment_size;

        /* dwSignature */
        if (get_unaligned_le32(tmp) != opts->nth_sign) {
@@ -1440,6 +1450,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
         */
        if (!ncm_opts->bound) {
                mutex_lock(&ncm_opts->lock);
+               ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN);
                gether_set_gadget(ncm_opts->net, cdev->gadget);
                status = gether_register_netdev(ncm_opts->net);
                mutex_unlock(&ncm_opts->lock);
@@ -1484,6 +1495,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)

        status = -ENODEV;

+       ecm_desc.wMaxSegmentSize = (__le16)ncm_opts->max_segment_size;
+

------

I can limit the max segment size to (Max MTU + ETH_HLEN) and this would be
logical to do. Also, we can set the frame_max from ncm_opts itself while
initializing it to 1514 (the default value) during the alloc_inst callback,
and nothing would break while still being backward compatible.

Let me know your thoughts on this.

Regards,
Krishna,
On Thu, Oct 12, 2023 at 1:48 AM Krishna Kurapati PSSNV <quic_kriskura@quicinc.com> wrote: > > > > On 10/10/2023 10:08 AM, Krishna Kurapati PSSNV wrote: > > > > >> > >> ^ is this a problem now if we have >1 gadget? > >> how does it work then? > > > > > > You are right. This would effect unwrap call and the wMaxSegmentSize is > > used directly. Thanks for the catch. I didn't test with 2 NCM interfaces > > and hence I wasn't able to find this bug. Perhaps changing this to > > opts->max_segment_size would fix the implementation as unwrap would > > anyways be called after bind. > > Hi Maciej, > > How about the below diff: > > --------- > > +/* > + * Allow max segment size to be in parity with max_mtu possible > + * for the interface. > + */ > +#define MAX_DATAGRAM_SIZE GETHER_MAX_ETH_FRAME_LEN > + > #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ > USB_CDC_NCM_NTB32_SUPPORTED) > > @@ -194,7 +200,6 @@ static struct usb_cdc_ether_desc ecm_desc = { > /* this descriptor actually adds value, surprise! */ > /* .iMACAddress = DYNAMIC */ > .bmEthernetStatistics = cpu_to_le32(0), /* no statistics */ > - .wMaxSegmentSize = cpu_to_le16(ETH_FRAME_LEN), > .wNumberMCFilters = cpu_to_le16(0), > .bNumberPowerFilters = 0, > }; > @@ -1180,10 +1185,15 @@ static int ncm_unwrap_ntb(struct gether *port, > struct sk_buff *skb2; > int ret = -EINVAL; > unsigned ntb_max = > le32_to_cpu(ntb_parameters.dwNtbOutMaxSize); > - unsigned frame_max = le16_to_cpu(ecm_desc.wMaxSegmentSize); > + unsigned int frame_max; > const struct ndp_parser_opts *opts = ncm->parser_opts; > unsigned crc_len = ncm->is_crc ? sizeof(uint32_t) : 0; > int dgram_counter; > + struct f_ncm_opts *ncm_opts; > + const struct usb_function_instance *fi = port->func.fi; > + > + ncm_opts = container_of(fi, struct f_ncm_opts, func_inst); > + frame_max = ncm_opts->max_segment_size; > > /* dwSignature */ > if (get_unaligned_le32(tmp) != opts->nth_sign) { > @@ -1440,6 +1450,7 @@ static int ncm_bind(struct usb_configuration *c, > struct usb_function *f) > */ > if (!ncm_opts->bound) { > mutex_lock(&ncm_opts->lock); > + ncm_opts->net->mtu = (ncm_opts->max_segment_size - > ETH_HLEN); > gether_set_gadget(ncm_opts->net, cdev->gadget); > status = gether_register_netdev(ncm_opts->net); > mutex_unlock(&ncm_opts->lock); > @@ -1484,6 +1495,8 @@ static int ncm_bind(struct usb_configuration *c, > struct usb_function *f) > > status = -ENODEV; > > + ecm_desc.wMaxSegmentSize = (__le16)ncm_opts->max_segment_size; this looks wrong. pretty sure this should be some form of cpu_to_le16 > + > > ------ > > I can limit the max segment size to (Max MTU + ETH_HELN) and this would > be logical to do. Also we can set the frame_max from ncm_opts itself > while initializing it to 1514 (default value) during alloc_inst callback > and nothing would break while still being backward compatible. > > Let me know your thoughts on this. > > Regards, > Krishna, Could you paste the full patch? This is hard to review without looking at much more context then email is providing (or, even better, send me a link to a CL in gerrit somewhere - for example aosp ACK mainline tree)
On 10/12/2023 6:02 PM, Maciej Żenczykowski wrote:
> On Thu, Oct 12, 2023 at 1:48 AM Krishna Kurapati PSSNV
>
> Could you paste the full patch?
> This is hard to review without looking at much more context then email
> is providing
> (or, even better, send me a link to a CL in gerrit somewhere - for
> example aosp ACK mainline tree)

Sure. Will provide a gerrit on ACK for review before posting v2.

The intent of posting the diff was twofold:

1. The question Greg asked regarding why the max segment size was limited
to 15014 was valid. When I thought about it, I actually wanted to limit the
max MTU to 15000, so the max segment size automatically needs to be limited
to 15014. But my commit text didn't mention this properly, which was a
mistake on my behalf. When I looked at the code, limiting the max segment
size to 15014 would force the practical max_mtu to not cross 15000,
although the theoretical max_mtu was set to GETHER_MAX_MTU_SIZE (15412)
during registration of the net device.

So my assumption of limiting it to 15000 was wrong. It must be limited to
15412 as mentioned in u_ether.c. This in turn means we must limit
max_segment_size to GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE +
ETH_HLEN) as mentioned in u_ether.c.

I wanted to confirm that setting MAX_DATAGRAM_SIZE to
GETHER_MAX_ETH_FRAME_LEN was correct.

2. I am not actually able to test with an MTU beyond 15000. When my host
device is a linux machine, cdc_ncm.c limits max_segment_size to:
CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */

When connected to a windows machine, I am able to set the mtu to a max
value of 15000. So not sure how to test the patch if I set the
max_segment_size to GETHER_MAX_ETH_FRAME_LEN.

By pasting the diff, I wanted to confirm both the above queries.

And you are right, while assigning the value to ecm.wMaxSegmentSize, we
must use cpu_to_le16(...). Will ensure to make this change in v2. It worked
without that too, not sure how.

Let me know your thoughts on the above.

Regards,
Krishna,
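For reference, the two limits being compared in this message fit together
as follows (using only the values stated in the thread):

        GETHER_MAX_MTU_SIZE      = 15412
        GETHER_MAX_ETH_FRAME_LEN = GETHER_MAX_MTU_SIZE + ETH_HLEN
                                 = 15412 + 14 = 15426

So capping max_segment_size at GETHER_MAX_ETH_FRAME_LEN allows an MTU of up
to 15412 rather than the 15000 assumed in the original commit message.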
On Thu, Oct 12, 2023 at 8:40 AM Krishna Kurapati PSSNV <quic_kriskura@quicinc.com> wrote: > > > > On 10/12/2023 6:02 PM, Maciej Żenczykowski wrote: > > On Thu, Oct 12, 2023 at 1:48 AM Krishna Kurapati PSSNV > > > > Could you paste the full patch? > > This is hard to review without looking at much more context then email > > is providing > > (or, even better, send me a link to a CL in gerrit somewhere - for > > example aosp ACK mainline tree) > > Sure. Will provide a gerrit on ACK for review before posting v2. > > The intent of posting the diff was two fold: > > 1. The question Greg asked regarding why the max segment size was > limited to 15014 was valid. When I thought about it, I actually wanted > to limit the max MTU to 15000, so the max segment size automatically > needs to be limited to 15014. Note that this is a *very* abstract value. I get you want L3 MTU of 10 * 1500, but this value is not actually meaningful. IPv4/IPv6 fragmentation and IPv4/IPv6 TCP segmentation do not result in a trivial multiplication of the standard 1500 byte ethernet L3 MTU. Indeed aggregating 2 1500 L3 mtu frames results in *different* sized frames depending on which type of aggregation you do. (and for tcp it even depends on the number and size of tcp options, though it is often assumed that those take up 12 bytes, since that's the normal for Linux-to-Linux tcp connections) For example if you aggregate N standard Linux ipv6/tcp L3 1500 mtu frames, this means you have N frames: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + payload (1500-12-20-40=1500-72=1428) post aggregation: 1 frame: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + payload (N*1428) so N * 1500 == N * (72 + 1428) --> 1 * (72 + N * 1428) That value of 72 is instead 52 for 'standard Linux ipv4/tcp), it's 40/60 if there's no tcp options (which I think happens when talking to windows) it's different still with ipv4 fragmentation... and again different with ipv6 fragmentation... etc. ie. 15000 L3 mtu is exactly as meaningless as 14000 L3 mtu. Either way you don't get full frames. As such I'd recommend going with whatever is the largest mtu that can be meaningfully made to fit in 16K with all the NCM header overhead. That's likely closer to 15500-16000 (though I have *not* checked). > But my commit text didn't mention this > properly which was a mistake on my behalf. But when I looked at the > code, limiting the max segment size 15014 would force the practical > max_mtu to not cross 15000 although theoretical max_mtu was set to: > (GETHER_MAX_MTU_SIZE - 15412) during registration of net device. > > So my assumption of limiting it to 15000 was wrong. It must be limited > to 15412 as mentioned in u_ether.c This inturn means we must limit > max_segment_size to: > GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) > as mentioned in u_ether.c. > > I wanted to confirm that setting MAX_DATAGRAM_SIZE to > GETHER_MAX_ETH_FRAME_LEN was correct. > > 2. I am not actually able to test with MTU beyond 15000. When my host > device is a linux machine, the cdc_ncm.c limits max_segment_size to: > CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */ In practice you get 50% of the benefits of infinitely large mtu by going from 1500 to ~2980. you get 75% of the benefits by going to ~6K you get 87.5% of the benefits by going to ~12K the benefits of going even higher are smaller and smaller... If the host side is limited to 8192, maybe we should match that here too? But the host side limitation of 8192 doesn't seem particularly sane either... 
Maybe we should relax that instead? (especially since for things like tcp zero copy you want an mtu which is slighly more then N * 4096, ie. around 4.5KB, 8.5KB, 12.5KB or something like that) > When connected to windows machine, I am able to set the mtu to a max > value of 15000. So not sure how to test the patch if I set the > max_segment_size to GETHER_MAX_ETH_FRAME_LEN. > > By pasting the diff, I wanted to confirm both the above queries. > > And you are right, while assigning value to ecm.wMaxSegmentSize, we must > use cpu_to_le16(...). Will ensure to make this change in v2. It worked > without that too, not sure how.
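The 50% / 75% / 87.5% figures above follow if one assumes throughput is bounded mainly by a fixed per-packet cost, so the achievable gain over a 1500-byte MTU scales as 1 - (1500 / MTU); that model is an assumption made here for illustration, not something taken from the driver:

#include <stdio.h>

int main(void)
{
	const double mtus[] = { 1500, 2980, 6000, 12000, 15000 };
	unsigned int i;

	for (i = 0; i < sizeof(mtus) / sizeof(mtus[0]); i++)
		printf("mtu %5.0f -> %4.1f%% of the infinite-MTU benefit\n",
		       mtus[i], 100.0 * (1.0 - 1500.0 / mtus[i]));
	return 0;
}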
On Fri, Oct 13, 2023 at 11:39 AM Maciej Żenczykowski <maze@google.com> wrote: > > On Thu, Oct 12, 2023 at 8:40 AM Krishna Kurapati PSSNV > <quic_kriskura@quicinc.com> wrote: > > > > > > > > On 10/12/2023 6:02 PM, Maciej Żenczykowski wrote: > > > On Thu, Oct 12, 2023 at 1:48 AM Krishna Kurapati PSSNV > > > > > > Could you paste the full patch? > > > This is hard to review without looking at much more context then email > > > is providing > > > (or, even better, send me a link to a CL in gerrit somewhere - for > > > example aosp ACK mainline tree) > > > > Sure. Will provide a gerrit on ACK for review before posting v2. > > > > The intent of posting the diff was two fold: > > > > 1. The question Greg asked regarding why the max segment size was > > limited to 15014 was valid. When I thought about it, I actually wanted > > to limit the max MTU to 15000, so the max segment size automatically > > needs to be limited to 15014. > > Note that this is a *very* abstract value. > I get you want L3 MTU of 10 * 1500, but this value is not actually meaningful. > > IPv4/IPv6 fragmentation and IPv4/IPv6 TCP segmentation > do not result in a trivial multiplication of the standard 1500 byte > ethernet L3 MTU. > Indeed aggregating 2 1500 L3 mtu frames results in *different* sized > frames depending on which type of aggregation you do. > (and for tcp it even depends on the number and size of tcp options, > though it is often assumed that those take up 12 bytes, since that's the > normal for Linux-to-Linux tcp connections) > > For example if you aggregate N standard Linux ipv6/tcp L3 1500 mtu frames, > this means you have > N frames: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > payload (1500-12-20-40=1500-72=1428) > post aggregation: > 1 frame: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > payload (N*1428) > > so N * 1500 == N * (72 + 1428) --> 1 * (72 + N * 1428) As you can see, for N=10, this isn't 15000, it's 72+10*1428 = 14352 > > That value of 72 is instead 52 for 'standard Linux ipv4/tcp), > it's 40/60 if there's no tcp options (which I think happens when > talking to windows) > it's different still with ipv4 fragmentation... and again different > with ipv6 fragmentation... > etc. > > ie. 15000 L3 mtu is exactly as meaningless as 14000 L3 mtu. > Either way you don't get full frames. > > As such I'd recommend going with whatever is the largest mtu that can > be meaningfully made to fit in 16K with all the NCM header overhead. > That's likely closer to 15500-16000 (though I have *not* checked). > > > But my commit text didn't mention this > > properly which was a mistake on my behalf. But when I looked at the > > code, limiting the max segment size 15014 would force the practical > > max_mtu to not cross 15000 although theoretical max_mtu was set to: > > (GETHER_MAX_MTU_SIZE - 15412) during registration of net device. > > > > So my assumption of limiting it to 15000 was wrong. It must be limited > > to 15412 as mentioned in u_ether.c This inturn means we must limit > > max_segment_size to: > > GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) > > as mentioned in u_ether.c. > > > > I wanted to confirm that setting MAX_DATAGRAM_SIZE to > > GETHER_MAX_ETH_FRAME_LEN was correct. > > > > 2. I am not actually able to test with MTU beyond 15000. When my host > > device is a linux machine, the cdc_ncm.c limits max_segment_size to: > > CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */ > > In practice you get 50% of the benefits of infinitely large mtu by > going from 1500 to ~2980. 
> you get 75% of the benefits by going to ~6K > you get 87.5% of the benefits by going to ~12K > the benefits of going even higher are smaller and smaller... > > If the host side is limited to 8192, maybe we should match that here too? > > But the host side limitation of 8192 doesn't seem particularly sane either... > Maybe we should relax that instead? > > (especially since for things like tcp zero copy you want an mtu which > is slighly more then N * 4096, > ie. around 4.5KB, 8.5KB, 12.5KB or something like that) > > > When connected to windows machine, I am able to set the mtu to a max > > value of 15000. So not sure how to test the patch if I set the > > max_segment_size to GETHER_MAX_ETH_FRAME_LEN. > > > > By pasting the diff, I wanted to confirm both the above queries. > > > > And you are right, while assigning value to ecm.wMaxSegmentSize, we must > > use cpu_to_le16(...). Will ensure to make this change in v2. It worked > > without that too, not sure how.Maciej Żenczykowski, Kernel Networking Developer @ Google
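The aggregation arithmetic above can be checked with a short standalone program; the 72-byte figure is the per-frame IPv6 (40) + TCP (20) + TCP options (12) overhead from the example, so the aggregated L3 size for N frames is 72 + N * 1428 (14352 for N = 10), not N * 1500:

#include <stdio.h>

int main(void)
{
	const int hdr = 40 + 20 + 12;   /* ipv6 + tcp + tcp options */
	const int payload = 1500 - hdr; /* 1428 bytes per original frame */
	int n;

	for (n = 1; n <= 10; n++)
		printf("N=%2d: %5d bytes unaggregated, %5d bytes as one frame\n",
		       n, n * 1500, hdr + n * payload);
	return 0;
}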
On 10/14/2023 12:09 AM, Maciej Żenczykowski wrote: > On Thu, Oct 12, 2023 at 8:40 AM Krishna Kurapati PSSNV > <quic_kriskura@quicinc.com> wrote: >> >> >> >> On 10/12/2023 6:02 PM, Maciej Żenczykowski wrote: >>> On Thu, Oct 12, 2023 at 1:48 AM Krishna Kurapati PSSNV >>> >>> Could you paste the full patch? >>> This is hard to review without looking at much more context then email >>> is providing >>> (or, even better, send me a link to a CL in gerrit somewhere - for >>> example aosp ACK mainline tree) >> >> Sure. Will provide a gerrit on ACK for review before posting v2. >> >> The intent of posting the diff was two fold: >> >> 1. The question Greg asked regarding why the max segment size was >> limited to 15014 was valid. When I thought about it, I actually wanted >> to limit the max MTU to 15000, so the max segment size automatically >> needs to be limited to 15014. > > Note that this is a *very* abstract value. > I get you want L3 MTU of 10 * 1500, but this value is not actually meaningful. > > IPv4/IPv6 fragmentation and IPv4/IPv6 TCP segmentation > do not result in a trivial multiplication of the standard 1500 byte > ethernet L3 MTU. > Indeed aggregating 2 1500 L3 mtu frames results in *different* sized > frames depending on which type of aggregation you do. > (and for tcp it even depends on the number and size of tcp options, > though it is often assumed that those take up 12 bytes, since that's the > normal for Linux-to-Linux tcp connections) > > For example if you aggregate N standard Linux ipv6/tcp L3 1500 mtu frames, > this means you have > N frames: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > payload (1500-12-20-40=1500-72=1428) > post aggregation: > 1 frame: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > payload (N*1428) > > so N * 1500 == N * (72 + 1428) --> 1 * (72 + N * 1428) > > That value of 72 is instead 52 for 'standard Linux ipv4/tcp), > it's 40/60 if there's no tcp options (which I think happens when > talking to windows) > it's different still with ipv4 fragmentation... and again different > with ipv6 fragmentation... > etc. > > ie. 15000 L3 mtu is exactly as meaningless as 14000 L3 mtu. > Either way you don't get full frames. > > As such I'd recommend going with whatever is the largest mtu that can > be meaningfully made to fit in 16K with all the NCM header overhead. > That's likely closer to 15500-16000 (though I have *not* checked). > >> But my commit text didn't mention this >> properly which was a mistake on my behalf. But when I looked at the >> code, limiting the max segment size 15014 would force the practical >> max_mtu to not cross 15000 although theoretical max_mtu was set to: >> (GETHER_MAX_MTU_SIZE - 15412) during registration of net device. >> >> So my assumption of limiting it to 15000 was wrong. It must be limited >> to 15412 as mentioned in u_ether.c This inturn means we must limit >> max_segment_size to: >> GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) >> as mentioned in u_ether.c. >> >> I wanted to confirm that setting MAX_DATAGRAM_SIZE to >> GETHER_MAX_ETH_FRAME_LEN was correct. >> >> 2. I am not actually able to test with MTU beyond 15000. When my host >> device is a linux machine, the cdc_ncm.c limits max_segment_size to: >> CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */ > > In practice you get 50% of the benefits of infinitely large mtu by > going from 1500 to ~2980. 
> you get 75% of the benefits by going to ~6K > you get 87.5% of the benefits by going to ~12K > the benefits of going even higher are smaller and smaller... > > If the host side is limited to 8192, maybe we should match that here too? Hi Maciej, Thanks for the detailed explanation. I agree with you on setting device side also to 8192 instead of what max_mtu is present in u_ether or practical max segment size possible. > > But the host side limitation of 8192 doesn't seem particularly sane either... > Maybe we should relax that instead? > I really didn't understand why it was set to 8192 in first place. > (especially since for things like tcp zero copy you want an mtu which > is slighly more then N * 4096, > ie. around 4.5KB, 8.5KB, 12.5KB or something like that) > I am not sure about host mode completely. If we want to increase though, just increasing the MAX_DATAGRAM_SIZE to some bigger value help ? (I don't know the entire code of cdc_ncm, so I might be wrong). Regards, Krishna,
On Fri, Oct 13, 2023 at 12:58 PM Krishna Kurapati PSSNV <quic_kriskura@quicinc.com> wrote: > > > > On 10/14/2023 12:09 AM, Maciej Żenczykowski wrote: > > On Thu, Oct 12, 2023 at 8:40 AM Krishna Kurapati PSSNV > > <quic_kriskura@quicinc.com> wrote: > >> > >> > >> > >> On 10/12/2023 6:02 PM, Maciej Żenczykowski wrote: > >>> On Thu, Oct 12, 2023 at 1:48 AM Krishna Kurapati PSSNV > >>> > >>> Could you paste the full patch? > >>> This is hard to review without looking at much more context then email > >>> is providing > >>> (or, even better, send me a link to a CL in gerrit somewhere - for > >>> example aosp ACK mainline tree) > >> > >> Sure. Will provide a gerrit on ACK for review before posting v2. > >> > >> The intent of posting the diff was two fold: > >> > >> 1. The question Greg asked regarding why the max segment size was > >> limited to 15014 was valid. When I thought about it, I actually wanted > >> to limit the max MTU to 15000, so the max segment size automatically > >> needs to be limited to 15014. > > > > Note that this is a *very* abstract value. > > I get you want L3 MTU of 10 * 1500, but this value is not actually meaningful. > > > > IPv4/IPv6 fragmentation and IPv4/IPv6 TCP segmentation > > do not result in a trivial multiplication of the standard 1500 byte > > ethernet L3 MTU. > > Indeed aggregating 2 1500 L3 mtu frames results in *different* sized > > frames depending on which type of aggregation you do. > > (and for tcp it even depends on the number and size of tcp options, > > though it is often assumed that those take up 12 bytes, since that's the > > normal for Linux-to-Linux tcp connections) > > > > For example if you aggregate N standard Linux ipv6/tcp L3 1500 mtu frames, > > this means you have > > N frames: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > > payload (1500-12-20-40=1500-72=1428) > > post aggregation: > > 1 frame: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > > payload (N*1428) > > > > so N * 1500 == N * (72 + 1428) --> 1 * (72 + N * 1428) > > > > That value of 72 is instead 52 for 'standard Linux ipv4/tcp), > > it's 40/60 if there's no tcp options (which I think happens when > > talking to windows) > > it's different still with ipv4 fragmentation... and again different > > with ipv6 fragmentation... > > etc. > > > > ie. 15000 L3 mtu is exactly as meaningless as 14000 L3 mtu. > > Either way you don't get full frames. > > > > As such I'd recommend going with whatever is the largest mtu that can > > be meaningfully made to fit in 16K with all the NCM header overhead. > > That's likely closer to 15500-16000 (though I have *not* checked). > > > >> But my commit text didn't mention this > >> properly which was a mistake on my behalf. But when I looked at the > >> code, limiting the max segment size 15014 would force the practical > >> max_mtu to not cross 15000 although theoretical max_mtu was set to: > >> (GETHER_MAX_MTU_SIZE - 15412) during registration of net device. > >> > >> So my assumption of limiting it to 15000 was wrong. It must be limited > >> to 15412 as mentioned in u_ether.c This inturn means we must limit > >> max_segment_size to: > >> GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) > >> as mentioned in u_ether.c. > >> > >> I wanted to confirm that setting MAX_DATAGRAM_SIZE to > >> GETHER_MAX_ETH_FRAME_LEN was correct. > >> > >> 2. I am not actually able to test with MTU beyond 15000. 
When my host > >> device is a linux machine, the cdc_ncm.c limits max_segment_size to: > >> CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */ > > > > In practice you get 50% of the benefits of infinitely large mtu by > > going from 1500 to ~2980. > > you get 75% of the benefits by going to ~6K > > you get 87.5% of the benefits by going to ~12K > > the benefits of going even higher are smaller and smaller... > > > If the host side is limited to 8192, maybe we should match that here too? > > Hi Maciej, > > Thanks for the detailed explanation. I agree with you on setting > device side also to 8192 instead of what max_mtu is present in u_ether > or practical max segment size possible. > > > > > But the host side limitation of 8192 doesn't seem particularly sane either... > > Maybe we should relax that instead? > > > I really didn't understand why it was set to 8192 in first place. > > > (especially since for things like tcp zero copy you want an mtu which > > is slighly more then N * 4096, > > ie. around 4.5KB, 8.5KB, 12.5KB or something like that) > > > > I am not sure about host mode completely. If we want to increase though, > just increasing the MAX_DATAGRAM_SIZE to some bigger value help ? (I > don't know the entire code of cdc_ncm, so I might be wrong). > > Regards, > Krishna, Hmm, I'm not sure. I know I've experimented with high mtu ncm in the past (around 2.5 years ago). I got it working between my Linux desktop (host) and a Pixel 6 (device/gadget) with absolutely no problems. I'm pretty sure I didn't change my desktop kernel, so I was probably limited to 8192 there (and I do more or less remember that). From what I vaguely remember, it wasn't difficult (at all) to hit upwards of 7gbps for iperf tests. I don't remember how close to the theoretical USB 10gbps maximum of 9.7gbps I could get... [this was never the real bottleneck / issue, so I didn't ever dig particularly deep] I'm pretty sure my gadget side changes were non-configurable... Probably just bumped one or two constants... I do *very* *vaguely* recall there being some funkiness though, where 8192 was *less* efficient than some slightly smaller value. If I recall correctly the issue is that 8192 + ethernet overhead + NCM overhead only fits *once* into 16384, which leaves a lot of space wasted. While ~7.5 kb + overhead fits twice and is thus a fair bit better. I don't remember if I found a way to boost the 16384 to double or triple that. That should have been a win, I can't remember if we were usb3 spec limitted there.
On 10/14/2023 4:05 AM, Maciej Żenczykowski wrote: >>>> The intent of posting the diff was two fold: >>>> >>>> 1. The question Greg asked regarding why the max segment size was >>>> limited to 15014 was valid. When I thought about it, I actually wanted >>>> to limit the max MTU to 15000, so the max segment size automatically >>>> needs to be limited to 15014. >>> >>> Note that this is a *very* abstract value. >>> I get you want L3 MTU of 10 * 1500, but this value is not actually meaningful. >>> >>> IPv4/IPv6 fragmentation and IPv4/IPv6 TCP segmentation >>> do not result in a trivial multiplication of the standard 1500 byte >>> ethernet L3 MTU. >>> Indeed aggregating 2 1500 L3 mtu frames results in *different* sized >>> frames depending on which type of aggregation you do. >>> (and for tcp it even depends on the number and size of tcp options, >>> though it is often assumed that those take up 12 bytes, since that's the >>> normal for Linux-to-Linux tcp connections) >>> >>> For example if you aggregate N standard Linux ipv6/tcp L3 1500 mtu frames, >>> this means you have >>> N frames: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + >>> payload (1500-12-20-40=1500-72=1428) >>> post aggregation: >>> 1 frame: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + >>> payload (N*1428) >>> >>> so N * 1500 == N * (72 + 1428) --> 1 * (72 + N * 1428) >>> >>> That value of 72 is instead 52 for 'standard Linux ipv4/tcp), >>> it's 40/60 if there's no tcp options (which I think happens when >>> talking to windows) >>> it's different still with ipv4 fragmentation... and again different >>> with ipv6 fragmentation... >>> etc. >>> >>> ie. 15000 L3 mtu is exactly as meaningless as 14000 L3 mtu. >>> Either way you don't get full frames. >>> >>> As such I'd recommend going with whatever is the largest mtu that can >>> be meaningfully made to fit in 16K with all the NCM header overhead. >>> That's likely closer to 15500-16000 (though I have *not* checked). >>> >>>> But my commit text didn't mention this >>>> properly which was a mistake on my behalf. But when I looked at the >>>> code, limiting the max segment size 15014 would force the practical >>>> max_mtu to not cross 15000 although theoretical max_mtu was set to: >>>> (GETHER_MAX_MTU_SIZE - 15412) during registration of net device. >>>> >>>> So my assumption of limiting it to 15000 was wrong. It must be limited >>>> to 15412 as mentioned in u_ether.c This inturn means we must limit >>>> max_segment_size to: >>>> GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) >>>> as mentioned in u_ether.c. >>>> >>>> I wanted to confirm that setting MAX_DATAGRAM_SIZE to >>>> GETHER_MAX_ETH_FRAME_LEN was correct. >>>> >>>> 2. I am not actually able to test with MTU beyond 15000. When my host >>>> device is a linux machine, the cdc_ncm.c limits max_segment_size to: >>>> CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */ >>> >>> In practice you get 50% of the benefits of infinitely large mtu by >>> going from 1500 to ~2980. >>> you get 75% of the benefits by going to ~6K >>> you get 87.5% of the benefits by going to ~12K >>> the benefits of going even higher are smaller and smaller... >>> > If the host side is limited to 8192, maybe we should match that here too? >> >> Hi Maciej, >> >> Thanks for the detailed explanation. I agree with you on setting >> device side also to 8192 instead of what max_mtu is present in u_ether >> or practical max segment size possible. >> >>> >>> But the host side limitation of 8192 doesn't seem particularly sane either... 
>>> Maybe we should relax that instead? >>> >> I really didn't understand why it was set to 8192 in first place. >> >>> (especially since for things like tcp zero copy you want an mtu which >>> is slighly more then N * 4096, >>> ie. around 4.5KB, 8.5KB, 12.5KB or something like that) >>> >> >> I am not sure about host mode completely. If we want to increase though, >> just increasing the MAX_DATAGRAM_SIZE to some bigger value help ? (I >> don't know the entire code of cdc_ncm, so I might be wrong). >> >> Regards, >> Krishna, > > Hmm, I'm not sure. I know I've experimented with high mtu ncm in the past > (around 2.5 years ago). I got it working between my Linux desktop (host) > and a Pixel 6 (device/gadget) with absolutely no problems. > > I'm pretty sure I didn't change my desktop kernel, so I was probably > limited to 8192 there > (and I do more or less remember that). > From what I vaguely remember, it wasn't difficult (at all) to hit > upwards of 7gbps for iperf tests. > I don't remember how close to the theoretical USB 10gbps maximum of > 9.7gbps I could get... > [this was never the real bottleneck / issue, so I didn't ever dig > particularly deep] > > I'm pretty sure my gadget side changes were non-configurable... > Probably just bumped one or two constants... > Could you share what parameters you changed to get this high value of iperf throughput. > I do *very* *vaguely* recall there being some funkiness though, where 8192 was > *less* efficient than some slightly smaller value. > > If I recall correctly the issue is that 8192 + ethernet overhead + NCM > overhead only fits *once* into 16384, which leaves a lot of space > wasted. > While ~7.5 kb + overhead fits twice and is thus a fair bit better. >Right, same goes for using 5K vs 5.5K MTU. If MTU is 5K, 3 packets can conveniently fit into an NTB but if its 5.5, at max only two (5.5k) packets can fit in (essentially filling ~11k of the 16384 bytes and wasting the rest) And whether its Ipv4/Ipv6 like you mentioned on [1], the MTU is what NCM layer receives and we append the Ethernet header and add NCM headers and send it out after aggregation. Why can't we set the MAX_DATAGRAM_SIZE to ~8050 or ~8100 ? The reason I say this value is, obviously setting it to 8192 would not efficiently use the NTB buffer. We need to fill as much space in buffer as possible and assuming that each packet received on ncm layer is of MTU size set (not less that that), we can assume that even if only 2 packets are aggregated (minimum aggregation possible), we would be filling (2 * (8050 + ETH_HLEN) + (room for NCM headers)) would almost be close to 16384 ep max packet size. I already check 8050 MTU and it works. We can add a comment in code detailing the above explanation and why we chose to use 8050 or 8100 as MAX_DATAGRAM_SIZE. Hope my reasoning of why we can chose 8.1K or 8.05K makes sense. Let me know your thoughts on this. Regards, Krishna,
On 10/14/2023 12:32 PM, Krishna Kurapati PSSNV wrote: > > > On 10/14/2023 4:05 AM, Maciej Żenczykowski wrote: >>>>> The intent of posting the diff was two fold: >>>>> >>>>> 1. The question Greg asked regarding why the max segment size was >>>>> limited to 15014 was valid. When I thought about it, I actually wanted >>>>> to limit the max MTU to 15000, so the max segment size automatically >>>>> needs to be limited to 15014. >>>> >>>> Note that this is a *very* abstract value. >>>> I get you want L3 MTU of 10 * 1500, but this value is not actually >>>> meaningful. >>>> >>>> IPv4/IPv6 fragmentation and IPv4/IPv6 TCP segmentation >>>> do not result in a trivial multiplication of the standard 1500 byte >>>> ethernet L3 MTU. >>>> Indeed aggregating 2 1500 L3 mtu frames results in *different* sized >>>> frames depending on which type of aggregation you do. >>>> (and for tcp it even depends on the number and size of tcp options, >>>> though it is often assumed that those take up 12 bytes, since that's >>>> the >>>> normal for Linux-to-Linux tcp connections) >>>> >>>> For example if you aggregate N standard Linux ipv6/tcp L3 1500 mtu >>>> frames, >>>> this means you have >>>> N frames: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + >>>> payload (1500-12-20-40=1500-72=1428) >>>> post aggregation: >>>> 1 frame: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + >>>> payload (N*1428) >>>> >>>> so N * 1500 == N * (72 + 1428) --> 1 * (72 + N * 1428) >>>> >>>> That value of 72 is instead 52 for 'standard Linux ipv4/tcp), >>>> it's 40/60 if there's no tcp options (which I think happens when >>>> talking to windows) >>>> it's different still with ipv4 fragmentation... and again different >>>> with ipv6 fragmentation... >>>> etc. >>>> >>>> ie. 15000 L3 mtu is exactly as meaningless as 14000 L3 mtu. >>>> Either way you don't get full frames. >>>> >>>> As such I'd recommend going with whatever is the largest mtu that can >>>> be meaningfully made to fit in 16K with all the NCM header overhead. >>>> That's likely closer to 15500-16000 (though I have *not* checked). >>>> >>>>> But my commit text didn't mention this >>>>> properly which was a mistake on my behalf. But when I looked at the >>>>> code, limiting the max segment size 15014 would force the practical >>>>> max_mtu to not cross 15000 although theoretical max_mtu was set to: >>>>> (GETHER_MAX_MTU_SIZE - 15412) during registration of net device. >>>>> >>>>> So my assumption of limiting it to 15000 was wrong. It must be limited >>>>> to 15412 as mentioned in u_ether.c This inturn means we must limit >>>>> max_segment_size to: >>>>> GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) >>>>> as mentioned in u_ether.c. >>>>> >>>>> I wanted to confirm that setting MAX_DATAGRAM_SIZE to >>>>> GETHER_MAX_ETH_FRAME_LEN was correct. >>>>> >>>>> 2. I am not actually able to test with MTU beyond 15000. When my host >>>>> device is a linux machine, the cdc_ncm.c limits max_segment_size to: >>>>> CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */ >>>> >>>> In practice you get 50% of the benefits of infinitely large mtu by >>>> going from 1500 to ~2980. >>>> you get 75% of the benefits by going to ~6K >>>> you get 87.5% of the benefits by going to ~12K >>>> the benefits of going even higher are smaller and smaller... >>>> > If the host side is limited to 8192, maybe we should match that >>>> here too? >>> >>> Hi Maciej, >>> >>> Thanks for the detailed explanation. 
I agree with you on setting >>> device side also to 8192 instead of what max_mtu is present in u_ether >>> or practical max segment size possible. >>> >>>> >>>> But the host side limitation of 8192 doesn't seem particularly sane >>>> either... >>>> Maybe we should relax that instead? >>>> >>> I really didn't understand why it was set to 8192 in first place. >>> >>>> (especially since for things like tcp zero copy you want an mtu which >>>> is slighly more then N * 4096, >>>> ie. around 4.5KB, 8.5KB, 12.5KB or something like that) >>>> >>> >>> I am not sure about host mode completely. If we want to increase though, >>> just increasing the MAX_DATAGRAM_SIZE to some bigger value help ? (I >>> don't know the entire code of cdc_ncm, so I might be wrong). >>> >>> Regards, >>> Krishna, >> >> Hmm, I'm not sure. I know I've experimented with high mtu ncm in the >> past >> (around 2.5 years ago). I got it working between my Linux desktop (host) >> and a Pixel 6 (device/gadget) with absolutely no problems. >> >> I'm pretty sure I didn't change my desktop kernel, so I was probably >> limited to 8192 there >> (and I do more or less remember that). >> From what I vaguely remember, it wasn't difficult (at all) to hit >> upwards of 7gbps for iperf tests. >> I don't remember how close to the theoretical USB 10gbps maximum of >> 9.7gbps I could get... >> [this was never the real bottleneck / issue, so I didn't ever dig >> particularly deep] >> >> I'm pretty sure my gadget side changes were non-configurable... >> Probably just bumped one or two constants... >> > Could you share what parameters you changed to get this high value of > iperf throughput. > >> I do *very* *vaguely* recall there being some funkiness though, where >> 8192 was >> *less* efficient than some slightly smaller value. >> >> If I recall correctly the issue is that 8192 + ethernet overhead + NCM >> overhead only fits *once* into 16384, which leaves a lot of space >> wasted. >> While ~7.5 kb + overhead fits twice and is thus a fair bit better. > Right, same goes for using 5K vs 5.5K MTU. If MTU is 5K, 3 packets can > conveniently fit into an NTB but if its 5.5, at max only two (5.5k) > packets can fit in (essentially filling ~11k of the 16384 bytes and > wasting the rest) Formatting gone wrong. So pasting the first paragraph again here: "Right, same goes for using 5K vs 5.5K MTU. If MTU is 5K, 3 packets can conveniently fit into an NTB but if its 5.5, at max only two (5.5k) packets can fit in (essentially filling ~11k of the 16384 bytes and wasting the rest)" > > And whether its Ipv4/Ipv6 like you mentioned on [1], the MTU is what NCM > layer receives and we append the Ethernet header and add NCM headers and > send it out after aggregation. Why can't we set the MAX_DATAGRAM_SIZE to > ~8050 or ~8100 ? The reason I say this value is, obviously setting it to > 8192 would not efficiently use the NTB buffer. We need to fill as much > space in buffer as possible and assuming that each packet received on > ncm layer is of MTU size set (not less that that), we can assume that > even if only 2 packets are aggregated (minimum aggregation possible), we > would be filling (2 * (8050 + ETH_HLEN) + (room for NCM headers)) would > almost be close to 16384 ep max packet size. I already check 8050 MTU > and it works. We can add a comment in code detailing the above > explanation and why we chose to use 8050 or 8100 as MAX_DATAGRAM_SIZE. > > Hope my reasoning of why we can chose 8.1K or 8.05K makes sense. Let me > know your thoughts on this. 
> [1]: https://lore.kernel.org/all/CANP3RGd4G4dkMOyg6wSX29NYP2mp=LhMhmZpoG=rgoCz=bh1=w@mail.gmail.com/ > Regards, > Krishna, >
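The NTB packing trade-off described in the last two messages can be sanity-checked with a short program; the ~100-byte allowance for NTH/NDP headers, per-datagram index entries and alignment padding is an assumption for illustration, not the exact overhead computed by f_ncm.c:

#include <stdio.h>

int main(void)
{
	const int ntb = 16384;          /* NTB In/Out buffer size */
	const int eth_hlen = 14;
	const int ncm_overhead = 100;   /* assumed NTH/NDP + padding budget */
	const int mtus[] = { 1500, 5000, 5500, 7500, 8000, 8050, 8192 };
	unsigned int i;

	for (i = 0; i < sizeof(mtus) / sizeof(mtus[0]); i++)
		printf("mtu %5d -> %d full frame(s) per NTB\n", mtus[i],
		       (ntb - ncm_overhead) / (mtus[i] + eth_hlen));
	return 0;
}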
On Sat, Oct 14, 2023 at 1:24 AM Krishna Kurapati PSSNV <quic_kriskura@quicinc.com> wrote: > > > > On 10/14/2023 12:32 PM, Krishna Kurapati PSSNV wrote: > > > > > > On 10/14/2023 4:05 AM, Maciej Żenczykowski wrote: > >>>>> The intent of posting the diff was two fold: > >>>>> > >>>>> 1. The question Greg asked regarding why the max segment size was > >>>>> limited to 15014 was valid. When I thought about it, I actually wanted > >>>>> to limit the max MTU to 15000, so the max segment size automatically > >>>>> needs to be limited to 15014. > >>>> > >>>> Note that this is a *very* abstract value. > >>>> I get you want L3 MTU of 10 * 1500, but this value is not actually > >>>> meaningful. > >>>> > >>>> IPv4/IPv6 fragmentation and IPv4/IPv6 TCP segmentation > >>>> do not result in a trivial multiplication of the standard 1500 byte > >>>> ethernet L3 MTU. > >>>> Indeed aggregating 2 1500 L3 mtu frames results in *different* sized > >>>> frames depending on which type of aggregation you do. > >>>> (and for tcp it even depends on the number and size of tcp options, > >>>> though it is often assumed that those take up 12 bytes, since that's > >>>> the > >>>> normal for Linux-to-Linux tcp connections) > >>>> > >>>> For example if you aggregate N standard Linux ipv6/tcp L3 1500 mtu > >>>> frames, > >>>> this means you have > >>>> N frames: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > >>>> payload (1500-12-20-40=1500-72=1428) > >>>> post aggregation: > >>>> 1 frame: ethernet (14) + ipv6 (40) + tcp (20) + tcp options (12) + > >>>> payload (N*1428) > >>>> > >>>> so N * 1500 == N * (72 + 1428) --> 1 * (72 + N * 1428) > >>>> > >>>> That value of 72 is instead 52 for 'standard Linux ipv4/tcp), > >>>> it's 40/60 if there's no tcp options (which I think happens when > >>>> talking to windows) > >>>> it's different still with ipv4 fragmentation... and again different > >>>> with ipv6 fragmentation... > >>>> etc. > >>>> > >>>> ie. 15000 L3 mtu is exactly as meaningless as 14000 L3 mtu. > >>>> Either way you don't get full frames. > >>>> > >>>> As such I'd recommend going with whatever is the largest mtu that can > >>>> be meaningfully made to fit in 16K with all the NCM header overhead. > >>>> That's likely closer to 15500-16000 (though I have *not* checked). > >>>> > >>>>> But my commit text didn't mention this > >>>>> properly which was a mistake on my behalf. But when I looked at the > >>>>> code, limiting the max segment size 15014 would force the practical > >>>>> max_mtu to not cross 15000 although theoretical max_mtu was set to: > >>>>> (GETHER_MAX_MTU_SIZE - 15412) during registration of net device. > >>>>> > >>>>> So my assumption of limiting it to 15000 was wrong. It must be limited > >>>>> to 15412 as mentioned in u_ether.c This inturn means we must limit > >>>>> max_segment_size to: > >>>>> GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) > >>>>> as mentioned in u_ether.c. > >>>>> > >>>>> I wanted to confirm that setting MAX_DATAGRAM_SIZE to > >>>>> GETHER_MAX_ETH_FRAME_LEN was correct. > >>>>> > >>>>> 2. I am not actually able to test with MTU beyond 15000. When my host > >>>>> device is a linux machine, the cdc_ncm.c limits max_segment_size to: > >>>>> CDC_NCM_MAX_DATAGRAM_SIZE 8192 /* bytes */ > >>>> > >>>> In practice you get 50% of the benefits of infinitely large mtu by > >>>> going from 1500 to ~2980. 
> >>>> you get 75% of the benefits by going to ~6K > >>>> you get 87.5% of the benefits by going to ~12K > >>>> the benefits of going even higher are smaller and smaller... > >>>> > If the host side is limited to 8192, maybe we should match that > >>>> here too? > >>> > >>> Hi Maciej, > >>> > >>> Thanks for the detailed explanation. I agree with you on setting > >>> device side also to 8192 instead of what max_mtu is present in u_ether > >>> or practical max segment size possible. > >>> > >>>> > >>>> But the host side limitation of 8192 doesn't seem particularly sane > >>>> either... > >>>> Maybe we should relax that instead? > >>>> > >>> I really didn't understand why it was set to 8192 in first place. > >>> > >>>> (especially since for things like tcp zero copy you want an mtu which > >>>> is slighly more then N * 4096, > >>>> ie. around 4.5KB, 8.5KB, 12.5KB or something like that) > >>>> > >>> > >>> I am not sure about host mode completely. If we want to increase though, > >>> just increasing the MAX_DATAGRAM_SIZE to some bigger value help ? (I > >>> don't know the entire code of cdc_ncm, so I might be wrong). > >>> > >>> Regards, > >>> Krishna, > >> > >> Hmm, I'm not sure. I know I've experimented with high mtu ncm in the > >> past > >> (around 2.5 years ago). I got it working between my Linux desktop (host) > >> and a Pixel 6 (device/gadget) with absolutely no problems. > >> > >> I'm pretty sure I didn't change my desktop kernel, so I was probably > >> limited to 8192 there > >> (and I do more or less remember that). > >> From what I vaguely remember, it wasn't difficult (at all) to hit > >> upwards of 7gbps for iperf tests. > >> I don't remember how close to the theoretical USB 10gbps maximum of > >> 9.7gbps I could get... > >> [this was never the real bottleneck / issue, so I didn't ever dig > >> particularly deep] > >> > >> I'm pretty sure my gadget side changes were non-configurable... > >> Probably just bumped one or two constants... > >> > > Could you share what parameters you changed to get this high value of > > iperf throughput. Eh, I really don't remember, but it wasn't anything earth shattering. From what I recall it was just a matter of bumping mtu, and tweaking irq pinning to stronger cores. Indeed I'm not even certain that the mtu was required to be over 5gbps. Though I may be confusing some things, as at least some of the testing was done with the kernel's built in packet generator. > > > >> I do *very* *vaguely* recall there being some funkiness though, where > >> 8192 was > >> *less* efficient than some slightly smaller value. > >> > >> If I recall correctly the issue is that 8192 + ethernet overhead + NCM > >> overhead only fits *once* into 16384, which leaves a lot of space > >> wasted. > >> While ~7.5 kb + overhead fits twice and is thus a fair bit better. > > Right, same goes for using 5K vs 5.5K MTU. If MTU is 5K, 3 packets can > > conveniently fit into an NTB but if its 5.5, at max only two (5.5k) > > packets can fit in (essentially filling ~11k of the 16384 bytes and > > wasting the rest) > > Formatting gone wrong. So pasting the first paragraph again here: > > "Right, same goes for using 5K vs 5.5K MTU. 
If MTU is 5K, 3 packets can > conveniently fit into an NTB but if its 5.5, at max only two (5.5k) > packets can fit in (essentially filling ~11k of the 16384 bytes and > wasting the rest)" > > > > > And whether its Ipv4/Ipv6 like you mentioned on [1], the MTU is what NCM > > layer receives and we append the Ethernet header and add NCM headers and > > send it out after aggregation. Why can't we set the MAX_DATAGRAM_SIZE to > > ~8050 or ~8100 ? The reason I say this value is, obviously setting it to > > 8192 would not efficiently use the NTB buffer. We need to fill as much > > space in buffer as possible and assuming that each packet received on > > ncm layer is of MTU size set (not less that that), we can assume that > > even if only 2 packets are aggregated (minimum aggregation possible), we > > would be filling (2 * (8050 + ETH_HLEN) + (room for NCM headers)) would > > almost be close to 16384 ep max packet size. I already check 8050 MTU > > and it works. We can add a comment in code detailing the above > > explanation and why we chose to use 8050 or 8100 as MAX_DATAGRAM_SIZE. > > > > Hope my reasoning of why we can chose 8.1K or 8.05K makes sense. Let me > > know your thoughts on this. Maybe just use an L3 mtu of 8000 then? That's a nice round number... But I'm also fine with 8050 or 8100.. though 8100 seems 'rounder'. I'm not sure what the actual overhead is... I guess we control the overhead in one direction, but not in the other, and there could be some slop, so we need to be a little generous? > > > > [1]: > https://lore.kernel.org/all/CANP3RGd4G4dkMOyg6wSX29NYP2mp=LhMhmZpoG=rgoCz=bh1=w@mail.gmail.com/ > > > Regards, > > Krishna, > >Maciej Żenczykowski, Kernel Networking Developer @ Google
On 10/16/2023 6:49 AM, Maciej Żenczykowski wrote: >>>> >>>> Hmm, I'm not sure. I know I've experimented with high mtu ncm in the >>>> past >>>> (around 2.5 years ago). I got it working between my Linux desktop (host) >>>> and a Pixel 6 (device/gadget) with absolutely no problems. >>>> >>>> I'm pretty sure I didn't change my desktop kernel, so I was probably >>>> limited to 8192 there >>>> (and I do more or less remember that). >>>> From what I vaguely remember, it wasn't difficult (at all) to hit >>>> upwards of 7gbps for iperf tests. >>>> I don't remember how close to the theoretical USB 10gbps maximum of >>>> 9.7gbps I could get... >>>> [this was never the real bottleneck / issue, so I didn't ever dig >>>> particularly deep] >>>> >>>> I'm pretty sure my gadget side changes were non-configurable... >>>> Probably just bumped one or two constants... >>>> >>> Could you share what parameters you changed to get this high value of >>> iperf throughput. > > Eh, I really don't remember, but it wasn't anything earth shattering. > From what I recall it was just a matter of bumping mtu, and tweaking > irq pinning to stronger cores. > Indeed I'm not even certain that the mtu was required to be over 5gbps. > Though I may be confusing some things, as at least some of the testing was done > with the kernel's built in packet generator. > >>> >>>> I do *very* *vaguely* recall there being some funkiness though, where >>>> 8192 was >>>> *less* efficient than some slightly smaller value. >>>> >>>> If I recall correctly the issue is that 8192 + ethernet overhead + NCM >>>> overhead only fits *once* into 16384, which leaves a lot of space >>>> wasted. >>>> While ~7.5 kb + overhead fits twice and is thus a fair bit better. >>> Right, same goes for using 5K vs 5.5K MTU. If MTU is 5K, 3 packets can >>> conveniently fit into an NTB but if its 5.5, at max only two (5.5k) >>> packets can fit in (essentially filling ~11k of the 16384 bytes and >>> wasting the rest) >> >> Formatting gone wrong. So pasting the first paragraph again here: >> >> "Right, same goes for using 5K vs 5.5K MTU. If MTU is 5K, 3 packets can >> conveniently fit into an NTB but if its 5.5, at max only two (5.5k) >> packets can fit in (essentially filling ~11k of the 16384 bytes and >> wasting the rest)" >> >>> >>> And whether its Ipv4/Ipv6 like you mentioned on [1], the MTU is what NCM >>> layer receives and we append the Ethernet header and add NCM headers and >>> send it out after aggregation. Why can't we set the MAX_DATAGRAM_SIZE to >>> ~8050 or ~8100 ? The reason I say this value is, obviously setting it to >>> 8192 would not efficiently use the NTB buffer. We need to fill as much >>> space in buffer as possible and assuming that each packet received on >>> ncm layer is of MTU size set (not less that that), we can assume that >>> even if only 2 packets are aggregated (minimum aggregation possible), we >>> would be filling (2 * (8050 + ETH_HLEN) + (room for NCM headers)) would >>> almost be close to 16384 ep max packet size. I already check 8050 MTU >>> and it works. We can add a comment in code detailing the above >>> explanation and why we chose to use 8050 or 8100 as MAX_DATAGRAM_SIZE. >>> >>> Hope my reasoning of why we can chose 8.1K or 8.05K makes sense. Let me >>> know your thoughts on this. > > Maybe just use an L3 mtu of 8000 then? That's a nice round number... > But I'm also fine with 8050 or 8100.. though 8100 seems 'rounder'. > > I'm not sure what the actual overhead is... 
I guess we control the > overhead in one direction, but not in the other, and there could be > some slop, so we need to be a little generous? > >>> Hi Maciej, Sure. Let's go with 8000 to leave some space for headers. I would add the following paragraph as a comment so readers understand why this value was chosen: "Although the max MTU as dictated by u_ether is 15412 bytes, setting max_segment_size to 15426 would not be efficient. If the user chooses a segment size greater than 8192, then we can't aggregate more than one datagram in each NTB (assuming each packet coming from the network layer is of that full size), since the NTB In/Out buffer size is limited to 16384 bytes. So limit max_segment_size to 8000 to allow at least 2 packets to be aggregated, reducing wastage of NTB buffer space." Hope that would be fine. Regards, Krishna,
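Based on the conclusion above, the v2 cap might look roughly like the following (an illustration of the agreed direction, not the accepted patch): the define caps the configurable wMaxSegmentSize so that at least two max-sized Ethernet frames plus NCM framing fit into one 16384-byte NTB, and the configfs store keeps the bound check already present in the RFC diff.

/*
 * Cap the configurable wMaxSegmentSize so that at least two MTU-sized
 * Ethernet frames plus NCM framing fit into one 16384-byte NTB, instead
 * of deriving the cap from u_ether's 15412-byte theoretical max MTU.
 */
#define MAX_DATAGRAM_SIZE	8000

	/* in ncm_opts_max_segment_size_store(), as in the RFC diff */
	if (segment_size > MAX_DATAGRAM_SIZE) {
		ret = -EINVAL;
		goto out;
	}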
diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c index feccf4c8cc4f..eab297b22200 100644 --- a/drivers/usb/gadget/function/f_ncm.c +++ b/drivers/usb/gadget/function/f_ncm.c @@ -103,6 +103,8 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f) /* Delay for the transmit to wait before sending an unfilled NTB frame. */ #define TX_TIMEOUT_NSECS 300000 +#define MAX_DATAGRAM_SIZE 15014 + #define FORMATS_SUPPORTED (USB_CDC_NCM_NTB16_SUPPORTED | \ USB_CDC_NCM_NTB32_SUPPORTED) @@ -1408,6 +1410,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst); if (cdev->use_os_string) { + ncm_opts->net->mtu = (ncm_opts->max_segment_size - ETH_HLEN); f->os_desc_table = kzalloc(sizeof(*f->os_desc_table), GFP_KERNEL); if (!f->os_desc_table) @@ -1469,6 +1472,8 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f) status = -ENODEV; + ecm_desc.wMaxSegmentSize = ncm_opts->max_segment_size; + /* allocate instance-specific endpoints */ ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc); if (!ep) @@ -1569,11 +1574,56 @@ USB_ETHERNET_CONFIGFS_ITEM_ATTR_QMULT(ncm); /* f_ncm_opts_ifname */ USB_ETHERNET_CONFIGFS_ITEM_ATTR_IFNAME(ncm); +static ssize_t ncm_opts_max_segment_size_show(struct config_item *item, + char *page) +{ + struct f_ncm_opts *opts = to_f_ncm_opts(item); + u32 segment_size; + + mutex_lock(&opts->lock); + segment_size = opts->max_segment_size; + mutex_unlock(&opts->lock); + + return sprintf(page, "%u\n", segment_size); +} + +static ssize_t ncm_opts_max_segment_size_store(struct config_item *item, + const char *page, size_t len) +{ + struct f_ncm_opts *opts = to_f_ncm_opts(item); + int ret; + u32 segment_size; + + mutex_lock(&opts->lock); + if (opts->refcnt) { + ret = -EBUSY; + goto out; + } + + ret = kstrtou32(page, 0, &segment_size); + if (ret) + goto out; + + if (segment_size > MAX_DATAGRAM_SIZE) { + ret = -EINVAL; + goto out; + } + + opts->max_segment_size = segment_size; + ret = len; +out: + mutex_unlock(&opts->lock); + return ret; +} + +CONFIGFS_ATTR(ncm_opts_, max_segment_size); + static struct configfs_attribute *ncm_attrs[] = { &ncm_opts_attr_dev_addr, &ncm_opts_attr_host_addr, &ncm_opts_attr_qmult, &ncm_opts_attr_ifname, + &ncm_opts_attr_max_segment_size, NULL, }; @@ -1616,6 +1666,7 @@ static struct usb_function_instance *ncm_alloc_inst(void) kfree(opts); return ERR_CAST(net); } + opts->max_segment_size = cpu_to_le16(ETH_FRAME_LEN); INIT_LIST_HEAD(&opts->ncm_os_desc.ext_prop); descs[0] = &opts->ncm_os_desc; diff --git a/drivers/usb/gadget/function/u_ncm.h b/drivers/usb/gadget/function/u_ncm.h index 5408854d8407..d3403cf13f17 100644 --- a/drivers/usb/gadget/function/u_ncm.h +++ b/drivers/usb/gadget/function/u_ncm.h @@ -31,6 +31,8 @@ struct f_ncm_opts { */ struct mutex lock; int refcnt; + + u32 max_segment_size; }; #endif /* U_NCM_H */
Currently the NCM driver restricts wMaxSegmentSize that indicates the datagram size coming from network layer to 1514. However the spec doesn't have any limitation. For P2P connections over NCM, increasing MTU helps increasing throughput. Add support to configure this value before configfs symlink is created. Also since the NTB Out/In buffer sizes are fixed at 16384 bytes, limit the segment size to an upper cap of 15014. Set the default MTU size for the ncm interface during function bind before network interface is registered allowing MTU to be set in parity with wMaxSegmentSize. Signed-off-by: Krishna Kurapati <quic_kriskura@quicinc.com> --- drivers/usb/gadget/function/f_ncm.c | 51 +++++++++++++++++++++++++++++ drivers/usb/gadget/function/u_ncm.h | 2 ++ 2 files changed, 53 insertions(+)
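The "configure this value before configfs symlink is created" flow from the commit message could be exercised from userspace roughly as below; the gadget name "g1" and function instance "ncm.usb0" are assumptions for illustration, and the write must happen before the function is linked into a configuration (while opts->refcnt is still zero), otherwise the store handler returns -EBUSY.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical gadget layout; adjust to the real configfs paths. */
	const char *attr = "/sys/kernel/config/usb_gadget/g1/functions/"
			   "ncm.usb0/max_segment_size";
	const char *val = "8000\n";
	int fd = open(attr, O_WRONLY);

	if (fd < 0) {
		perror("open max_segment_size");
		return 1;
	}
	if (write(fd, val, strlen(val)) != (ssize_t)strlen(val))
		perror("write max_segment_size");
	close(fd);
	return 0;
}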