Message ID | 1458066313-12203-1-git-send-email-dhannawatpooja1@gmail.com (mailing list archive)
---|---
State | New, archived
On Tue, Mar 15, 2016 at 11:55:13PM +0530, Pooja Dhannawat wrote:
> nc_sendv_compat has a huge stack usage of 69680 bytes approx.
> Moving large arrays to heap to reduce stack usage.
>
> Signed-off-by: Pooja Dhannawat <dhannawatpooja1@gmail.com>
> ---
>  net/net.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/net/net.c b/net/net.c
> index b0c832e..f03c571 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -709,23 +709,29 @@ ssize_t qemu_send_packet_raw(NetClientState *nc, const uint8_t *buf, int size)
>  static ssize_t nc_sendv_compat(NetClientState *nc, const struct iovec *iov,
>                                 int iovcnt, unsigned flags)
>  {
> -    uint8_t buf[NET_BUFSIZE];
> +    uint8_t *buf;
>      uint8_t *buffer;
>      size_t offset;
> +    ssize_t ret;
> +
> +    buf = g_new(uint8_t, NET_BUFSIZE);

The linear buffer is only needed when iovcnt > 1.  I suggest the
following instead:

    uint8_t *buf = NULL;

    if (iovcnt == 1) {
        buffer = iov[0].iov_base;
        offset = iov[0].iov_len;
    } else {
        buf = g_new(uint8_t, NET_BUFSIZE);
        buffer = buf;
        offset = iov_to_buf(iov, iovcnt, 0, buf, NET_BUFSIZE);
    }

This way the allocation is only made when we actually need to linearize
the buffer.
On Thu, Mar 17, 2016 at 8:23 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Tue, Mar 15, 2016 at 11:55:13PM +0530, Pooja Dhannawat wrote:
> > nc_sendv_compat has a huge stack usage of 69680 bytes approx.
> > Moving large arrays to heap to reduce stack usage.
> >
> > Signed-off-by: Pooja Dhannawat <dhannawatpooja1@gmail.com>
> > ---
> >  net/net.c | 14 ++++++++++----
> >  1 file changed, 10 insertions(+), 4 deletions(-)
> >
> > diff --git a/net/net.c b/net/net.c
> > index b0c832e..f03c571 100644
> > --- a/net/net.c
> > +++ b/net/net.c
> > @@ -709,23 +709,29 @@ ssize_t qemu_send_packet_raw(NetClientState *nc, const uint8_t *buf, int size)
> >  static ssize_t nc_sendv_compat(NetClientState *nc, const struct iovec *iov,
> >                                 int iovcnt, unsigned flags)
> >  {
> > -    uint8_t buf[NET_BUFSIZE];
> > +    uint8_t *buf;
> >      uint8_t *buffer;
> >      size_t offset;
> > +    ssize_t ret;
> > +
> > +    buf = g_new(uint8_t, NET_BUFSIZE);
>
> The linear buffer is only needed when iovcnt > 1.  I suggest the
> following instead:
>
>     uint8_t *buf = NULL;
>
>     if (iovcnt == 1) {
>         buffer = iov[0].iov_base;
>         offset = iov[0].iov_len;
>     } else {
>         buf = g_new(uint8_t, NET_BUFSIZE);
>         buffer = buf;
>         offset = iov_to_buf(iov, iovcnt, 0, buf, NET_BUFSIZE);
>     }
>
> This way the allocation is only made when we actually need to linearize
> the buffer.

Thanks Stefan for pointing out this one :)
I will make desired changes and push patch.
diff --git a/net/net.c b/net/net.c
index b0c832e..f03c571 100644
--- a/net/net.c
+++ b/net/net.c
@@ -709,23 +709,29 @@ ssize_t qemu_send_packet_raw(NetClientState *nc, const uint8_t *buf, int size)
 static ssize_t nc_sendv_compat(NetClientState *nc, const struct iovec *iov,
                                int iovcnt, unsigned flags)
 {
-    uint8_t buf[NET_BUFSIZE];
+    uint8_t *buf;
     uint8_t *buffer;
     size_t offset;
+    ssize_t ret;
+
+    buf = g_new(uint8_t, NET_BUFSIZE);
 
     if (iovcnt == 1) {
         buffer = iov[0].iov_base;
         offset = iov[0].iov_len;
     } else {
         buffer = buf;
-        offset = iov_to_buf(iov, iovcnt, 0, buf, sizeof(buf));
+        offset = iov_to_buf(iov, iovcnt, 0, buf, NET_BUFSIZE);
     }
 
     if (flags & QEMU_NET_PACKET_FLAG_RAW && nc->info->receive_raw) {
-        return nc->info->receive_raw(nc, buffer, offset);
+        ret = nc->info->receive_raw(nc, buffer, offset);
     } else {
-        return nc->info->receive(nc, buffer, offset);
+        ret = nc->info->receive(nc, buffer, offset);
     }
+
+    g_free(buf);
+    return ret;
 }
 
 ssize_t qemu_deliver_packet_iov(NetClientState *sender,
nc_sendv_compat has a huge stack usage of 69680 bytes approx.
Moving large arrays to heap to reduce stack usage.

Signed-off-by: Pooja Dhannawat <dhannawatpooja1@gmail.com>
---
 net/net.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)