[RFC,4/6] hv_sock: Initialize send_buf in hvs_stream_enqueue()

Message ID 20220413204742.5539-5-parri.andrea@gmail.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series hv_sock: Hardening changes

Checks

Context                         Check    Description
netdev/fixes_present            success  Fixes tag not required for -next series
netdev/subject_prefix           success  Link
netdev/cover_letter             success  Series has a cover letter
netdev/patch_count              success  Link
netdev/header_inline            success  No static functions without inline keyword in header files
netdev/build_32bit              success  Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers           success  CCed 12 of 12 maintainers
netdev/build_clang              success  Errors and warnings before: 0 this patch: 0
netdev/module_param             success  Was 0 now: 0
netdev/verify_signedoff         success  Signed-off-by tag matches author and committer
netdev/verify_fixes             success  No Fixes tag
netdev/build_allmodconfig_warn  success  Errors and warnings before: 0 this patch: 0
netdev/checkpatch               success  total: 0 errors, 0 warnings, 0 checks, 8 lines checked
netdev/kdoc                     success  Errors and warnings before: 0 this patch: 0
netdev/source_inline            success  Was 0 now: 0
netdev/tree_selection           success  Guessing tree name failed - patch did not apply

Commit Message

Andrea Parri April 13, 2022, 8:47 p.m. UTC
So that padding or uninitialized bytes can't leak guest memory contents.

Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
---
 net/vmw_vsock/hyperv_transport.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
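
For context, the buffer being allocated here is a single page laid out as a small protocol header followed by a payload array. A sketch of the relevant definitions (names follow net/vmw_vsock/hyperv_transport.c, but treat the exact declarations as approximate rather than verbatim):

/* One page total, as enforced by the BUILD_BUG_ON() in the hunk below:
 * a small header consumed by Hyper-V, then payload space filling the
 * rest of the page.
 */
struct vmpipe_proto_header {
	u32 pkt_type;
	u32 data_size;
};

#define HVS_SEND_BUF_SIZE \
	(HV_HYP_PAGE_SIZE - sizeof(struct vmpipe_proto_header))

struct hvs_send_buf {
	struct vmpipe_proto_header hdr;	/* header sent to the host */
	u8 data[HVS_SEND_BUF_SIZE];	/* payload copied from the msghdr */
};

With kmalloc(), any bytes of data[] beyond what a given send actually copies in (plus any padding, should the layout ever acquire some) hold stale kernel memory; kzalloc() clears the whole page before it is used.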

Comments

Michael Kelley (LINUX) April 15, 2022, 3:33 a.m. UTC | #1
From: Andrea Parri (Microsoft) <parri.andrea@gmail.com> Sent: Wednesday, April 13, 2022 1:48 PM
> 
> So that padding or uninitialized bytes can't leak guest memory contents.
> 
> Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
> ---
>  net/vmw_vsock/hyperv_transport.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
> index 092cadc2c866d..72ce00928c8e7 100644
> --- a/net/vmw_vsock/hyperv_transport.c
> +++ b/net/vmw_vsock/hyperv_transport.c
> @@ -655,7 +655,7 @@ static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
> 
>  	BUILD_BUG_ON(sizeof(*send_buf) != HV_HYP_PAGE_SIZE);
> 
> -	send_buf = kmalloc(sizeof(*send_buf), GFP_KERNEL);
> +	send_buf = kzalloc(sizeof(*send_buf), GFP_KERNEL);

Is this change really needed?   All fields are explicitly initialized, and in the data
array, only the populated bytes are copied to the ring buffer.  There should not
be any uninitialized values sent to the host.   Zeroing the memory ahead of
time certainly provides an extra protection (particularly against padding bytes,
but there can't be any since the layout of the data is part of the protocol with
Hyper-V).  It is expensive protection to zero out 16K+ bytes every time we send
out a small message.

Michael

>  	if (!send_buf)
>  		return -ENOMEM;
> 
> --
> 2.25.1
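
For reference, the send path Michael describes is roughly the following (a sketch assembled from the __hvs_send_data() hunk Andrea quotes later in the thread; not verbatim kernel code, and the enqueue loop and error handling are omitted):

/* hvs_stream_enqueue() copies only to_write bytes of payload into
 * send_buf->data and then hands the buffer to this helper, which pushes
 * exactly sizeof(*hdr) + to_write bytes into the VMBus ring buffer, so
 * bytes of data[] beyond to_write are never transmitted.
 */
static int __hvs_send_data(struct vmbus_channel *chan,
			   struct vmpipe_proto_header *hdr, size_t to_write)
{
	hdr->pkt_type = 1;
	hdr->data_size = to_write;
	return vmbus_sendpacket(chan, hdr, sizeof(*hdr) + to_write,
				0, VM_PKT_DATA_INBAND, 0);
}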
Andrea Parri April 15, 2022, 6:50 a.m. UTC | #2
> > @@ -655,7 +655,7 @@ static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
> > 
> >  	BUILD_BUG_ON(sizeof(*send_buf) != HV_HYP_PAGE_SIZE);
> > 
> > -	send_buf = kmalloc(sizeof(*send_buf), GFP_KERNEL);
> > +	send_buf = kzalloc(sizeof(*send_buf), GFP_KERNEL);
> 
> Is this change really needed?

The idea was...


> All fields are explicitly initialized, and in the data
> array, only the populated bytes are copied to the ring buffer.  There should not
> be any uninitialized values sent to the host.   Zeroing the memory ahead of
> time certainly provides an extra protection (particularly against padding bytes,
> but there can't be any since the layout of the data is part of the protocol with
> Hyper-V).

Rather than keep checking that...


> It is expensive protection to zero out 16K+ bytes every time we send
> out a small message.

Do this.  ;-)

Will drop the patch.

Thanks,
  Andrea
Michael Kelley (LINUX) April 15, 2022, 2:30 p.m. UTC | #3
From: Andrea Parri <parri.andrea@gmail.com> Sent: Thursday, April 14, 2022 11:51 PM
> 
> > > @@ -655,7 +655,7 @@ static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
> > >
> > >  	BUILD_BUG_ON(sizeof(*send_buf) != HV_HYP_PAGE_SIZE);
> > >
> > > -	send_buf = kmalloc(sizeof(*send_buf), GFP_KERNEL);
> > > +	send_buf = kzalloc(sizeof(*send_buf), GFP_KERNEL);
> >
> > Is this change really needed?
> 
> The idea was...
> 
> 
> > All fields are explicitly initialized, and in the data
> > array, only the populated bytes are copied to the ring buffer.  There should not
> > be any uninitialized values sent to the host.   Zeroing the memory ahead of
> > time certainly provides an extra protection (particularly against padding bytes,
> > but there can't be any since the layout of the data is part of the protocol with
> > Hyper-V).
> 
> Rather than keep checking that...

The extra protection might be obtained by just zero'ing the header (i.e., the
bytes up to the 16 Kbyte data array).   I don't have a strong preference either
way, so up to you.

Michael

> 
> 
> > It is expensive protection to zero out 16K+ bytes every time we send
> > out a small message.
> 
> Do this.  ;-)
> 
> Will drop the patch.
> 
> Thanks,
>   Andrea
Andrea Parri April 15, 2022, 4:16 p.m. UTC | #4
> > > All fields are explicitly initialized, and in the data
> > > array, only the populated bytes are copied to the ring buffer.  There should not
> > > be any uninitialized values sent to the host.   Zeroing the memory ahead of
> > > time certainly provides an extra protection (particularly against padding bytes,
> > > but there can't be any since the layout of the data is part of the protocol with
> > > Hyper-V).
> > 
> > Rather than keep checking that...
> 
> The extra protection might be obtained by just zero'ing the header (i.e., the
> bytes up to the 16 Kbyte data array).   I don't have a strong preference either
> way, so up to you.

A main reason behind this RFC is that I don't have a strong preference either.
IIUC, you're suggesting something like the following (compile-tested only):


diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index 092cadc2c866d..200f12c432863 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -234,7 +234,8 @@ static int __hvs_send_data(struct vmbus_channel *chan,
 {
 	hdr->pkt_type = 1;
 	hdr->data_size = to_write;
-	return vmbus_sendpacket(chan, hdr, sizeof(*hdr) + to_write,
+	return vmbus_sendpacket(chan, hdr,
+				offsetof(struct hvs_send_buf, data) + to_write,
 				0, VM_PKT_DATA_INBAND, 0);
 }
 
@@ -658,6 +659,7 @@ static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
 	send_buf = kmalloc(sizeof(*send_buf), GFP_KERNEL);
 	if (!send_buf)
 		return -ENOMEM;
+	memset(send_buf, 0, offsetof(struct hvs_send_buf, data));
 
 	/* Reader(s) could be draining data from the channel as we write.
 	 * Maximize bandwidth, by iterating until the channel is found to be
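
In effect, this variant trades kzalloc()'s clearing of the whole HV_HYP_PAGE_SIZE allocation for a memset() of just the offsetof(struct hvs_send_buf, data) header bytes, and sizing the packet with offsetof() instead of sizeof(*hdr) also keeps any hypothetical padding between the header and data[] from ever reaching the ring buffer.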

Patch

diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index 092cadc2c866d..72ce00928c8e7 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -655,7 +655,7 @@  static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
 
 	BUILD_BUG_ON(sizeof(*send_buf) != HV_HYP_PAGE_SIZE);
 
-	send_buf = kmalloc(sizeof(*send_buf), GFP_KERNEL);
+	send_buf = kzalloc(sizeof(*send_buf), GFP_KERNEL);
 	if (!send_buf)
 		return -ENOMEM;