[v2,0/2] vhost-vdpa: Add support for NO-IOMMU mode

Message ID: 20240920140530.775307-1-schalla@marvell.com

Message

Srujana Challa Sept. 20, 2024, 2:05 p.m. UTC
This patchset introduces support for an UNSAFE, no-IOMMU mode in the
vhost-vdpa driver. When enabled, this mode provides no device isolation,
no DMA translation, no host kernel protection, and cannot be used for
device assignment to virtual machines. It requires RAWIO permissions
and will taint the kernel.

This mode requires enabling the "enable_vhost_vdpa_unsafe_noiommu_mode"
option on the vhost-vdpa driver and negotiating the feature flag
VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get
better performance on specific low-end machines and can be leveraged
by embedded platforms where applications run in controlled environments.

First patch introduces a module parameter
"enable_vhost_vdpa_unsafe_noiommu_mode", while the second patch
introduces a feature flag VHOST_BACKEND_F_NOIOMMU to the vhost vdpa
driver to support NO-IOMMU mode.
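
For illustration, the kernel-side gate could look roughly like the sketch
below, modeled on the vfio-noiommu pattern. The helper name and its exact
placement in drivers/vhost/vdpa.c are assumptions made for this summary,
not code taken verbatim from the patches:

#include <linux/module.h>
#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/kernel.h>

/* Off by default; the administrator must opt in explicitly. */
static bool enable_vhost_vdpa_unsafe_noiommu_mode;
module_param(enable_vhost_vdpa_unsafe_noiommu_mode, bool, 0444);
MODULE_PARM_DESC(enable_vhost_vdpa_unsafe_noiommu_mode,
		 "Enable UNSAFE no-IOMMU mode (no device isolation, no DMA translation)");

/* Hypothetical helper: allow no-IOMMU operation only when the module
 * parameter is set, require CAP_SYS_RAWIO, and taint the kernel, as
 * vfio-noiommu does.
 */
static int vhost_vdpa_check_noiommu(void)
{
	if (!enable_vhost_vdpa_unsafe_noiommu_mode)
		return -EOPNOTSUPP;
	if (!capable(CAP_SYS_RAWIO))
		return -EPERM;

	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
	return 0;
}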

This feature flag indicates to userspace that the driver supports
operating in NO-IOMMU mode. If the flag is not present, userspace should
assume NO-IOMMU mode is unsupported and must not proceed.
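
On the userspace side, negotiation goes through the existing
VHOST_GET_BACKEND_FEATURES / VHOST_SET_BACKEND_FEATURES ioctls. A minimal
sketch follows; the device node path and the bit index used for
VHOST_BACKEND_F_NOIOMMU are placeholders here, the real value being
whatever this series defines in include/uapi/linux/vhost_types.h:

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

#ifndef VHOST_BACKEND_F_NOIOMMU
#define VHOST_BACKEND_F_NOIOMMU 8	/* placeholder bit index */
#endif

/* Open a vhost-vdpa device and negotiate no-IOMMU mode; returns the fd
 * on success, -1 if the driver does not offer the feature. */
int negotiate_noiommu(const char *path /* e.g. "/dev/vhost-vdpa-0" */)
{
	uint64_t features = 0;
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return -1;

	/* Ask the kernel which backend features it offers. */
	if (ioctl(fd, VHOST_GET_BACKEND_FEATURES, &features))
		goto err;

	/* Refuse to continue if no-IOMMU mode is not advertised. */
	if (!(features & (1ULL << VHOST_BACKEND_F_NOIOMMU)))
		goto err;

	/* Acknowledge the offered set (including the no-IOMMU bit). */
	if (ioctl(fd, VHOST_SET_BACKEND_FEATURES, &features))
		goto err;

	return fd;	/* ready for the usual vring/IOTLB setup */
err:
	close(fd);
	return -1;
}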

v1->v2:
- Introduced a new vhost backend feature flag for negotiating the
  NO-IOMMU feature.

Srujana Challa (2):
  vhost-vdpa: introduce module parameter for no-IOMMU mode
  vhost-vdpa: introduce NO-IOMMU backend feature bit

 drivers/vhost/vdpa.c             | 39 +++++++++++++++++++++++++++++++-
 include/uapi/linux/vhost_types.h |  2 ++
 2 files changed, 40 insertions(+), 1 deletion(-)

Comments

Christoph Hellwig Oct. 1, 2024, 8:47 a.m. UTC | #1
On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> vhost-vdpa driver. When enabled, this mode provides no device isolation,
> no DMA translation, no host kernel protection, and cannot be used for
> device assignment to virtual machines. It requires RAWIO permissions
> and will taint the kernel.
> 
> This mode requires enabling the "enable_vhost_vdpa_unsafe_noiommu_mode"
> option on the vhost-vdpa driver and also negotiate the feature flag
> VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get
> better performance on specific low-end machines and can be leveraged
> by embedded platforms where applications run in controlled environment.

... and is completely broken and dangerous.
Srujana Challa Oct. 14, 2024, 1:18 p.m. UTC | #2
> On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > vhost-vdpa driver. When enabled, this mode provides no device
> > isolation, no DMA translation, no host kernel protection, and cannot
> > be used for device assignment to virtual machines. It requires RAWIO
> > permissions and will taint the kernel.
> >
> > This mode requires enabling the
> "enable_vhost_vdpa_unsafe_noiommu_mode"
> > option on the vhost-vdpa driver and also negotiate the feature flag
> > VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get better
> > > performance on specific low-end machines and can be leveraged by
> > embedded platforms where applications run in controlled environment.
> 
> ... and is completely broken and dangerous.
Based on the discussions in this thread https://www.spinics.net/lists/kvm/msg357569.html,
we have decided to proceed with this implementation. Could you please share any
alternative ideas or suggestions you might have?
Thanks.
Christoph Hellwig Oct. 15, 2024, 3:48 a.m. UTC | #3
On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > > vhost-vdpa driver. When enabled, this mode provides no device
> > > isolation, no DMA translation, no host kernel protection, and cannot
> > > be used for device assignment to virtual machines. It requires RAWIO
> > > permissions and will taint the kernel.
> > >
> > > This mode requires enabling the
> > "enable_vhost_vdpa_unsafe_noiommu_mode"
> > > option on the vhost-vdpa driver and also negotiate the feature flag
> > > VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get better
> > > performance on specific low-end machines and can be leveraged by
> > > embedded platforms where applications run in controlled environment.
> > 
> > ... and is completely broken and dangerous.
> Based on the discussions in this thread https://www.spinics.net/lists/kvm/msg357569.html,
> we have decided to proceed with this implementation. Could you please share any
> alternative ideas or suggestions you might have?

Don't do this.  It is inherently unsafe and dangerous and there is no
valid reason to implement it.

Double-Nacked-by: Christoph Hellwig <hch@lst.de>
Srujana Challa Oct. 16, 2024, 5:28 p.m. UTC | #4
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > > This patchset introduces support for an UNSAFE, no-IOMMU mode in
> > > > the vhost-vdpa driver. When enabled, this mode provides no device
> > > > isolation, no DMA translation, no host kernel protection, and
> > > > cannot be used for device assignment to virtual machines. It
> > > > requires RAWIO permissions and will taint the kernel.
> > > >
> > > > This mode requires enabling the
> > > "enable_vhost_vdpa_unsafe_noiommu_mode"
> > > > option on the vhost-vdpa driver and also negotiate the feature
> > > > flag VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get
> > > > better performance on specific low-end machines and can be
> > > > leveraged by embedded platforms where applications run in controlled
> environment.
> > >
> > > ... and is completely broken and dangerous.
> > Based on the discussions in this thread
> > https://www.spinics.net/lists/kvm/msg357569.html, we have decided to
> > proceed with this implementation. Could you please share any
> > alternative ideas or suggestions you might have?
> 
> Don't do this.  It is inherently unsafe and dangerous and there is no valid
> reason to implement it.
> 
> Double-Nacked-by: Christoph Hellwig <hch@lst.de>

When using the DPDK virtio user PMD, we’ve noticed a significant 70%
performance improvement when IOMMU is disabled on specific low-end
x86 machines. This performance improvement can be particularly
advantageous for embedded platforms where applications operate in
controlled environments. Therefore, we believe supporting the intel_iommu=off
mode is beneficial.

Thanks.
Michael S. Tsirkin Oct. 16, 2024, 5:41 p.m. UTC | #5
On Mon, Oct 14, 2024 at 08:48:27PM -0700, Christoph Hellwig wrote:
> On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > > > vhost-vdpa driver. When enabled, this mode provides no device
> > > > isolation, no DMA translation, no host kernel protection, and cannot
> > > > be used for device assignment to virtual machines. It requires RAWIO
> > > > permissions and will taint the kernel.
> > > >
> > > > This mode requires enabling the
> > > "enable_vhost_vdpa_unsafe_noiommu_mode"
> > > > option on the vhost-vdpa driver and also negotiate the feature flag
> > > > VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get better
> > > > performance on specific low-end machines and can be leveraged by
> > > > embedded platforms where applications run in controlled environment.
> > > 
> > > ... and is completely broken and dangerous.
> > Based on the discussions in this thread https://www.spinics.net/lists/kvm/msg357569.html,
> > we have decided to proceed with this implementation. Could you please share any
> > alternative ideas or suggestions you might have?
> 
> Don't do this.  It is inherently unsafe and dangerous and there is no
> valid reason to implement it.
> 
> Double-Nacked-by: Christoph Hellwig <hch@lst.de>

It's basically because vfio does, so we have to follow suit.
Christoph Hellwig Oct. 17, 2024, 6:16 a.m. UTC | #6
On Wed, Oct 16, 2024 at 01:41:51PM -0400, Michael S. Tsirkin wrote:
> It's basically because vfio does, so we have to follow suit.

That's a very bold argument, especially without any rationale of

 a) why you need to match the feature set
 b) how even adding it to vfio was a good idea
Christoph Hellwig Oct. 17, 2024, 6:19 a.m. UTC | #7
On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> When using the DPDK virtio user PMD, we’ve noticed a significant 70%
> performance improvement when IOMMU is disabled on specific low-end
> x86 machines. This performance improvement can be particularly
> advantageous for embedded platforms where applications operate in
> controlled environments. Therefore, we believe supporting the intel_iommu=off
> mode is beneficial.

While making the system completely unsafe to use.  Maybe you should
fix your stack to use the iommu more intelligently instead?
Srujana Challa Oct. 17, 2024, 8:53 a.m. UTC | #8
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> > When using the DPDK virtio user PMD, we’ve noticed a significant 70%
> > performance improvement when IOMMU is disabled on specific low-end
> > x86 machines. This performance improvement can be particularly
> > advantageous for embedded platforms where applications operate in
> > controlled environments. Therefore, we believe supporting the
> > intel_iommu=off mode is beneficial.
> 
> While making the system completely unsafe to use.  Maybe you should fix
> your stack to use the iommu more intelligently instead?

We observed better performance with "intel_iommu=on" in high-end x86 machines,
indicating that the performance limitations are specific to low-end x86 hardware.
This presents a trade-off between performance and security. Since intel_iommu
is enabled by default, users who prioritize security over performance do not need to
disable this option.
Jason Wang Oct. 18, 2024, 4:54 a.m. UTC | #9
On Thu, Oct 17, 2024 at 4:53 PM Srujana Challa <schalla@marvell.com> wrote:
>
> > Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> > IOMMU mode
> >
> > On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> > > When using the DPDK virtio user PMD, we’ve noticed a significant 70%
> > > performance improvement when IOMMU is disabled on specific low-end
> > > x86 machines. This performance improvement can be particularly
> > > advantageous for embedded platforms where applications operate in
> > > controlled environments. Therefore, we believe supporting the
> > > intel_iommu=off mode is beneficial.
> >
> > While making the system completely unsafe to use.  Maybe you should fix
> > your stack to use the iommu more intelligently instead?
>
> We observed better performance with "intel_iommu=on" in high-end x86 machines,
> indicating that the performance limitations are specific to low-end x86 hardware.

Do you have any analysis on why Intel IOMMU is slow on "low-end" x86
hardware? Anyhow, we can ask Intel IOMMU experts to help here.

Thanks

> This presents a trade-off between performance and security. Since intel_iommu
> is enabled by default, users who prioritize security over performance do not need to
> disable this option.
Christoph Hellwig Oct. 18, 2024, 5:24 a.m. UTC | #10
On Thu, Oct 17, 2024 at 08:53:08AM +0000, Srujana Challa wrote:
> We observed better performance with "intel_iommu=on" in high-end x86 machines,
> indicating that the performance limitations are specific to low-end x86 hardware.

What does "low-end" vs "high-end" mean?  Atom vs other cores?

> This presents a trade-off between performance and security. Since intel_iommu
> is enabled by default, users who prioritize security over performance do not need to
> disable this option.

Either way, just disabling essential protection because it is slow
is a no-go.  We'll need to either fix your issues, or you need to use
more suitable hardware for your workload.
Srujana Challa Oct. 18, 2024, 1:08 p.m. UTC | #11
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Thu, Oct 17, 2024 at 08:53:08AM +0000, Srujana Challa wrote:
> > We observed better performance with "intel_iommu=on" in high-end x86
> > machines, indicating that the performance limitations are specific to low-
> end x86 hardware.
> 
> What does "low-end" vs "high-end" mean?  Atom vs other cores?
 High-end: Intel(R) Xeon(R) Platinum 8462Y+, 64 cores
 Low-end:  13th Gen Intel(R) Core(TM) i9-13900K, 32 cores
> 
> > This presents a trade-off between performance and security. Since
> > intel_iommu is enabled by default, users who prioritize security over
> > performance do not need to disable this option.
> 
> Either way, just disabling essential protection because it is slow is a no-go.
> We'll need to either fix your issues, or you need to use more suitable
> hardware for your workload.
I disagree. Why pay for more expensive hardware if the use case does not
demand it? For example, embedded environments where only trusted
applications are running. The same thing is done for the VFIO scheme. I
don't understand the reservation about a mode you are not planning to
use, when the default remains protected.
There are a lot of kernel options that make this kind of trade-off
between parameters like performance, security, power and hardware cost.
Michael S. Tsirkin Oct. 20, 2024, 12:16 a.m. UTC | #12
On Wed, Oct 16, 2024 at 11:16:08PM -0700, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 01:41:51PM -0400, Michael S. Tsirkin wrote:
> > It's basically because vfio does, so we have to follow suit.
> 
> That's a very bold argument, especially without any rationale of
> 
>  a) why you need to match the feature set

Because people want to move from some vendor specific solution with vfio
to a standard vdpa compatible one with vdpa.

We could just block them since we don't support tainted kernels anyway,
but it's a step in the right direction since it allows moving to
virtio, and the kernel is tainted so there are no big support costs
(although QE costs do exist, as with any code).

>  b) how even adding it to vfio was a good idea

That ship has sailed.
Christoph Hellwig Oct. 23, 2024, 6:58 a.m. UTC | #13
On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> Because people want to move from some vendor specific solution with vfio
> to a standard vdpa compatible one with vdpa.

So now you have a want for new use cases and you turn that into a must
for supporting completely insecure and dangerous crap.
Michael S. Tsirkin Oct. 23, 2024, 8:19 a.m. UTC | #14
On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > Because people want to move from some vendor specific solution with vfio
> > to a standard vdpa compatible one with vdpa.
> 
> So now you have a want for new use cases and you turn that into a must
> for supporting completely insecure and dangerous crap.

Nope.

kernel is tainted -> unsupported

whoever supports tainted kernels is already in dangerous waters.
Christoph Hellwig Oct. 24, 2024, 9:05 a.m. UTC | #15
On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > Because people want to move from some vendor specific solution with vfio
> > > to a standard vdpa compatible one with vdpa.
> > 
> > So now you have a want for new use cases and you turn that into a must
> > for supporting completely insecure and dangerous crap.
> 
> Nope.
> 
> kernel is tainted -> unsupported
> 
> whoever supports tainted kernels is already in dangerous waters.

That's not a carte blanche for doing whatever crazy stuff you
want.

And if you don't trust me I'll add Greg who has a very clear opinion
on IOMMU-bypassing user I/O hooks in the style of the uio driver as
well I think :)
Srujana Challa Nov. 6, 2024, 12:38 p.m. UTC | #16
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > > Because people want to move from some vendor specific solution
> > > > with vfio to a standard vdpa compatible one with vdpa.
> > >
> > > So now you have a want for new use cases and you turn that into a
> > > must for supporting completely insecure and dangerous crap.
> >
> > Nope.
> >
> > kernel is tainted -> unsupported
> >
> > whoever supports tainted kernels is already in dangerous waters.
> 
> That's not a carte blanche for doing whatever crazy stuff you want.
> 
> And if you don't trust me I'll add Greg who has a very clear opinion on
> IOMMU-bypassing user I/O hooks in the style of the uio driver as well I think
> :)

It is going in circles, so let me give a summary:
Issue: We need to address the lack of no-IOMMU support in the vhost vDPA driver for better performance.
Measured Performance: On the machine "13th Gen Intel(R) Core(TM) i9-13900K, 32 Cores", we observed
a performance improvement of 70 - 80% with intel_iommu=off when we run high-throughput network
packet processing.
Rationale for Fix: High-end machines, which give better performance with the IOMMU enabled, are very expensive,
and certain use cases, such as embedded environments running trusted applications, do not require
the security features provided by the IOMMU.
Initial Approach: We initially considered a driver-based solution, specifically integrating no-IOMMU
support into Marvell’s octep-vdpa driver.
Initial Community Feedback: The community suggested adopting a VFIO-like scheme to make the solution
more generic and widely applicable.
Decision Point: Should we pursue a generic approach for no-IOMMU support in the vhost vDPA driver,
or should we implement a driver-specific solution?

Thanks,
Srujana.
Christoph Hellwig Nov. 6, 2024, 3:45 p.m. UTC | #17
On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> It is going in circles, let me give the summary,
> Issue: We need to address the lack of no-IOMMU support in the vhost vDPA driver for better performance.
> Measured Performance: On the machine "13th Gen Intel(R) Core(TM) i9-13900K, 32 Cores", we observed

Looks like you are going in circles indeed.  Lack of performance is never
a reason to disable basic memory safety for userspace drivers.

The (also quite bad) reason why vfio-noiommu was added was to support
hardware entirely without an IOMMU.

There is absolutely no reason to ever add kernel code for new methods of
unsafe userspace I/O without an IOMMU.
Michael S. Tsirkin Nov. 6, 2024, 6:11 p.m. UTC | #18
On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> > IOMMU mode
> > 
> > On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > > On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > > > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > > > Because people want to move from some vendor specific solution
> > > > > with vfio to a standard vdpa compatible one with vdpa.
> > > >
> > > > So now you have a want for new use cases and you turn that into a
> > > > must for supporting completely insecure and dangerous crap.
> > >
> > > Nope.
> > >
> > > kernel is tainted -> unsupported
> > >
> > > whoever supports tainted kernels is already in dangerous waters.
> > 
> > That's not a carte blanche for doing whatever crazy stuff you want.
> > 
> > And if you don't trust me I'll add Greg who has a very clear opinion on
> > IOMMU-bypassing user I/O hooks in the style of the uio driver as well I think
> > :)
> 
> It is going in circles, let me give the summary,
> Issue: We need to address the lack of no-IOMMU support in the vhost vDPA driver for better performance.
> Measured Performance: On the machine "13th Gen Intel(R) Core(TM) i9-13900K, 32 Cores", we observed
> a performance improvement of 70 - 80% with intel_iommu=off when we run high-throughput network
> packet processing.
> Rationale for Fix: High-end machines which give better performance with IOMMU are very expensive,
> and certain use cases, such as embedded environment and trusted applications, do not require
> the security features provided by IOMMU.
> Initial Approach: We initially considered a driver-based solution, specifically integrating no-IOMMU
> support into Marvell’s octep-vdpa driver.
> Initial Community Feedback: The community suggested adopting a VFIO-like scheme to make the solution
> more generic and widely applicable.
> Decision Point: Should we pursue a generic approach for no-IOMMU support in the vhost vDPA driver,
> or should we implement a driver-specific solution?
> 
> Thanks,
> Srujana.

This point does not matter for Christoph.
Michael S. Tsirkin Nov. 6, 2024, 6:14 p.m. UTC | #19
On Thu, Oct 24, 2024 at 02:05:43AM -0700, Christoph Hellwig wrote:
> On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > > Because people want to move from some vendor specific solution with vfio
> > > > to a standard vdpa compatible one with vdpa.
> > > 
> > > So now you have a want for new use cases and you turn that into a must
> > > for supporting completely insecure and dangerous crap.
> > 
> > Nope.
> > 
> > kernel is tainted -> unsupported
> > 
> > whoever supports tainted kernels is already in dangerous waters.
> 
> That's not a carte blanche for doing whatever crazy stuff you
> want.
> 
> And if you don't trust me I'll add Greg who has a very clear opinion
> on IOMMU-bypassing user I/O hooks in the style of the uio driver as
> well I think :)

As a supporter of one of the uio drivers, I agree with him.
Srujana Challa Nov. 7, 2024, 6:08 a.m. UTC | #20
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > It is going in circles, let me give the summary,
> > Issue: We need to address the lack of no-IOMMU support in the vhost vDPA
> driver for better performance.
> > Measured Performance: On the machine "13th Gen Intel(R) Core(TM)
> > i9-13900K, 32 Cores", we observed
> 
> Looks like you are going in circles indeed.  Lack of performance is never a
> reason to disable basic memory safety for userspace drivers.
If security is a priority, the IOMMU should stay enabled. In that case,
this patch wouldn’t cause any issues, right?
> 
> The (also quite bad) reason why vfio-noiommu was added was to support
> hardware entirely without an IOMMU.
> 
> There is absolutely no reason to ever add kernel code for new methods of
> unsafe userspace I/O without an IOMMU.
Srujana Challa Nov. 12, 2024, 7:11 a.m. UTC | #21
> Subject: RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> > Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for
> > NO- IOMMU mode
> >
> > On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > > It is going in circles, let me give the summary,
> > > Issue: We need to address the lack of no-IOMMU support in the vhost
> > > vDPA
> > driver for better performance.
> > > Measured Performance: On the machine "13th Gen Intel(R) Core(TM)
> > > i9-13900K, 32 Cores", we observed
> >
> > Looks like you are going in circles indeed.  Lack of performance is
> > never a reason to disable basic memory safety for userspace drivers.
> If security is a priority, the IOMMU should stay enabled. In that case, this
> patch wouldn’t cause any issues, right?
Another important aspect of this patch is that it enables vhost-vdpa
functionality on machines that do not support an IOMMU, such as the
Intel® Core™ i5-2320 CPU @ 3.00GHz. I believe this enhancement is
essential as it broadens the compatibility of our system, allowing users
with embedded or less advanced hardware to benefit from vhost-vdpa
features.
Thanks.

> >
> > The (also quite bad) reason why vfio-noiommu was added was to support
> > hardware entirely without an IOMMU.
> >
> > There is absolutely no reason to ever add kernel code for new methods of
> > unsafe userspace I/O without an IOMMU.