Message ID: 20240920140530.775307-1-schalla@marvell.com
Series: vhost-vdpa: Add support for NO-IOMMU mode
On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> vhost-vdpa driver. When enabled, this mode provides no device isolation,
> no DMA translation, no host kernel protection, and cannot be used for
> device assignment to virtual machines. It requires RAWIO permissions
> and will taint the kernel.
>
> This mode requires enabling the "enable_vhost_vdpa_unsafe_noiommu_mode"
> option on the vhost-vdpa driver and also negotiating the feature flag
> VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get better
> performance on specific low-end machines and can be leveraged by
> embedded platforms where applications run in a controlled environment.

... and is completely broken and dangerous.
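For readers following the mechanism under discussion: backend features on a vhost-vdpa character device are negotiated with the existing VHOST_GET_BACKEND_FEATURES/VHOST_SET_BACKEND_FEATURES ioctls. The sketch below shows how a userspace driver might probe for the VHOST_BACKEND_F_NOIOMMU flag proposed by this series; the flag and its bit position are assumptions taken from the cover letter, not mainline, and setup steps such as VHOST_SET_OWNER are omitted.

```c
/*
 * Sketch only: VHOST_BACKEND_F_NOIOMMU is the flag proposed by this
 * series and does not exist in mainline; the bit position below is a
 * placeholder. The GET/SET_BACKEND_FEATURES ioctls are the existing
 * vhost backend-feature negotiation path.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

#ifndef VHOST_BACKEND_F_NOIOMMU
#define VHOST_BACKEND_F_NOIOMMU 0x8	/* placeholder bit, per this series */
#endif

int main(void)
{
	uint64_t features;
	int fd = open("/dev/vhost-vdpa-0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/vhost-vdpa-0");
		return 1;
	}

	/* Ask the kernel which backend features this device offers. */
	if (ioctl(fd, VHOST_GET_BACKEND_FEATURES, &features) < 0) {
		perror("VHOST_GET_BACKEND_FEATURES");
		return 1;
	}

	if (!(features & (1ULL << VHOST_BACKEND_F_NOIOMMU))) {
		fprintf(stderr, "no-IOMMU mode not offered by this kernel\n");
		return 1;
	}

	/* Acknowledge the feature; from here on device DMA is untranslated. */
	features = 1ULL << VHOST_BACKEND_F_NOIOMMU;
	if (ioctl(fd, VHOST_SET_BACKEND_FEATURES, &features) < 0) {
		perror("VHOST_SET_BACKEND_FEATURES");
		return 1;
	}

	close(fd);
	return 0;
}
```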
> On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > vhost-vdpa driver. [...]
>
> ... and is completely broken and dangerous.

Based on the discussions in this thread
https://www.spinics.net/lists/kvm/msg357569.html, we have decided to
proceed with this implementation. Could you please share any alternative
ideas or suggestions you might have?

Thanks.
On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > > vhost-vdpa driver. [...]
> >
> > ... and is completely broken and dangerous.
>
> Based on the discussions in this thread
> https://www.spinics.net/lists/kvm/msg357569.html, we have decided to
> proceed with this implementation. Could you please share any alternative
> ideas or suggestions you might have?

Don't do this. It is inherently unsafe and dangerous and there is no
valid reason to implement it.

Double-Nacked-by: Christoph Hellwig <hch@lst.de>
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
>
> On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > > This patchset introduces support for an UNSAFE, no-IOMMU mode in
> > > > the vhost-vdpa driver. [...]
> > >
> > > ... and is completely broken and dangerous.
> > Based on the discussions in this thread
> > https://www.spinics.net/lists/kvm/msg357569.html, we have decided to
> > proceed with this implementation. Could you please share any
> > alternative ideas or suggestions you might have?
>
> Don't do this. It is inherently unsafe and dangerous and there is no
> valid reason to implement it.
>
> Double-Nacked-by: Christoph Hellwig <hch@lst.de>

When using the DPDK virtio-user PMD, we've noticed a significant 70%
performance improvement when the IOMMU is disabled on specific low-end
x86 machines. This performance improvement can be particularly
advantageous for embedded platforms where applications operate in
controlled environments. Therefore, we believe supporting the
intel_iommu=off mode is beneficial.

Thanks.
On Mon, Oct 14, 2024 at 08:48:27PM -0700, Christoph Hellwig wrote:
> On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > > > vhost-vdpa driver. [...]
> > >
> > > ... and is completely broken and dangerous.
> > Based on the discussions in this thread
> > https://www.spinics.net/lists/kvm/msg357569.html, we have decided to
> > proceed with this implementation. Could you please share any
> > alternative ideas or suggestions you might have?
>
> Don't do this. It is inherently unsafe and dangerous and there is no
> valid reason to implement it.
>
> Double-Nacked-by: Christoph Hellwig <hch@lst.de>

It's basically because vfio does, so we have to follow suit.
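For context on the precedent being cited: VFIO gates its no-IOMMU support behind the opt-in enable_unsafe_noiommu_mode module parameter and taints the kernel when the mode is actually used. Below is a rough sketch of that pattern as vhost-vdpa would presumably mirror it; the parameter name comes from the cover letter, while the surrounding function is illustrative only, not code from the series.

```c
/*
 * Rough sketch of the opt-in + taint pattern VFIO uses for no-IOMMU mode,
 * applied to the vhost-vdpa parameter named in the cover letter. The
 * function below is illustrative only, not taken from the actual patch.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/capability.h>

static bool enable_vhost_vdpa_unsafe_noiommu_mode;
module_param(enable_vhost_vdpa_unsafe_noiommu_mode, bool, 0644);
MODULE_PARM_DESC(enable_vhost_vdpa_unsafe_noiommu_mode,
		 "Enable UNSAFE no-IOMMU mode: no device isolation, no DMA translation (default: false)");

static int example_enable_noiommu(void)
{
	if (!enable_vhost_vdpa_unsafe_noiommu_mode)
		return -EOPNOTSUPP;

	/* Same requirement the cover letter states: raw I/O capability. */
	if (!capable(CAP_SYS_RAWIO))
		return -EPERM;

	/* Mark the kernel as tainted (and hence unsupported) from here on. */
	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
	pr_warn("vhost-vdpa: UNSAFE no-IOMMU mode enabled, no kernel protection\n");
	return 0;
}
```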
On Wed, Oct 16, 2024 at 01:41:51PM -0400, Michael S. Tsirkin wrote:
> It's basically because vfio does, so we have to follow suit.
That's a very bold argument, especially without any rationale of
a) why you need to match the feature set
b) how even adding it to vfio was a good idea
On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> When using the DPDK virtio-user PMD, we've noticed a significant 70%
> performance improvement when the IOMMU is disabled on specific low-end
> x86 machines. This performance improvement can be particularly
> advantageous for embedded platforms where applications operate in
> controlled environments. Therefore, we believe supporting the
> intel_iommu=off mode is beneficial.

While making the system completely unsafe to use. Maybe you should fix
your stack to use the IOMMU more intelligently instead?
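One concrete reading of "use the IOMMU more intelligently" for a vhost-vdpa client: map the data-path memory pool once, up front, using the existing VHOST_BACKEND_F_IOTLB_BATCH mechanism rather than issuing per-buffer map/unmap calls. This removes IOTLB-update churn from the hot path, though it does not change the per-access translation cost of the hardware. A sketch, assuming the /dev/vhost-vdpa-N fd is already open and the batch feature has been negotiated:

```c
/*
 * Sketch: batching IOTLB updates on an already-open /dev/vhost-vdpa-N fd.
 * Assumes VHOST_BACKEND_F_IOTLB_BATCH was negotiated beforehand; error
 * handling is trimmed. This uses the existing mainline message format.
 */
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <linux/vhost.h>

static int iotlb_send(int fd, uint8_t type, uint64_t iova,
		      uint64_t size, uint64_t uaddr)
{
	struct vhost_msg_v2 msg;

	memset(&msg, 0, sizeof(msg));
	msg.type = VHOST_IOTLB_MSG_V2;
	msg.iotlb.type = type;
	msg.iotlb.iova = iova;
	msg.iotlb.size = size;
	msg.iotlb.uaddr = uaddr;
	msg.iotlb.perm = VHOST_ACCESS_RW;

	return write(fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
}

/* Map a whole memory pool with a single IOTLB commit at the end. */
static int map_pool_batched(int fd, const uint64_t *iovas,
			    const uint64_t *uaddrs, const uint64_t *sizes,
			    int n)
{
	int i, ret = 0;

	iotlb_send(fd, VHOST_IOTLB_BATCH_BEGIN, 0, 0, 0);
	for (i = 0; i < n && !ret; i++)
		ret = iotlb_send(fd, VHOST_IOTLB_UPDATE,
				 iovas[i], sizes[i], uaddrs[i]);
	iotlb_send(fd, VHOST_IOTLB_BATCH_END, 0, 0, 0);

	return ret;
}
```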
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
>
> On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> > When using the DPDK virtio-user PMD, we've noticed a significant 70%
> > performance improvement when the IOMMU is disabled on specific low-end
> > x86 machines. This performance improvement can be particularly
> > advantageous for embedded platforms where applications operate in
> > controlled environments. Therefore, we believe supporting the
> > intel_iommu=off mode is beneficial.
>
> While making the system completely unsafe to use. Maybe you should fix
> your stack to use the IOMMU more intelligently instead?

We observed better performance with "intel_iommu=on" on high-end x86
machines, indicating that the performance limitations are specific to
low-end x86 hardware. This presents a trade-off between performance and
security. Since intel_iommu is enabled by default, users who prioritize
security over performance do not need to disable this option.
On Thu, Oct 17, 2024 at 4:53 PM Srujana Challa <schalla@marvell.com> wrote:
> > On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> > > When using the DPDK virtio-user PMD, we've noticed a significant 70%
> > > performance improvement when the IOMMU is disabled on specific
> > > low-end x86 machines. [...]
> >
> > While making the system completely unsafe to use. Maybe you should fix
> > your stack to use the IOMMU more intelligently instead?
>
> We observed better performance with "intel_iommu=on" on high-end x86
> machines, indicating that the performance limitations are specific to
> low-end x86 hardware.

Do you have any analysis on why the Intel IOMMU is slow on "low-end" x86
hardware? Anyhow, we can ask Intel IOMMU experts to help here.

Thanks

> This presents a trade-off between performance and security. Since
> intel_iommu is enabled by default, users who prioritize security over
> performance do not need to disable this option.
On Thu, Oct 17, 2024 at 08:53:08AM +0000, Srujana Challa wrote:
> We observed better performance with "intel_iommu=on" on high-end x86
> machines, indicating that the performance limitations are specific to
> low-end x86 hardware.

What does "low-end" vs "high-end" mean? Atom vs other cores?

> This presents a trade-off between performance and security. Since
> intel_iommu is enabled by default, users who prioritize security over
> performance do not need to disable this option.

Either way, just disabling essential protection because it is slow is a
no-go. We'll need to either fix your issues, or you need to use more
suitable hardware for your workload.
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
>
> On Thu, Oct 17, 2024 at 08:53:08AM +0000, Srujana Challa wrote:
> > We observed better performance with "intel_iommu=on" on high-end x86
> > machines, indicating that the performance limitations are specific to
> > low-end x86 hardware.
>
> What does "low-end" vs "high-end" mean? Atom vs other cores?

High-end, model name: Intel(R) Xeon(R) Platinum 8462Y+, 64 Cores
Low-end, model name: 13th Gen Intel(R) Core(TM) i9-13900K, 32 Cores

> > This presents a trade-off between performance and security. Since
> > intel_iommu is enabled by default, users who prioritize security over
> > performance do not need to disable this option.
>
> Either way, just disabling essential protection because it is slow is a
> no-go. We'll need to either fix your issues, or you need to use more
> suitable hardware for your workload.

I disagree. Why pay more for hardware if the use case does not demand it?
For example, an embedded environment where trusted applications are
running. The same thing is done for the VFIO scheme. I don't understand
your reservation about a mode you are not planning to use, when the
default stays protected. There are plenty of kernel options that make
this kind of trade-off between parameters like performance, security,
power and hardware cost.
On Wed, Oct 16, 2024 at 11:16:08PM -0700, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 01:41:51PM -0400, Michael S. Tsirkin wrote:
> > It's basically because vfio does, so we have to follow suit.
>
> That's a very bold argument, especially without any rationale of
>
> a) why you need to match the feature set

Because people want to move from some vendor-specific solution with vfio
to a standard vdpa-compatible one with vdpa. We could just block them
since we don't support tainted kernels anyway, but it's a step in the
right direction since it allows moving to virtio, and the kernel is
tainted so no big support costs (although QE costs do exist, as with any
code).

> b) how even adding it to vfio was a good idea

That ship has sailed.
On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> Because people want to move from some vendor-specific solution with vfio
> to a standard vdpa-compatible one with vdpa.

So now you have a want for new use cases and you turn that into a must
for supporting completely insecure and dangerous crap.
On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > Because people want to move from some vendor-specific solution with
> > vfio to a standard vdpa-compatible one with vdpa.
>
> So now you have a want for new use cases and you turn that into a must
> for supporting completely insecure and dangerous crap.

Nope.

kernel is tainted -> unsupported

whoever supports tainted kernels is already in dangerous waters.
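The taint state referred to here is visible to support tooling as a bitmask in /proc/sys/kernel/tainted. VFIO's no-IOMMU mode sets TAINT_USER (bit 6, flag 'U'); the cover letter says this series would likewise taint the kernel, though it does not spell out which flag. A minimal check, as a sketch:

```c
/* Minimal sketch: report whether the running kernel carries TAINT_USER,
 * taint bit 6 (flag 'U'), the bit VFIO's no-IOMMU mode sets. */
#include <stdio.h>

int main(void)
{
	unsigned long mask = 0;
	FILE *f = fopen("/proc/sys/kernel/tainted", "r");

	if (!f) {
		perror("open /proc/sys/kernel/tainted");
		return 1;
	}
	if (fscanf(f, "%lu", &mask) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("taint mask: %#lx, TAINT_USER: %s\n",
	       mask, (mask & (1UL << 6)) ? "set" : "clear");
	return 0;
}
```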
On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > Because people want to move from some vendor-specific solution with
> > > vfio to a standard vdpa-compatible one with vdpa.
> >
> > So now you have a want for new use cases and you turn that into a must
> > for supporting completely insecure and dangerous crap.
>
> Nope.
>
> kernel is tainted -> unsupported
>
> whoever supports tainted kernels is already in dangerous waters.

That's not a carte blanche for doing whatever crazy stuff you want.

And if you don't trust me I'll add Greg, who has a very clear opinion on
IOMMU-bypassing user I/O hooks in the style of the uio driver as well, I
think :)
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
>
> On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > > So now you have a want for new use cases and you turn that into a
> > > must for supporting completely insecure and dangerous crap.
> >
> > Nope.
> >
> > kernel is tainted -> unsupported
> >
> > whoever supports tainted kernels is already in dangerous waters.
>
> That's not a carte blanche for doing whatever crazy stuff you want.
>
> And if you don't trust me I'll add Greg, who has a very clear opinion on
> IOMMU-bypassing user I/O hooks in the style of the uio driver as well, I
> think :)

This is going in circles, so let me give a summary.

Issue: We need to address the lack of no-IOMMU support in the vhost-vDPA
driver for better performance.

Measured performance: On a "13th Gen Intel(R) Core(TM) i9-13900K, 32
Cores" machine, we observed a performance improvement of 70-80% with
intel_iommu=off when running high-throughput network packet processing.

Rationale for the fix: High-end machines that give better performance
with the IOMMU enabled are very expensive, and certain use cases, such as
embedded environments and trusted applications, do not require the
security features provided by the IOMMU.

Initial approach: We initially considered a driver-based solution,
specifically integrating no-IOMMU support into Marvell's octep-vdpa
driver.

Initial community feedback: The community suggested adopting a VFIO-like
scheme to make the solution more generic and widely applicable.

Decision point: Should we pursue a generic approach for no-IOMMU support
in the vhost-vDPA driver, or should we implement a driver-specific
solution?

Thanks,
Srujana.
On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> This is going in circles, so let me give a summary.
> Issue: We need to address the lack of no-IOMMU support in the vhost-vDPA
> driver for better performance.
> Measured performance: On a "13th Gen Intel(R) Core(TM) i9-13900K, 32
> Cores" machine, we observed

Looks like you are going in circles indeed. Lack of performance is never
a reason to disable the basic memory safety for userspace drivers.

The (also quite bad) reason why vfio-noiommu was added was to support
hardware entirely without an IOMMU.

There is absolutely no reason to ever add kernel code for new methods of
unsafe userspace I/O without an IOMMU.
On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> This is going in circles, so let me give a summary.
> [...]
> Decision point: Should we pursue a generic approach for no-IOMMU support
> in the vhost-vDPA driver, or should we implement a driver-specific
> solution?
>
> Thanks,
> Srujana.

This point does not matter for Christoph.
On Thu, Oct 24, 2024 at 02:05:43AM -0700, Christoph Hellwig wrote:
> On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > kernel is tainted -> unsupported
> >
> > whoever supports tainted kernels is already in dangerous waters.
>
> That's not a carte blanche for doing whatever crazy stuff you want.
>
> And if you don't trust me I'll add Greg, who has a very clear opinion on
> IOMMU-bypassing user I/O hooks in the style of the uio driver as well, I
> think :)

As a supporter of one of the uio drivers, I agree with him.
> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
>
> On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > This is going in circles, so let me give a summary.
> > Issue: We need to address the lack of no-IOMMU support in the
> > vhost-vDPA driver for better performance.
> > Measured performance: On a "13th Gen Intel(R) Core(TM) i9-13900K, 32
> > Cores" machine, we observed [...]
>
> Looks like you are going in circles indeed. Lack of performance is never
> a reason to disable the basic memory safety for userspace drivers.

If security is a priority, the IOMMU should stay enabled. In that case,
this patch wouldn't cause any issues, right?

> The (also quite bad) reason why vfio-noiommu was added was to support
> hardware entirely without an IOMMU.
>
> There is absolutely no reason to ever add kernel code for new methods of
> unsafe userspace I/O without an IOMMU.
> Subject: RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
>
> > On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > > This is going in circles, so let me give a summary.
> > > [...]
> >
> > Looks like you are going in circles indeed. Lack of performance is
> > never a reason to disable the basic memory safety for userspace
> > drivers.
>
> If security is a priority, the IOMMU should stay enabled. In that case,
> this patch wouldn't cause any issues, right?

Another important aspect of this patch is that it enables vhost-vdpa
functionality on machines that do not support an IOMMU, such as the
Intel® Core™ i5-2320 CPU @ 3.00GHz. I believe this enhancement is
essential, as it broadens the compatibility of our system, allowing users
with embedded or less advanced hardware to benefit from vhost-vdpa
features.

Thanks.

> > The (also quite bad) reason why vfio-noiommu was added was to support
> > hardware entirely without an IOMMU.
> >
> > There is absolutely no reason to ever add kernel code for new methods
> > of unsafe userspace I/O without an IOMMU.
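Whether a given device sits behind an IOMMU at all, which is the compatibility point raised above, can be checked from userspace by looking for the device's sysfs iommu_group link; its absence is exactly the situation no-IOMMU modes are meant to cover. A minimal sketch (the PCI address is a placeholder):

```c
/*
 * Minimal sketch: check whether a PCI device sits behind an IOMMU by
 * looking for its sysfs iommu_group link. The device address below is a
 * placeholder; absence of the link is what no-IOMMU modes work around.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *dev = "0000:03:00.0";	/* placeholder PCI address */
	char path[256];

	snprintf(path, sizeof(path),
		 "/sys/bus/pci/devices/%s/iommu_group", dev);

	if (access(path, F_OK) == 0)
		printf("%s: IOMMU group present, translated DMA available\n", dev);
	else
		printf("%s: no IOMMU group, only no-IOMMU style access possible\n", dev);
	return 0;
}
```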