
KVM: X86: Extend MAX_IRQ_ROUTES to 4096

Message ID 1524141040-50214-1-git-send-email-wanpengli@tencent.com (mailing list archive)
State New, archived

Commit Message

Wanpeng Li April 19, 2018, 12:30 p.m. UTC
From: Wanpeng Li <wanpengli@tencent.com>

Our virtual machines make use of device assignment, with 12 NVMe disks
configured for high I/O performance. Each NVMe device has 129 MSI-X
table entries:
Capabilities: [50] MSI-X: Enable+ Count=129 Masked-
	Vector table: BAR=0 offset=00002000
The Windows virtual machines fail to boot, since they map every MSI-X table
entry that the NVMe hardware reports to the bus into the MSI routing
table; 12 * 129 = 1548 entries, which exceeds the limit of 1024. This
patch extends KVM_MAX_IRQ_ROUTES to 4096; it can be extended further in
the future if needed.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Tonny Lu <tonnylu@tencent.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Tonny Lu <tonnylu@tencent.com>
---
 include/linux/kvm_host.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
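
For context, the limit bites at the KVM_SET_GSI_ROUTING ioctl: kvm_vm_ioctl()
in virt/kvm/kvm_main.c rejects any routing table whose entry count exceeds
KVM_MAX_IRQ_ROUTES (roughly, if routing.nr > KVM_MAX_IRQ_ROUTES the ioctl
fails with -EINVAL). Below is a minimal userspace sketch of the arithmetic
for the configuration described above; the device and vector counts come from
the report, the program itself is purely illustrative:

#include <stdio.h>

int main(void)
{
	int disks = 12;			/* assigned NVMe devices */
	int vectors = 129;		/* MSI-X table entries per device */
	int routes = disks * vectors;	/* 1548 MSI routes requested */

	printf("MSI routes needed: %d\n", routes);
	printf("fits in 1024 routes: %s\n", routes <= 1024 ? "yes" : "no");
	printf("fits in 4096 routes: %s\n", routes <= 4096 ? "yes" : "no");
	return 0;
}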

Comments

Cornelia Huck April 19, 2018, 1:06 p.m. UTC | #1
On Thu, 19 Apr 2018 05:30:40 -0700
Wanpeng Li <kernellwp@gmail.com> wrote:

> From: Wanpeng Li <wanpengli@tencent.com>
> 
> Our virtual machines make use of device assignment, with 12 NVMe disks
> configured for high I/O performance. Each NVMe device has 129 MSI-X
> table entries:
> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-
> 	Vector table: BAR=0 offset=00002000
> The Windows virtual machines fail to boot, since they map every MSI-X table
> entry that the NVMe hardware reports to the bus into the MSI routing
> table; 12 * 129 = 1548 entries, which exceeds the limit of 1024. This
> patch extends KVM_MAX_IRQ_ROUTES to 4096; it can be extended further in
> the future if needed.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Tonny Lu <tonnylu@tencent.com>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> Signed-off-by: Tonny Lu <tonnylu@tencent.com>
> ---
>  include/linux/kvm_host.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6930c63..815ae66 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1050,7 +1050,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>  #elif defined(CONFIG_ARM64)
>  #define KVM_MAX_IRQ_ROUTES 4096
>  #else
> -#define KVM_MAX_IRQ_ROUTES 1024
> +#define KVM_MAX_IRQ_ROUTES 4096
>  #endif
>  
>  bool kvm_arch_can_set_irq_routing(struct kvm *kvm);

So, this basically means we have 4096 everywhere, no?
Cornelia Huck April 19, 2018, 2:09 p.m. UTC | #2
On Thu, 19 Apr 2018 13:42:55 +0000
Wanpeng Li <wanpeng.li@hotmail.com> wrote:

> On Thu, 19 Apr 2018 05:30:40 -0700
> 
> Wanpeng Li <kernellwp@gmail.com> wrote:
> 
> > From: Wanpeng Li <wanpengli@tencent.com>
> >
> > Our virtual machines make use of device assignment, with 12 NVMe disks
> > configured for high I/O performance. Each NVMe device has 129 MSI-X
> > table entries:
> > Capabilities: [50] MSI-X: Enable+ Count=129 Masked-
> > 	Vector table: BAR=0 offset=00002000
> > The Windows virtual machines fail to boot, since they map every MSI-X table
> > entry that the NVMe hardware reports to the bus into the MSI routing
> > table; 12 * 129 = 1548 entries, which exceeds the limit of 1024. This
> > patch extends KVM_MAX_IRQ_ROUTES to 4096; it can be extended further in
> > the future if needed.
> >
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Radim Krčmář <rkrcmar@redhat.com>
> > Cc: Tonny Lu <tonnylu@tencent.com>
> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > Signed-off-by: Tonny Lu <tonnylu@tencent.com>
> > ---
> >  include/linux/kvm_host.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 6930c63..815ae66 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -1050,7 +1050,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
> >  #elif defined(CONFIG_ARM64)
> >  #define KVM_MAX_IRQ_ROUTES 4096
> >  #else
> > -#define KVM_MAX_IRQ_ROUTES 1024
> > +#define KVM_MAX_IRQ_ROUTES 4096
> >  #endif
> >
> >  bool kvm_arch_can_set_irq_routing(struct kvm *kvm);  
> 
> So, this basically means we have 4096 everywhere, no?
> 
> I suspect different architectures may extend to different limits again according to their requirements.

Yes, but for now, we have the same everywhere (as you also bumped the
limit on power and 32-bit arm, implicitly). If that's ok, we might as
well get rid of the ifdeffery.

Also, my additional remark in f3f710bc64e12 still holds:

"We need to find a more general solution, though, as we can't just grow
the routing table indefinitely."
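
Dropping the ifdeffery as suggested would collapse the whole block to a
single definition; a sketch of what the kvm_host.h hunk might then look
like (illustrative, not the actual v2):

/*
 * One limit for all architectures; might need extension or a more
 * general rework in the future.
 */
#define KVM_MAX_IRQ_ROUTES 4096
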
Wanpeng Li April 20, 2018, 12:55 a.m. UTC | #3
2018-04-19 22:09 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> On Thu, 19 Apr 2018 13:42:55 +0000
> Wanpeng Li <wanpeng.li@hotmail.com> wrote:
>
>> On Thu, 19 Apr 2018 05:30:40 -0700
>>
>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>
>> > From: Wanpeng Li <wanpengli@tencent.com>
>> >
>> > Our virtual machines make use of device assignment, with 12 NVMe disks
>> > configured for high I/O performance. Each NVMe device has 129 MSI-X
>> > table entries:
>> > Capabilities: [50] MSI-X: Enable+ Count=129 Masked-
>> > 	Vector table: BAR=0 offset=00002000
>> > The Windows virtual machines fail to boot, since they map every MSI-X table
>> > entry that the NVMe hardware reports to the bus into the MSI routing
>> > table; 12 * 129 = 1548 entries, which exceeds the limit of 1024. This
>> > patch extends KVM_MAX_IRQ_ROUTES to 4096; it can be extended further in
>> > the future if needed.
>> >
>> > Cc: Paolo Bonzini <pbonzini@redhat.com>
>> > Cc: Radim Krčmář <rkrcmar@redhat.com>
>> > Cc: Tonny Lu <tonnylu@tencent.com>
>> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>> > Signed-off-by: Tonny Lu <tonnylu@tencent.com>
>> > ---
>> >  include/linux/kvm_host.h | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> > index 6930c63..815ae66 100644
>> > --- a/include/linux/kvm_host.h
>> > +++ b/include/linux/kvm_host.h
>> > @@ -1050,7 +1050,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>> >  #elif defined(CONFIG_ARM64)
>> >  #define KVM_MAX_IRQ_ROUTES 4096
>> >  #else
>> > -#define KVM_MAX_IRQ_ROUTES 1024
>> > +#define KVM_MAX_IRQ_ROUTES 4096
>> >  #endif
>> >
>> >  bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
>>
>> So, this basically means we have 4096 everywhere, no?
>>
>> I suspect different architectures may extend to different limits again according to their requirements.
>
> Yes, but for now, we have the same everywhere (as you also bumped the
> limit on power and 32-bit arm, implicitly). If that's ok, we might as
> well get rid of the ifdeffery.

I suspect they will have the same issue when configured like our
production environment, so v2 gets rid of the ifdeffery.

Regards,
Wanpeng Li

Patch

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6930c63..815ae66 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1050,7 +1050,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
 #elif defined(CONFIG_ARM64)
 #define KVM_MAX_IRQ_ROUTES 4096
 #else
-#define KVM_MAX_IRQ_ROUTES 1024
+#define KVM_MAX_IRQ_ROUTES 4096
 #endif
 
 bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
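
For reference, KVM_MAX_IRQ_ROUTES bounds the table that userspace passes
in via the KVM_SET_GSI_ROUTING ioctl; the uapi structure
(include/uapi/linux/kvm.h) is roughly:

struct kvm_irq_routing {
	__u32 nr;	/* number of entries that follow */
	__u32 flags;	/* no flags defined so far, must be zero */
	struct kvm_irq_routing_entry entries[0];
};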