[v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

Message ID 1524185248-51744-1-git-send-email-wanpengli@tencent.com (mailing list archive)
State New, archived

Commit Message

Wanpeng Li April 20, 2018, 12:47 a.m. UTC
From: Wanpeng Li <wanpengli@tencent.com>

Our virtual machines make use of device assignment, configuring
12 NVMe disks for high I/O performance. Each NVMe device has 129
MSI-X table entries:
Capabilities: [50] MSI-X: Enable+ Count=129 Masked- Vector table: BAR=0 offset=00002000
The Windows virtual machines fail to boot because they map one MSI
routing-table entry for every MSI-X table entry that the NVMe hardware
reports to the bus; 12 x 129 = 1548 entries, which exceeds the current
1024-entry limit. This patch extends MAX_IRQ_ROUTES to 4096 for all
archs; it might be extended again in the future if needed.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Tonny Lu <tonnylu@tencent.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Tonny Lu <tonnylu@tencent.com>
---
v1 -> v2:
 * extend MAX_IRQ_ROUTES to 4096 for all archs 

 include/linux/kvm_host.h | 6 ------
 1 file changed, 6 deletions(-)
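
To make the failure mode concrete, below is a minimal userspace sketch
(not part of the patch) that requests more GSI routes than the old
limit allows. It assumes an x86 host with an in-kernel irqchip, and the
MSI address/data values are placeholders, not a real device
programming sequence.

/* gsi_routes.c: install 12 * 129 = 1548 MSI routes via
 * KVM_SET_GSI_ROUTING. With KVM_MAX_IRQ_ROUTES == 1024 the ioctl
 * fails with EINVAL; with a 4096 limit it succeeds. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	unsigned int nr = 12 * 129;	/* 1548 > the old 1024 limit */
	struct kvm_irq_routing *table;
	int kvm, vm;

	kvm = open("/dev/kvm", O_RDWR);
	vm = ioctl(kvm, KVM_CREATE_VM, 0);
	if (kvm < 0 || vm < 0) {
		perror("kvm");
		return 1;
	}
	/* x86 requires an in-kernel irqchip before routes can be set */
	if (ioctl(vm, KVM_CREATE_IRQCHIP, 0) < 0) {
		perror("KVM_CREATE_IRQCHIP");
		return 1;
	}
	table = calloc(1, sizeof(*table) + nr * sizeof(table->entries[0]));
	table->nr = nr;
	for (unsigned int i = 0; i < nr; i++) {
		table->entries[i].gsi = i;
		table->entries[i].type = KVM_IRQ_ROUTING_MSI;
		table->entries[i].u.msi.address_lo = 0xfee00000; /* placeholder */
		table->entries[i].u.msi.data = i;
	}
	if (ioctl(vm, KVM_SET_GSI_ROUTING, table) < 0)
		perror("KVM_SET_GSI_ROUTING");	/* EINVAL when nr > limit */
	else
		printf("installed %u routes\n", nr);
	free(table);
	return 0;
}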

Comments

Cornelia Huck April 20, 2018, 7:15 a.m. UTC | #1
On Thu, 19 Apr 2018 17:47:28 -0700
Wanpeng Li <kernellwp@gmail.com> wrote:

> [...]
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6930c63..0a5c299 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>  
>  #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>  
> -#ifdef CONFIG_S390
>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...

What about /* might need extension/rework in the future */ instead of
the FIXME?

As far as I understand, 4096 should cover most architectures and the
sane end of s390 configurations, but will not be enough at the scarier
end of s390. (I'm not sure how much it matters in practice.)

Do we want to make this a tuneable in the future? Do some kind of
dynamic allocation? Not sure whether it is worth the trouble.
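
(For illustration only: one shape such a tunable could take is a
read-only module parameter consulted instead of the compile-time
constant. This is purely a hypothetical sketch -- kvm_max_irq_routes
is an invented name, not an existing KVM knob.)

#include <linux/moduleparam.h>

static unsigned int kvm_max_irq_routes = 4096;
module_param(kvm_max_irq_routes, uint, 0444);
MODULE_PARM_DESC(kvm_max_irq_routes,
		 "Upper bound on entries accepted by KVM_SET_GSI_ROUTING");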

> -#elif defined(CONFIG_ARM64)
> -#define KVM_MAX_IRQ_ROUTES 4096
> -#else
> -#define KVM_MAX_IRQ_ROUTES 1024
> -#endif
>  
>  bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
>  int kvm_set_irq_routing(struct kvm *kvm,
Wanpeng Li April 20, 2018, 1:51 p.m. UTC | #2
2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> On Thu, 19 Apr 2018 17:47:28 -0700
> Wanpeng Li <kernellwp@gmail.com> wrote:
>
>> [...]
>> -#ifdef CONFIG_S390
>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>
> What about /* might need extension/rework in the future */ instead of
> the FIXME?

Yeah, I guess the maintainers can help to fix it when applying. :)

>
> As far as I understand, 4096 should cover most architectures and the
> sane end of s390 configurations, but will not be enough at the scarier
> end of s390. (I'm not sure how much it matters in practice.)
>
> Do we want to make this a tuneable in the future? Do some kind of
> dynamic allocation? Not sure whether it is worth the trouble.

I think we should keep it as it is for now.

Regards,
Wanpeng Li
Cornelia Huck April 20, 2018, 2:21 p.m. UTC | #3
On Fri, 20 Apr 2018 21:51:13 +0800
Wanpeng Li <kernellwp@gmail.com> wrote:

> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> > On Thu, 19 Apr 2018 17:47:28 -0700
> > Wanpeng Li <kernellwp@gmail.com> wrote:
> >  
> >> [...]
> >> -#ifdef CONFIG_S390
> >>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...  
> >
> > What about /* might need extension/rework in the future */ instead of
> > the FIXME?  
> 
> Yeah, I guess the maintainers can help to fix it when applying. :)
> 
> >
> > As far as I understand, 4096 should cover most architectures and the
> > sane end of s390 configurations, but will not be enough at the scarier
> > end of s390. (I'm not sure how much it matters in practice.)
> >
> > Do we want to make this a tuneable in the future? Do some kind of
> > dynamic allocation? Not sure whether it is worth the trouble.  
> 
> I think we should keep it as it is for now.

My main question here is how long this is enough... the number of
virtqueues per device is up to 1K from the initial 64, which makes it
possible to hit the 4K limit with fewer virtio devices than before (on
s390, each virtqueue uses a routing table entry). OTOH, we don't want
giant tables everywhere just to accommodate s390.

If the s390 maintainers tell me that nobody is doing the really insane
stuff, I'm happy as well :)
Wanpeng Li April 21, 2018, 12:38 a.m. UTC | #4
2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
> On Fri, 20 Apr 2018 21:51:13 +0800
> Wanpeng Li <kernellwp@gmail.com> wrote:
>
>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>> > On Thu, 19 Apr 2018 17:47:28 -0700
>> > Wanpeng Li <kernellwp@gmail.com> wrote:
>> >
>> >> [...]
>> >> -#ifdef CONFIG_S390
>> >>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>> >
>> > What about /* might need extension/rework in the future */ instead of
>> > the FIXME?
>>
>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>
>> >
>> > As far as I understand, 4096 should cover most architectures and the
>> > sane end of s390 configurations, but will not be enough at the scarier
>> > end of s390. (I'm not sure how much it matters in practice.)
>> >
>> > Do we want to make this a tuneable in the future? Do some kind of
>> > dynamic allocation? Not sure whether it is worth the trouble.
>>
>> I think we should keep it as it is for now.
>
> My main question here is how long this is enough... the number of
> virtqueues per device is up to 1K from the initial 64, which makes it
> possible to hit the 4K limit with fewer virtio devices than before (on
> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> giant tables everywhere just to accommodate s390.

I suspect there is no real scenario that needs a further extension for
s390, since nobody has reported one.

> If the s390 maintainers tell me that nobody is doing the really insane
> stuff, I'm happy as well :)

Christian, any thoughts?

Regards,
Wanpeng Li
Christian Borntraeger April 23, 2018, 11:50 a.m. UTC | #5
On 04/21/2018 02:38 AM, Wanpeng Li wrote:
> 2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>> On Fri, 20 Apr 2018 21:51:13 +0800
>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>
>>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>>>> On Thu, 19 Apr 2018 17:47:28 -0700
>>>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>>>
>>>>> [...]
>>>>> -#ifdef CONFIG_S390
>>>>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>>>>
>>>> What about /* might need extension/rework in the future */ instead of
>>>> the FIXME?
>>>
>>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>>
>>>>
>>>> As far as I understand, 4096 should cover most architectures and the
>>>> sane end of s390 configurations, but will not be enough at the scarier
>>>> end of s390. (I'm not sure how much it matters in practice.)
>>>>
>>>> Do we want to make this a tuneable in the future? Do some kind of
>>>> dynamic allocation? Not sure whether it is worth the trouble.
>>>
>>> I think we should keep it as it is for now.
>>
>> My main question here is how long this is enough... the number of
>> virtqueues per device is up to 1K from the initial 64, which makes it
>> possible to hit the 4K limit with fewer virtio devices than before (on
>> s390, each virtqueue uses a routing table entry). OTOH, we don't want
>> giant tables everywhere just to accommodate s390.
> 
> I suspect there is no real scenario that needs a further extension for
> s390, since nobody has reported one.
> 
>> If the s390 maintainers tell me that nobody is doing the really insane
>> stuff, I'm happy as well :)
> 
> Christian, any thoughts?

For now this patch is a no-op for s390 so as long as nobody complains today we are good.
If it turns out to be "not enough" we can then add a configurable number or whatever.
Wanpeng Li April 23, 2018, 11:56 a.m. UTC | #6
2018-04-23 19:50 GMT+08:00 Christian Borntraeger <borntraeger@de.ibm.com>:
>
>
> On 04/21/2018 02:38 AM, Wanpeng Li wrote:
>> 2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>>> On Fri, 20 Apr 2018 21:51:13 +0800
>>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>>
>>>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:
>>>>> On Thu, 19 Apr 2018 17:47:28 -0700
>>>>> Wanpeng Li <kernellwp@gmail.com> wrote:
>>>>>
>>>>>> [...]
>>>>>> -#ifdef CONFIG_S390
>>>>>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>>>>>
>>>>> What about /* might need extension/rework in the future */ instead of
>>>>> the FIXME?
>>>>
>>>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>>>
>>>>>
>>>>> As far as I understand, 4096 should cover most architectures and the
>>>>> sane end of s390 configurations, but will not be enough at the scarier
>>>>> end of s390. (I'm not sure how much it matters in practice.)
>>>>>
>>>>> Do we want to make this a tuneable in the future? Do some kind of
>>>>> dynamic allocation? Not sure whether it is worth the trouble.
>>>>
>>>> I think we should keep it as it is for now.
>>>
>>> My main question here is how long this is enough... the number of
>>> virtqueues per device is up to 1K from the initial 64, which makes it
>>> possible to hit the 4K limit with fewer virtio devices than before (on
>>> s390, each virtqueue uses a routing table entry). OTOH, we don't want
>>> giant tables everywhere just to accommodate s390.
>>
>> I suspect there is no real scenario that needs a further extension for
>> s390, since nobody has reported one.
>>
>>> If the s390 maintainers tell me that nobody is doing the really insane
>>> stuff, I'm happy as well :)
>>
>> Christian, any thoughts?
>
> For now this patch is a no-op for s390 so as long as nobody complains today we are good.
> If it turns out to be "not enough" we can then add a configurable number or whatever.

Thanks Christian. Paolo, could you pick this one up with the "/* might
need extension/rework in the future */" comment instead of the FIXME,
or do you need me to send out a new version? :)

Regards,
Wanpeng Li
Cornelia Huck April 23, 2018, 11:57 a.m. UTC | #7
On Mon, 23 Apr 2018 13:50:48 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 04/21/2018 02:38 AM, Wanpeng Li wrote:
> > 2018-04-20 22:21 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:  
> >> On Fri, 20 Apr 2018 21:51:13 +0800
> >> Wanpeng Li <kernellwp@gmail.com> wrote:
> >>  
> >>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <cohuck@redhat.com>:  
> >>>> On Thu, 19 Apr 2018 17:47:28 -0700
> >>>> Wanpeng Li <kernellwp@gmail.com> wrote:
> >>>>  
> >>>>> [...]
> >>>>> -#ifdef CONFIG_S390
> >>>>>  #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...  
> >>>>
> >>>> What about /* might need extension/rework in the future */ instead of
> >>>> the FIXME?  
> >>>
> >>> Yeah, I guess the maintainers can help to fix it when applying. :)
> >>>  
> >>>>
> >>>> As far as I understand, 4096 should cover most architectures and the
> >>>> sane end of s390 configurations, but will not be enough at the scarier
> >>>> end of s390. (I'm not sure how much it matters in practice.)
> >>>>
> >>>> Do we want to make this a tuneable in the future? Do some kind of
> >>>> dynamic allocation? Not sure whether it is worth the trouble.  
> >>>
> >>> I think we should keep it as it is for now.
> >>
> >> My main question here is how long this is enough... the number of
> >> virtqueues per device is up to 1K from the initial 64, which makes it
> >> possible to hit the 4K limit with fewer virtio devices than before (on
> >> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> >> giant tables everywhere just to accommodate s390.  
> > 
> > I suspect there is no real scenario that needs a further extension for
> > s390, since nobody has reported one.
> >   
> >> If the s390 maintainers tell me that nobody is doing the really insane
> >> stuff, I'm happy as well :)  
> > 
> > Christian, any thoughts?  
> 
> For now this patch is a no-op for s390 so as long as nobody complains today we are good.
> If it turns out to be "not enough" we can then add a configurable number or whatever. 

OK, then let's deal with the problem once it shows up.

With the comment changed as suggested above,

Reviewed-by: Cornelia Huck <cohuck@redhat.com>

Patch

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6930c63..0a5c299 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
 
 #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
 
-#ifdef CONFIG_S390
 #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
-#elif defined(CONFIG_ARM64)
-#define KVM_MAX_IRQ_ROUTES 4096
-#else
-#define KVM_MAX_IRQ_ROUTES 1024
-#endif
 
 bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
 int kvm_set_irq_routing(struct kvm *kvm,
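
For reference, KVM_MAX_IRQ_ROUTES bounds what userspace may pass to
KVM_SET_GSI_ROUTING; the check lives in kvm_vm_ioctl() in
virt/kvm/kvm_main.c. An abridged sketch of the relevant path (error
handling and copy-in of the entries trimmed):

	case KVM_SET_GSI_ROUTING: {
		struct kvm_irq_routing routing;

		r = -EFAULT;
		if (copy_from_user(&routing, argp, sizeof(routing)))
			goto out;
		r = -EINVAL;
		if (!kvm_arch_can_set_irq_routing(kvm))
			goto out;
		/* This is the rejection that 12 * 129 = 1548 MSI-X
		 * vectors ran into with the old 1024 limit. */
		if (routing.nr > KVM_MAX_IRQ_ROUTES)
			goto out;
		...
	}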