
[v2,2/3] s390: cpu feature for diagnose 318 and limit max VCPUs to 247

Message ID 1544135058-21380-3-git-send-email-walling@linux.ibm.com (mailing list archive)
State New, archived
Series Guest Support for Diagnose 318

Commit Message

Collin Walling Dec. 6, 2018, 10:24 p.m. UTC
Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
it entirely via KVM, we can add guest support for earlier models. A
new CPU feature for diagnose 318 (shortened to diag318) will be made
available to guests starting with the zEC12-full CPU model.

The z14.2 adds a new read SCP info byte (let's call it byte 134) to
detect the availability of diag318. Because of this, we have room for
one less VCPU and thus limit the max VCPUs supported in a configuration
to 247 (down from 248).
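
As a quick back-of-the-envelope check (assuming the usual SCCB_SIZE of 4096 and
the 16-byte struct CPUEntry from include/hw/s390x/sclp.h; the entry offsets are
taken from the ReadInfo layout touched by this patch):

    #include <stdio.h>

    /* Illustrative arithmetic only -- the constants mirror QEMU's
     * include/hw/s390x/sclp.h and the ReadInfo layout in this patch. */
    #define SCCB_SIZE        4096
    #define CPU_ENTRY_SIZE   16
    #define OLD_ENTRY_OFFSET 128   /* entries[] used to start right after hmfai */
    #define NEW_ENTRY_OFFSET 135   /* entries[] now follow byte 134 */

    int main(void)
    {
        printf("old max VCPUs: %d\n", (SCCB_SIZE - OLD_ENTRY_OFFSET) / CPU_ENTRY_SIZE); /* 248 */
        printf("new max VCPUs: %d\n", (SCCB_SIZE - NEW_ENTRY_OFFSET) / CPU_ENTRY_SIZE); /* 247 */
        return 0;
    }

The extra Read SCP Info byte therefore costs exactly one CPU entry.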

Signed-off-by: Collin Walling <walling@linux.ibm.com>
---
 hw/s390x/sclp.c                 | 2 ++
 include/hw/s390x/sclp.h         | 2 ++
 target/s390x/cpu.h              | 2 +-
 target/s390x/cpu_features.c     | 3 +++
 target/s390x/cpu_features.h     | 1 +
 target/s390x/cpu_features_def.h | 3 +++
 target/s390x/gen-features.c     | 1 +
 target/s390x/kvm.c              | 1 +
 8 files changed, 14 insertions(+), 1 deletion(-)

Comments

Cornelia Huck Dec. 7, 2018, 12:08 p.m. UTC | #1
On Thu,  6 Dec 2018 17:24:17 -0500
Collin Walling <walling@linux.ibm.com> wrote:

> Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
> it entirely via KVM, we can add guest support for earlier models. A
> new CPU feature for diagnose 318 (shortened to diag318) will be made
> available to guests starting with the zEC12-full CPU model.
> 
> The z14.2 adds a new read SCP info byte (let's call it byte 134) to
> detect the availability of diag318. Because of this, we have room for
> one less VCPU and thus limit the max VCPUs supported in a configuration
> to 247 (down from 248).
> 
> Signed-off-by: Collin Walling <walling@linux.ibm.com>
> ---
>  hw/s390x/sclp.c                 | 2 ++
>  include/hw/s390x/sclp.h         | 2 ++
>  target/s390x/cpu.h              | 2 +-
>  target/s390x/cpu_features.c     | 3 +++
>  target/s390x/cpu_features.h     | 1 +
>  target/s390x/cpu_features_def.h | 3 +++
>  target/s390x/gen-features.c     | 1 +
>  target/s390x/kvm.c              | 1 +
>  8 files changed, 14 insertions(+), 1 deletion(-)
> 

> diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
> index 8c2320e..594b4a4 100644
> --- a/target/s390x/cpu.h
> +++ b/target/s390x/cpu.h
> @@ -52,7 +52,7 @@
>  
>  #define MMU_USER_IDX 0
>  
> -#define S390_MAX_CPUS 248
> +#define S390_MAX_CPUS 247

Isn't that already problematic if you try to migrate from an older QEMU
with all possible vcpus defined? IOW, don't you really need a way that
older machines can still run with one more vcpu?

>  
>  typedef struct PSW {
>      uint64_t mask;
Collin Walling Dec. 11, 2018, 4:47 p.m. UTC | #2
On 12/7/18 7:08 AM, Cornelia Huck wrote:
> On Thu,  6 Dec 2018 17:24:17 -0500
> Collin Walling <walling@linux.ibm.com> wrote:
> 
>> Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
>> it entirely via KVM, we can add guest support for earlier models. A
>> new CPU feature for diagnose 318 (shortened to diag318) will be made
>> available to guests starting with the zEC12-full CPU model.
>>
>> The z14.2 adds a new read SCP info byte (let's call it byte 134) to
>> detect the availability of diag318. Because of this, we have room for
>> one less VCPU and thus limit the max VCPUs supported in a configuration
>> to 247 (down from 248).
>>
>> Signed-off-by: Collin Walling <walling@linux.ibm.com>
>> ---
>>  hw/s390x/sclp.c                 | 2 ++
>>  include/hw/s390x/sclp.h         | 2 ++
>>  target/s390x/cpu.h              | 2 +-
>>  target/s390x/cpu_features.c     | 3 +++
>>  target/s390x/cpu_features.h     | 1 +
>>  target/s390x/cpu_features_def.h | 3 +++
>>  target/s390x/gen-features.c     | 1 +
>>  target/s390x/kvm.c              | 1 +
>>  8 files changed, 14 insertions(+), 1 deletion(-)
>>
> 
>> diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
>> index 8c2320e..594b4a4 100644
>> --- a/target/s390x/cpu.h
>> +++ b/target/s390x/cpu.h
>> @@ -52,7 +52,7 @@
>>  
>>  #define MMU_USER_IDX 0
>>  
>> -#define S390_MAX_CPUS 248
>> +#define S390_MAX_CPUS 247
> 
> Isn't that already problematic if you try to migrate from an older QEMU
> with all possible vcpus defined? IOW, don't you really need a way that
> older machines can still run with one more vcpu?
> 

Good call. I'll run some tests on this and see what happens. I'll report
here on those results.

>>  
>>  typedef struct PSW {
>>      uint64_t mask;
>
Collin Walling Dec. 11, 2018, 9:12 p.m. UTC | #3
On 12/11/18 11:47 AM, Collin Walling wrote:
> On 12/7/18 7:08 AM, Cornelia Huck wrote:
>> On Thu,  6 Dec 2018 17:24:17 -0500
>> Collin Walling <walling@linux.ibm.com> wrote:
>>
>>> Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
>>> it entirely via KVM, we can add guest support for earlier models. A
>>> new CPU feature for diagnose 318 (shortened to diag318) will be made
>>> available to guests starting with the zEC12-full CPU model.
>>>
>>> The z14.2 adds a new read SCP info byte (let's call it byte 134) to
>>> detect the availability of diag318. Because of this, we have room for
>>> one less VCPU and thus limit the max VCPUs supported in a configuration
>>> to 247 (down from 248).
>>>
>>> Signed-off-by: Collin Walling <walling@linux.ibm.com>
>>> ---
>>>  hw/s390x/sclp.c                 | 2 ++
>>>  include/hw/s390x/sclp.h         | 2 ++
>>>  target/s390x/cpu.h              | 2 +-
>>>  target/s390x/cpu_features.c     | 3 +++
>>>  target/s390x/cpu_features.h     | 1 +
>>>  target/s390x/cpu_features_def.h | 3 +++
>>>  target/s390x/gen-features.c     | 1 +
>>>  target/s390x/kvm.c              | 1 +
>>>  8 files changed, 14 insertions(+), 1 deletion(-)
>>>
>>
>>> diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
>>> index 8c2320e..594b4a4 100644
>>> --- a/target/s390x/cpu.h
>>> +++ b/target/s390x/cpu.h
>>> @@ -52,7 +52,7 @@
>>>  
>>>  #define MMU_USER_IDX 0
>>>  
>>> -#define S390_MAX_CPUS 248
>>> +#define S390_MAX_CPUS 247
>>
>> Isn't that already problematic if you try to migrate from an older QEMU
>> with all possible vcpus defined? IOW, don't you really need a way that
>> older machines can still run with one more vcpu?
>>
> 
> Good call. I'll run some tests on this and see what happens. I'll report
> here on those results.
> 

Migrating to a machine that supports fewer vCPUs will report

error: unsupported configuration: Maximum CPUs greater than specified machine type limit

I revisited the code to see if there's a way to dynamically set the max vCPU count based
on the Read SCP Info size, but it gets really tricky and the code looks very complicated.
(Having a packed struct contain the CPU entries, whose maximum is determined by hardware
limitations, makes things difficult -- but who said s390 is easy? :) )

In reality, do we often have guests running with 248 or even 247 vCPUs? If so, I imagine
the performance impact isn't too significant?
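
(For reference, a minimal sketch of what deriving the limit from the layout
could look like -- hypothetical helper name, assuming the SCCB_SIZE, ReadInfo
and struct CPUEntry definitions from include/hw/s390x/sclp.h:

    #include <stddef.h>   /* offsetof */

    /* Hypothetical helper, not part of this series: derive the CPU limit from
     * the ReadInfo layout instead of hard-coding S390_MAX_CPUS. */
    static unsigned int s390_max_cpus_from_layout(void)
    {
        return (SCCB_SIZE - offsetof(ReadInfo, entries)) / sizeof(struct CPUEntry);
    }

The tricky part is the rest of the code that keys off the compile-time constant.)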

>>>  
>>>  typedef struct PSW {
>>>      uint64_t mask;
>>
> 
>
David Hildenbrand Dec. 12, 2018, 11:20 a.m. UTC | #4
On 11.12.18 22:12, Collin Walling wrote:
> On 12/11/18 11:47 AM, Collin Walling wrote:
>> On 12/7/18 7:08 AM, Cornelia Huck wrote:
>>> On Thu,  6 Dec 2018 17:24:17 -0500
>>> Collin Walling <walling@linux.ibm.com> wrote:
>>>
>>>> Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
>>>> it entirely via KVM, we can add guest support for earlier models. A
>>>> new CPU feature for diagnose 318 (shortened to diag318) will be made
>>>> available to guests starting with the zEC12-full CPU model.
>>>>
>>>> The z14.2 adds a new read SCP info byte (let's call it byte 134) to
>>>> detect the availability of diag318. Because of this, we have room for
>>>> one less VCPU and thus limit the max VCPUs supported in a configuration
>>>> to 247 (down from 248).
>>>>
>>>> Signed-off-by: Collin Walling <walling@linux.ibm.com>
>>>> ---
>>>>  hw/s390x/sclp.c                 | 2 ++
>>>>  include/hw/s390x/sclp.h         | 2 ++
>>>>  target/s390x/cpu.h              | 2 +-
>>>>  target/s390x/cpu_features.c     | 3 +++
>>>>  target/s390x/cpu_features.h     | 1 +
>>>>  target/s390x/cpu_features_def.h | 3 +++
>>>>  target/s390x/gen-features.c     | 1 +
>>>>  target/s390x/kvm.c              | 1 +
>>>>  8 files changed, 14 insertions(+), 1 deletion(-)
>>>>
>>>
>>>> diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
>>>> index 8c2320e..594b4a4 100644
>>>> --- a/target/s390x/cpu.h
>>>> +++ b/target/s390x/cpu.h
>>>> @@ -52,7 +52,7 @@
>>>>  
>>>>  #define MMU_USER_IDX 0
>>>>  
>>>> -#define S390_MAX_CPUS 248
>>>> +#define S390_MAX_CPUS 247
>>>
>>> Isn't that already problematic if you try to migrate from an older QEMU
>>> with all possible vcpus defined? IOW, don't you really need a way that
>>> older machines can still run with one more vcpu?
>>>
>>
>> Good call. I'll run some tests on this and see what happens. I'll report
>> here on those results.
>>
> 
> > Migrating to a machine that supports fewer vCPUs will report
> 
> error: unsupported configuration: Maximum CPUs greater than specified machine type limit
> 
> > I revisited the code to see if there's a way to dynamically set the max vCPU count based
> > on the Read SCP Info size, but it gets really tricky and the code looks very complicated.
> > (Having a packed struct contain the CPU entries, whose maximum is determined by hardware
> > limitations, makes things difficult -- but who said s390 is easy? :) )
> > 
> > In reality, do we often have guests running with 248 or even 247 vCPUs? If so, I imagine
> > the performance impact isn't too significant?
Gluing CPU feature availability to machines is plain ugly. This sounds
like going back to pre-cpu model times ;)

There are two alternatives:

a) Don't model it as a CPU feature in QEMU. Glue it completely to the
QEMU machine. This goes hand-in-hand with the proposal I made in the KVM
thread, that diag318 is to be handled completely in QEMU, always. The
KVM setting part is optional (if KVM + HW support it).

Then we can have two different max_cpus/ReadInfo layouts based on the
machine type. No need to worry about QEMU cpu features.

Once we have other SCLP features (eventually requiring KVM/HW support)
announced in the same feature block, things might get more involved, but
I guess we could handle it somehow.


b) Glue the ReadInfo layout to the CPU feature: we would have to
default-disable the CPU feature for legacy machines. And bail out if
more CPUs are used when the feature is enabled. Hairy.


I guess a) would be the best thing to do. After all, this really does not
sound like a CPU feature but more like a machine feature. But there is
usually a fine line between them.
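
For illustration, option a) could look roughly like this -- the diag318_allowed
field and the function names below are assumptions meant to mirror the existing
versioned-machine compat pattern in hw/s390x/s390-virtio-ccw.c, not code from
any posted patch:

    /* Sketch only: gate the extended ReadInfo layout (and the CPU limit) on the
     * machine type instead of on a CPU feature. */
    static void ccw_machine_legacy_class_options(MachineClass *mc)
    {
        S390CcwMachineClass *s390mc = S390_MACHINE_CLASS(mc);

        s390mc->diag318_allowed = false;   /* old ReadInfo layout, no byte 134 */
        mc->max_cpus = 248;                /* keep the old limit for compat */
    }

    static void ccw_machine_latest_class_options(MachineClass *mc)
    {
        S390CcwMachineClass *s390mc = S390_MACHINE_CLASS(mc);

        s390mc->diag318_allowed = true;    /* expose byte 134 */
        mc->max_cpus = 247;                /* one CPU entry sacrificed */
    }

read_SCP_info() would then check the flag when filling in the SCCB.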
Cornelia Huck Dec. 12, 2018, 1:41 p.m. UTC | #5
On Wed, 12 Dec 2018 12:20:08 +0100
David Hildenbrand <david@redhat.com> wrote:

> On 11.12.18 22:12, Collin Walling wrote:
> > On 12/11/18 11:47 AM, Collin Walling wrote:  
> >> On 12/7/18 7:08 AM, Cornelia Huck wrote:  
> >>> On Thu,  6 Dec 2018 17:24:17 -0500
> >>> Collin Walling <walling@linux.ibm.com> wrote:
> >>>  
> >>>> Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
> >>>> it entirely via KVM, we can add guest support for earlier models. A
> >>>> new CPU feature for diagnose 318 (shortened to diag318) will be made
> >>>> available to guests starting with the zEC12-full CPU model.
> >>>>
> >>>> The z14.2 adds a new read SCP info byte (let's call it byte 134) to
> >>>> detect the availability of diag318. Because of this, we have room for
> >>>> one less VCPU and thus limit the max VCPUs supported in a configuration
> >>>> to 247 (down from 248).
> >>>>
> >>>> Signed-off-by: Collin Walling <walling@linux.ibm.com>
> >>>> ---
> >>>>  hw/s390x/sclp.c                 | 2 ++
> >>>>  include/hw/s390x/sclp.h         | 2 ++
> >>>>  target/s390x/cpu.h              | 2 +-
> >>>>  target/s390x/cpu_features.c     | 3 +++
> >>>>  target/s390x/cpu_features.h     | 1 +
> >>>>  target/s390x/cpu_features_def.h | 3 +++
> >>>>  target/s390x/gen-features.c     | 1 +
> >>>>  target/s390x/kvm.c              | 1 +
> >>>>  8 files changed, 14 insertions(+), 1 deletion(-)
> >>>>  
> >>>  
> >>>> diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
> >>>> index 8c2320e..594b4a4 100644
> >>>> --- a/target/s390x/cpu.h
> >>>> +++ b/target/s390x/cpu.h
> >>>> @@ -52,7 +52,7 @@
> >>>>  
> >>>>  #define MMU_USER_IDX 0
> >>>>  
> >>>> -#define S390_MAX_CPUS 248
> >>>> +#define S390_MAX_CPUS 247  
> >>>
> >>> Isn't that already problematic if you try to migrate from an older QEMU
> >>> with all possible vcpus defined? IOW, don't you really need a way that
> >>> older machines can still run with one more vcpu?
> >>>  
> >>
> >> Good call. I'll run some tests on this and see what happens. I'll report
> >> here on those results.
> >>  
> > 
> > Migrating to a machine that supports fewer vCPUs will report
> > 
> > error: unsupported configuration: Maximum CPUs greater than specified machine type limit
> > 
> > I revisited the code to see if there's a way to dynamically set the max vCPU count based
> > on the Read SCP Info size, but it gets really tricky and the code looks very complicated.
> > (Having a packed struct contain the CPU entries, whose maximum is determined by hardware
> > limitations, makes things difficult -- but who said s390 is easy? :) )
> > 
> > In reality, do we often have guests running with 248 or even 247 vCPUs? If so, I imagine
> > the performance impact isn't too significant?
> Gluing CPU feature availability to machines is plain ugly. This sounds
> like going back to pre-cpu model times ;)
> 
> There are two alternatives:
> 
> a) Don't model it as a CPU feature in QEMU. Glue it completely to the
> QEMU machine. This goes hand-in-hand with the proposal I made in the KVM
> thread, that diag318 is to be handled completely in QEMU, always. The
> KVM setting part is optional (if KVM + HW support it).
> 
> Then we can have two different max_cpus/ReadInfo layouts based on the
> machine type. No need to worry about QEMU cpu features.
> 
> Once we have other SCLP features (eventually requiring KVM/HW support)
> announced in the same feature block, things might get more involved, but
> I guess we could handle it somehow.

Perhaps via a capability to be enabled?

> 
> 
> b) Glue the ReadInfo layout to the CPU feature: we would have to
> default-disable the CPU feature for legacy machines. And bail out if
> more CPUs are used when the feature is enabled. Hairy.
> 
> 
> I guess a) would be the best thing to do. After all, this really does not
> sound like a CPU feature but more like a machine feature. But there is
> usually a fine line between them.

a) sounds like the better option to me as well.
Collin Walling Dec. 12, 2018, 3:01 p.m. UTC | #6
On 12/12/18 8:41 AM, Cornelia Huck wrote:
> On Wed, 12 Dec 2018 12:20:08 +0100
> David Hildenbrand <david@redhat.com> wrote:
> 
>> On 11.12.18 22:12, Collin Walling wrote:
>>> On 12/11/18 11:47 AM, Collin Walling wrote:  
>>>> On 12/7/18 7:08 AM, Cornelia Huck wrote:  
>>>>> On Thu,  6 Dec 2018 17:24:17 -0500
>>>>> Collin Walling <walling@linux.ibm.com> wrote:
>>>>>  
>>>>>> Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
>>>>>> it entirely via KVM, we can add guest support for earlier models. A
>>>>>> new CPU feature for diagnose 318 (shortened to diag318) will be made
>>>>>> available to guests starting with the zEC12-full CPU model.
>>>>>>
>>>>>> The z14.2 adds a new read SCP info byte (let's call it byte 134) to
>>>>>> detect the availability of diag318. Because of this, we have room for
>>>>>> one less VCPU and thus limit the max VCPUs supported in a configuration
>>>>>> to 247 (down from 248).
>>>>>>
>>>>>> Signed-off-by: Collin Walling <walling@linux.ibm.com>
>>>>>> ---
>>>>>>  hw/s390x/sclp.c                 | 2 ++
>>>>>>  include/hw/s390x/sclp.h         | 2 ++
>>>>>>  target/s390x/cpu.h              | 2 +-
>>>>>>  target/s390x/cpu_features.c     | 3 +++
>>>>>>  target/s390x/cpu_features.h     | 1 +
>>>>>>  target/s390x/cpu_features_def.h | 3 +++
>>>>>>  target/s390x/gen-features.c     | 1 +
>>>>>>  target/s390x/kvm.c              | 1 +
>>>>>>  8 files changed, 14 insertions(+), 1 deletion(-)
>>>>>>  
>>>>>  
>>>>>> diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
>>>>>> index 8c2320e..594b4a4 100644
>>>>>> --- a/target/s390x/cpu.h
>>>>>> +++ b/target/s390x/cpu.h
>>>>>> @@ -52,7 +52,7 @@
>>>>>>  
>>>>>>  #define MMU_USER_IDX 0
>>>>>>  
>>>>>> -#define S390_MAX_CPUS 248
>>>>>> +#define S390_MAX_CPUS 247  
>>>>>
>>>>> Isn't that already problematic if you try to migrate from an older QEMU
>>>>> with all possible vcpus defined? IOW, don't you really need a way that
>>>>> older machines can still run with one more vcpu?
>>>>>  
>>>>
>>>> Good call. I'll run some tests on this and see what happens. I'll report
>>>> here on those results.
>>>>  
>>>
>>> Migrating to a machine that supports fewer vCPUs will report
>>>
>>> error: unsupported configuration: Maximum CPUs greater than specified machine type limit
>>>
>>> I revisited the code to see if there's a way to dynamically set the max vCPU count based
>>> on the Read SCP Info size, but it gets really tricky and the code looks very complicated.
>>> (Having a packed struct contain the CPU entries, whose maximum is determined by hardware
>>> limitations, makes things difficult -- but who said s390 is easy? :) )
>>>
>>> In reality, do we often have guests running with 248 or even 247 vCPUs? If so, I imagine
>>> the performance impact isn't too significant?
>> Gluing CPU feature availability to machines is plain ugly. This sounds
>> like going back to pre-cpu model times ;)
>>
>> There are two alternatives:
>>
>> a) Don't model it as a CPU feature in QEMU. Glue it completely to the
>> QEMU machine. This goes hand-in-hand with the proposal I made in the KVM
>> thread, that diag318 is to be handled completely in QEMU, always. The
>> KVM setting part is optional (if KVM + HW support it).
>>
>> Then we can have two different max_cpus/ReadInfo layouts based on the
>> machine type. No need to worry about QEMU cpu features.
>>
>> Once we have other SCLP features (eventually requiring KVM/HW support)
>> announced in the same feature block, things might get more involved, but
>> I guess we could handle it somehow.
> 
> Perhaps via a capability to be enabled?
> 
>>
>>
>> b) Glue the ReadInfo layout to the CPU feature: we would have to
>> default-disable the CPU feature for legacy machines. And bail out if
>> more CPUs are used when the feature is enabled. Hairy.
>>
>>
>> I guess a) would be the best thing to do. After all, this really does not
>> sound like a CPU feature but more like a machine feature. But there is
>> usually a fine line between them.
> 
> a) sounds like the better option to me as well.
> 

I think this makes sense as well. A CPU feature really doesn't make sense if we 
just want to enable this "always", so to speak. I'll get cracking on a rework
of this patch series. It'll take me some time.

In the meantime, I'll return the favor and take a look at the PCI stuff you
guys have posted ;)
Christian Borntraeger Jan. 24, 2019, 8:11 a.m. UTC | #7
On 06.12.2018 23:24, Collin Walling wrote:
> Diagnose 318 is a new z14.2 CPU feature. Since we are able to emulate
> it entirely via KVM, we can add guest support for earlier models. A
> new CPU feature for diagnose 318 (shortened to diag318) will be made
> available to guests starting with the zEC12-full CPU model.
> 
> The z14.2 adds a new read SCP info byte (let's call it byte 134) to
> detect the availability of diag318. Because of this, we have room for
> one less VCPU and thus limit the max VCPUs supported in a configuration
> to 247 (down from 248).
> 
> Signed-off-by: Collin Walling <walling@linux.ibm.com>
> ---
>  hw/s390x/sclp.c                 | 2 ++
>  include/hw/s390x/sclp.h         | 2 ++
>  target/s390x/cpu.h              | 2 +-
>  target/s390x/cpu_features.c     | 3 +++
>  target/s390x/cpu_features.h     | 1 +
>  target/s390x/cpu_features_def.h | 3 +++
>  target/s390x/gen-features.c     | 1 +
>  target/s390x/kvm.c              | 1 +
>  8 files changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c
> index 4510a80..183c627 100644
> --- a/hw/s390x/sclp.c
> +++ b/hw/s390x/sclp.c
> @@ -73,6 +73,8 @@ static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb)
>                           read_info->conf_char);
>      s390_get_feat_block(S390_FEAT_TYPE_SCLP_CONF_CHAR_EXT,
>                           read_info->conf_char_ext);
> +    /* Read Info byte 134 */
> +    s390_get_feat_block(S390_FEAT_TYPE_SCLP_BYTE_134, read_info->byte_134);
>  
>      read_info->facilities = cpu_to_be64(SCLP_HAS_CPU_INFO |
>                                          SCLP_HAS_IOA_RECONFIG);
> diff --git a/include/hw/s390x/sclp.h b/include/hw/s390x/sclp.h
> index f9db243..eb12ba2 100644
> --- a/include/hw/s390x/sclp.h
> +++ b/include/hw/s390x/sclp.h
> @@ -133,6 +133,8 @@ typedef struct ReadInfo {
>      uint16_t highest_cpu;
>      uint8_t  _reserved5[124 - 122];     /* 122-123 */
>      uint32_t hmfai;
> +    uint8_t  _reserved7[134 - 128];     /* 128-133 */
> +    uint8_t  byte_134[1];
>      struct CPUEntry entries[0];
>  } QEMU_PACKED ReadInfo;
 
The size must be a multiple of 16. Can you add a reserved field to fill up
until 144?
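
Something along these lines, presumably (sketch of the padded tail only; the
_reserved8 name is a guess):

    uint8_t  _reserved7[134 - 128];     /* 128-133 */
    uint8_t  byte_134[1];               /* 134     */
    uint8_t  _reserved8[144 - 135];     /* 135-143 */
    struct CPUEntry entries[0];

With entries[] then starting at offset 144, (4096 - 144) / 16 still yields 247,
so the new S390_MAX_CPUS limit is unaffected.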

Patch

diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c
index 4510a80..183c627 100644
--- a/hw/s390x/sclp.c
+++ b/hw/s390x/sclp.c
@@ -73,6 +73,8 @@  static void read_SCP_info(SCLPDevice *sclp, SCCB *sccb)
                          read_info->conf_char);
     s390_get_feat_block(S390_FEAT_TYPE_SCLP_CONF_CHAR_EXT,
                          read_info->conf_char_ext);
+    /* Read Info byte 134 */
+    s390_get_feat_block(S390_FEAT_TYPE_SCLP_BYTE_134, read_info->byte_134);
 
     read_info->facilities = cpu_to_be64(SCLP_HAS_CPU_INFO |
                                         SCLP_HAS_IOA_RECONFIG);
diff --git a/include/hw/s390x/sclp.h b/include/hw/s390x/sclp.h
index f9db243..eb12ba2 100644
--- a/include/hw/s390x/sclp.h
+++ b/include/hw/s390x/sclp.h
@@ -133,6 +133,8 @@  typedef struct ReadInfo {
     uint16_t highest_cpu;
     uint8_t  _reserved5[124 - 122];     /* 122-123 */
     uint32_t hmfai;
+    uint8_t  _reserved7[134 - 128];     /* 128-133 */
+    uint8_t  byte_134[1];
     struct CPUEntry entries[0];
 } QEMU_PACKED ReadInfo;
 
diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
index 8c2320e..594b4a4 100644
--- a/target/s390x/cpu.h
+++ b/target/s390x/cpu.h
@@ -52,7 +52,7 @@ 
 
 #define MMU_USER_IDX 0
 
-#define S390_MAX_CPUS 248
+#define S390_MAX_CPUS 247
 
 typedef struct PSW {
     uint64_t mask;
diff --git a/target/s390x/cpu_features.c b/target/s390x/cpu_features.c
index 60cfeba..d05afa5 100644
--- a/target/s390x/cpu_features.c
+++ b/target/s390x/cpu_features.c
@@ -121,6 +121,9 @@  static const S390FeatDef s390_features[] = {
     FEAT_INIT("pfmfi", S390_FEAT_TYPE_SCLP_CONF_CHAR_EXT, 9, "SIE: PFMF interpretation facility"),
     FEAT_INIT("ibs", S390_FEAT_TYPE_SCLP_CONF_CHAR_EXT, 10, "SIE: Interlock-and-broadcast-suppression facility"),
 
+    /* SCLP SCCB Byte 134 */
+    FEAT_INIT("diag318", S390_FEAT_TYPE_SCLP_BYTE_134, 0, "SIE: Diagnose 318"),
+
     FEAT_INIT("sief2", S390_FEAT_TYPE_SCLP_CPU, 4, "SIE: interception format 2 (Virtual SIE)"),
     FEAT_INIT("skey", S390_FEAT_TYPE_SCLP_CPU, 5, "SIE: Storage-key facility"),
     FEAT_INIT("gpereh", S390_FEAT_TYPE_SCLP_CPU, 10, "SIE: Guest-PER enhancement facility"),
diff --git a/target/s390x/cpu_features.h b/target/s390x/cpu_features.h
index effe790..e7248df 100644
--- a/target/s390x/cpu_features.h
+++ b/target/s390x/cpu_features.h
@@ -23,6 +23,7 @@  typedef enum {
     S390_FEAT_TYPE_STFL,
     S390_FEAT_TYPE_SCLP_CONF_CHAR,
     S390_FEAT_TYPE_SCLP_CONF_CHAR_EXT,
+    S390_FEAT_TYPE_SCLP_BYTE_134,
     S390_FEAT_TYPE_SCLP_CPU,
     S390_FEAT_TYPE_MISC,
     S390_FEAT_TYPE_PLO,
diff --git a/target/s390x/cpu_features_def.h b/target/s390x/cpu_features_def.h
index 5fc7e7b..d99da1d 100644
--- a/target/s390x/cpu_features_def.h
+++ b/target/s390x/cpu_features_def.h
@@ -109,6 +109,9 @@  typedef enum {
     S390_FEAT_SIE_PFMFI,
     S390_FEAT_SIE_IBS,
 
+    /* Read Info Byte 134 */
+    S390_FEAT_DIAG318,
+
     /* Sclp Cpu */
     S390_FEAT_SIE_F2,
     S390_FEAT_SIE_SKEY,
diff --git a/target/s390x/gen-features.c b/target/s390x/gen-features.c
index 70015ea..a3d1457 100644
--- a/target/s390x/gen-features.c
+++ b/target/s390x/gen-features.c
@@ -450,6 +450,7 @@  static uint16_t full_GEN12_GA1[] = {
     S390_FEAT_AP_QUERY_CONFIG_INFO,
     S390_FEAT_AP_FACILITIES_TEST,
     S390_FEAT_AP,
+    S390_FEAT_DIAG318,
 };
 
 static uint16_t full_GEN12_GA2[] = {
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index 2ebf26a..3915e36 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -2142,6 +2142,7 @@  static int kvm_to_feat[][2] = {
     { KVM_S390_VM_CPU_FEAT_PFMFI, S390_FEAT_SIE_PFMFI},
     { KVM_S390_VM_CPU_FEAT_SIGPIF, S390_FEAT_SIE_SIGPIF},
     { KVM_S390_VM_CPU_FEAT_KSS, S390_FEAT_SIE_KSS},
+    { KVM_S390_VM_CPU_FEAT_DIAG318, S390_FEAT_DIAG318},
 };
 
 static int query_cpu_feat(S390FeatBitmap features)
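
For context, a rough sketch of how a table like kvm_to_feat is consumed
(paraphrased, not a verbatim copy of kvm.c -- helper names are approximate):
every feature bit the kernel reports via the KVM CPU-model attribute is
translated into the corresponding QEMU CPU feature, so the single new entry is
enough for S390_FEAT_DIAG318 to be set whenever KVM advertises
KVM_S390_VM_CPU_FEAT_DIAG318.

    /* Sketch only: walk the mapping table after the kernel has filled in the
     * supported-feature bitmap. */
    for (i = 0; i < ARRAY_SIZE(kvm_to_feat); i++) {
        if (test_be_bit(kvm_to_feat[i][0], (uint8_t *) prop.feat)) {
            set_bit(kvm_to_feat[i][1], features);
        }
    }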