Message ID | 20190829205635.20189-3-krish.sadhukhan@oracle.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: nVMX: Check GUEST_DEBUGCTL and GUEST_DR7 on vmentry of nested guests |
On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan <krish.sadhukhan@oracle.com> wrote: > > According to section "Checks on Guest Control Registers, Debug Registers, and > and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry > of nested guests: > > If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 > field must be 0. Can't we just let the hardware check guest DR7? This results in "VM-entry failure due to invalid guest state," right? And we just reflect that to L1?
On 08/29/2019 03:26 PM, Jim Mattson wrote: > On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan > <krish.sadhukhan@oracle.com> wrote: >> According to section "Checks on Guest Control Registers, Debug Registers, and >> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry >> of nested guests: >> >> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 >> field must be 0. > Can't we just let the hardware check guest DR7? This results in > "VM-entry failure due to invalid guest state," right? And we just > reflect that to L1? Just trying to understand the reason why this particular check can be deferred to the hardware.
On Fri, Aug 30, 2019 at 4:07 PM Krish Sadhukhan <krish.sadhukhan@oracle.com> wrote: > > > > On 08/29/2019 03:26 PM, Jim Mattson wrote: > > On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan > > <krish.sadhukhan@oracle.com> wrote: > >> According to section "Checks on Guest Control Registers, Debug Registers, and > >> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry > >> of nested guests: > >> > >> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 > >> field must be 0. > > Can't we just let the hardware check guest DR7? This results in > > "VM-entry failure due to invalid guest state," right? And we just > > reflect that to L1? > > Just trying to understand the reason why this particular check can be > deferred to the hardware. The vmcs02 field has the same value as the vmcs12 field, and the physical CPU has the same requirements as the virtual CPU.
On Fri, Aug 30, 2019 at 4:15 PM Jim Mattson <jmattson@google.com> wrote: > > On Fri, Aug 30, 2019 at 4:07 PM Krish Sadhukhan > <krish.sadhukhan@oracle.com> wrote: > > > > > > > > On 08/29/2019 03:26 PM, Jim Mattson wrote: > > > On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan > > > <krish.sadhukhan@oracle.com> wrote: > > >> According to section "Checks on Guest Control Registers, Debug Registers, and > > >> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry > > >> of nested guests: > > >> > > >> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 > > >> field must be 0. > > > Can't we just let the hardware check guest DR7? This results in > > > "VM-entry failure due to invalid guest state," right? And we just > > > reflect that to L1? > > > > Just trying to understand the reason why this particular check can be > > deferred to the hardware. > > The vmcs02 field has the same value as the vmcs12 field, and the > physical CPU has the same requirements as the virtual CPU. Actually, you're right. There is a problem. With the current implementation, there's a priority inversion if the vmcs12 contains both illegal guest state for which the checks are deferred to hardware, and illegal entries in the VM-entry MSR-load area. In this case, we will synthesize a "VM-entry failure due to MSR loading" rather than a "VM-entry failure due to invalid guest state." There are so many checks on guest state that it's really compelling to defer as many as possible to hardware. However, we need to fix the aforesaid priority inversion. Instead of returning early from nested_vmx_enter_non_root_mode() with EXIT_REASON_MSR_LOAD_FAIL, we could induce a "VM-entry failure due to MSR loading" for the next VM-entry of vmcs02 and continue with the attempted vmcs02 VM-entry. 
If hardware exits with EXIT_REASON_INVALID_STATE, we reflect that to L1, and if hardware exits with EXIT_REASON_MSR_LOAD_FAIL, we reflect that to L1 (along with the appropriate exit qualification). The tricky part is in undoing the successful MSR writes if we reflect EXIT_REASON_INVALID_STATE to L1. Some MSR writes can't actually be undone (e.g. writes to IA32_PRED_CMD), but maybe we can get away with those. (Fortunately, it's illegal to put x2APIC MSRs in the VM-entry MSR-load area!) Other MSR writes are just a bit tricky to undo (e.g. writes to IA32_TIME_STAMP_COUNTER). Alternatively, we could perform validity checks on the entire vmcs12 VM-entry MSR-load area before writing any of the MSRs. This may be easier, but it would certainly be slower. We would have to be wary of situations where processing an earlier entry affects the validity of a later entry. (If we take this route, then we would also have to process the valid prefix of the VM-entry MSR-load area when we reflect EXIT_REASON_MSR_LOAD_FAIL to L1.) Note that this approach could be extended to permit the deferral of some control field checks to hardware as well. As long as the control field is copied verbatim from vmcs12 to vmcs02 and the virtual CPU enforces the same constraints as the physical CPU, deferral should be fine. We just have to make sure that we induce a "VM-entry failure due to invalid guest state" for the next VM-entry of vmcs02 if any software checks on guest state fail, rather than immediately synthesizing a "VM-entry failure due to invalid guest state" during the construction of vmcs02.
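The "validate the entire area before writing any MSRs" alternative can be pictured as a small dry-run pass. The sketch below is a toy model, not KVM code: the struct follows the SDM's 128-bit MSR-load-entry layout, but `entry_is_legal()` and its rules (reserved bits clear, no FS.base/GS.base, no x2APIC MSRs) are an illustrative subset of the real checks.

```c
#include <stdint.h>
#include <stdbool.h>

/* SDM layout of a VM-entry MSR-load entry: bits 31:0 MSR index,
 * bits 63:32 reserved (must be zero), bits 127:64 value to load. */
struct vmx_msr_entry {
    uint32_t index;
    uint32_t reserved;
    uint64_t value;
};

#define MSR_FS_BASE 0xc0000100u
#define MSR_GS_BASE 0xc0000101u

/* Illustrative subset of the SDM's per-entry legality rules. A real
 * implementation would also have to check whether the WRMSR itself
 * would fault, which is where "an earlier entry affects the validity
 * of a later entry" becomes a problem for a pure dry run. */
static bool entry_is_legal(const struct vmx_msr_entry *e)
{
    if (e->reserved)
        return false;
    if (e->index == MSR_FS_BASE || e->index == MSR_GS_BASE)
        return false;
    if (e->index >= 0x800 && e->index <= 0x8ff)   /* x2APIC range */
        return false;
    return true;
}

/* Scan the whole area before performing any writes. Returns 0 on
 * success, or the 1-based index of the first illegal entry (the value
 * the exit qualification would report to L1). */
static uint32_t validate_msr_load_area(const struct vmx_msr_entry *area,
                                       uint32_t count)
{
    for (uint32_t i = 0; i < count; i++) {
        if (!entry_is_legal(&area[i]))
            return i + 1;
    }
    return 0;
}
```

As the message notes, even a clean dry run doesn't remove the need to undo the writes if a later, hardware-detected guest-state failure must be reflected to L1.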
On Tue, Sep 3, 2019 at 5:59 PM Krish Sadhukhan <krish.sadhukhan@oracle.com> wrote: > > > > On 09/01/2019 05:33 PM, Jim Mattson wrote: > > On Fri, Aug 30, 2019 at 4:15 PM Jim Mattson <jmattson@google.com> wrote: > > On Fri, Aug 30, 2019 at 4:07 PM Krish Sadhukhan > <krish.sadhukhan@oracle.com> wrote: > > On 08/29/2019 03:26 PM, Jim Mattson wrote: > > On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan > <krish.sadhukhan@oracle.com> wrote: > > According to section "Checks on Guest Control Registers, Debug Registers, and > and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry > of nested guests: > > If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 > field must be 0. > > Can't we just let the hardware check guest DR7? This results in > "VM-entry failure due to invalid guest state," right? And we just > reflect that to L1? > > Just trying to understand the reason why this particular check can be > deferred to the hardware. > > The vmcs02 field has the same value as the vmcs12 field, and the > physical CPU has the same requirements as the virtual CPU. > > Actually, you're right. There is a problem. With the current > implementation, there's a priority inversion if the vmcs12 contains > both illegal guest state for which the checks are deferred to > hardware, and illegal entries in the VM-entry MSR-load area. In this > case, we will synthesize a "VM-entry failure due to MSR loading" > rather than a "VM-entry failure due to invalid guest state." > > There are so many checks on guest state that it's really compelling to > defer as many as possible to hardware. However, we need to fix the > aforesaid priority inversion. Instead of returning early from > nested_vmx_enter_non_root_mode() with EXIT_REASON_MSR_LOAD_FAIL, we > could induce a "VM-entry failure due to MSR loading" for the next > VM-entry of vmcs02 and continue with the attempted vmcs02 VM-entry. 
If > hardware exits with EXIT_REASON_INVALID_STATE, we reflect that to L1, > and if hardware exits with EXIT_REASON_MSR_LOAD_FAIL, we reflect that > to L1 (along with the appropriate exit qualification). > > > Looking at nested_vmx_exit_reflected(), it seems we do return to L1 if the error is EXIT_REASON_INVALID_STATE. So if we fix the priority inversion, this should work then ? Yes. > The tricky part is in undoing the successful MSR writes if we reflect > EXIT_REASON_INVALID_STATE to L1. Some MSR writes can't actually be > undone (e.g. writes to IA32_PRED_CMD), but maybe we can get away with > those. (Fortunately, it's illegal to put x2APIC MSRs in the VM-entry > MSR-load area!) Other MSR writes are just a bit tricky to undo (e.g. > writes to IA32_TIME_STAMP_COUNTER). > > > Let's say that the priority inversion issue is fixed. In the scenario in which the Guest state is fine but the VM-entry MSR-Load area contains an illegal entry, you are saying that the induced "VM-entry failure due to MSR loading" will be caught during the next VM-entry of vmcs02. So how far does the attempted VM-entry of vmcs02 continue with an illegal MSR-Load entry and how do we get to the next VM-entry of vmcs02 ? Sorry; I don't understand the questions. > > There are two other scenarios there: > > 1. Guest state is illegal and VM-entry MSR-Load area contains an illegal entry > 2. Guest state is illegal but VM-entry MSR-Load area is fine > > In these scenarios, L2 will exit to L1 with EXIT_REASON_INVALID_STATE and finally this will be returned to L1 userspace. Right ? If so, why do we care about reverting MSR-writes because the SDM section 26.8 says, > > "Processor state is loaded as would be done on a VM exit (see Section 27.5)" I'm not sure how the referenced section of the SDM is relevant. Are you assuming that every MSR in the VM-entry MSR load area also appears in the VM-exit MSR load area? That certainly isn't the case.
> Alternatively, we could perform validity checks on the entire vmcs12 > VM-entry MSR-load area before writing any of the MSRs. This may be > easier, but it would certainly be slower. We would have to be wary of > situations where processing an earlier entry affects the validity of a > later entry. (If we take this route, then we would also have to > process the valid prefix of the VM-entry MSR-load area when we reflect > EXIT_REASON_MSR_LOAD_FAIL to L1.) Forget this paragraph. Even if all of the checks pass, we still have to undo all of the MSR-writes in the event of a deferred "VM-entry failure due to invalid guest state." > Note that this approach could be extended to permit the deferral of > some control field checks to hardware as well. > > > Why can't the first approach be used for VM-entry controls as well ? Sorry; I don't understand this question either. > As long as the control > field is copied verbatim from vmcs12 to vmcs02 and the virtual CPU > enforces the same constraints as the physical CPU, deferral should be > fine. We just have to make sure that we induce a "VM-entry failure due > to invalid guest state" for the next VM-entry of vmcs02 if any > software checks on guest state fail, rather than immediately > synthesizing an "VM-entry failure due to invalid guest state" during > the construction of vmcs02. > > > Is it OK to keep this Guest check in software for now and then remove it once we have a solution in place ? Why do you feel that getting the priority correct is so important for this one check in particular? I'd be surprised if any hypervisor ever assembled a VMCS that failed this check.
On Wed, Sep 04, 2019 at 09:44:58AM -0700, Jim Mattson wrote: > On Tue, Sep 3, 2019 at 5:59 PM Krish Sadhukhan > <krish.sadhukhan@oracle.com> wrote: > > Is it OK to keep this Guest check in software for now and then remove it > > once we have a solution in place ? > > Why do you feel that getting the priority correct is so important for > this one check in particular? I'd be surprised if any hypervisor ever > assembled a VMCS that failed this check. Agreed. I don't see much value in adding a subset of guest state checks, and adding every check will be painfully slow. IMO we're better off finding a solution that allows deferring guest state checks to hardware.
On Sun, Sep 01, 2019 at 05:33:26PM -0700, Jim Mattson wrote: > Actually, you're right. There is a problem. With the current > implementation, there's a priority inversion if the vmcs12 contains > both illegal guest state for which the checks are deferred to > hardware, and illegal entries in the VM-entry MSR-load area. In this > case, we will synthesize a "VM-entry failure due to MSR loading" > rather than a "VM-entry failure due to invalid guest state." > > There are so many checks on guest state that it's really compelling to > defer as many as possible to hardware. However, we need to fix the > aforesaid priority inversion. Instead of returning early from > nested_vmx_enter_non_root_mode() with EXIT_REASON_MSR_LOAD_FAIL, we > could induce a "VM-entry failure due to MSR loading" for the next > VM-entry of vmcs02 and continue with the attempted vmcs02 VM-entry. If > hardware exits with EXIT_REASON_INVALID_STATE, we reflect that to L1, > and if hardware exits with EXIT_REASON_INVALID_STATE, we reflect that > to L1 (along with the appropriate exit qualification). > > The tricky part is in undoing the successful MSR writes if we reflect > EXIT_REASON_INVALID_STATE to L1. Some MSR writes can't actually be > undone (e.g. writes to IA32_PRED_CMD), but maybe we can get away with > those. (Fortunately, it's illegal to put x2APIC MSRs in the VM-entry > MSR-load area!) Other MSR writes are just a bit tricky to undo (e.g. > writes to IA32_TIME_STAMP_COUNTER). > > Alternatively, we could perform validity checks on the entire vmcs12 > VM-entry MSR-load area before writing any of the MSRs. This may be > easier, but it would certainly be slower. We would have to be wary of > situations where processing an earlier entry affects the validity of a > later entry. (If we take this route, then we would also have to > process the valid prefix of the VM-entry MSR-load area when we reflect > EXIT_REASON_MSR_LOAD_FAIL to L1.) Maybe a hybrid of the two, e.g. 
updates that are difficult to unwind are deferred until all checks pass. I suspect the set of MSRs that are difficult to unwind doesn't overlap with the set of MSRs that can affect the legality of a downstream WRMSR. > Note that this approach could be extended to permit the deferral of > some control field checks to hardware as well. As long as the control > field is copied verbatim from vmcs12 to vmcs02 and the virtual CPU > enforces the same constraints as the physical CPU, deferral should be > fine. I doubt it's worth the effort/complexity to defer control checks to hardware. There aren't any control fields that are guaranteed to be copied verbatim, e.g. we'd need accurate prediction of the final value. Eliminating the basic vmx_control_verify() doesn't save much as those are quite speedy, whereas eliminating the individual checks would need to ensure that *all* fields involved in the check are copied verbatim. > We just have to make sure that we induce a "VM-entry failure due > to invalid guest state" for the next VM-entry of vmcs02 if any > software checks on guest state fail, rather than immediately > synthesizing an "VM-entry failure due to invalid guest state" during > the construction of vmcs02.
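The hybrid Sean suggests can be pictured as a two-bucket pass over the load area: writes that are easy to unwind are applied immediately with their old values saved, while hard-to-unwind writes (PRED_CMD-like) are queued and performed only once every entry has validated. The sketch below is a toy model with a fake MSR array; every name and rule is illustrative, not KVM code.

```c
#include <stdint.h>
#include <stdbool.h>

#define FAKE_MSR_SPACE 16

struct msr_write {
    uint32_t index;
    uint64_t value;
};

static uint64_t fake_msrs[FAKE_MSR_SPACE];   /* stand-in for real MSRs */

/* Pretend MSR 0 is a PRED_CMD-like MSR whose write can't be undone. */
static bool msr_is_hard_to_unwind(uint32_t index)
{
    return index == 0;
}

static bool msr_write_is_legal(uint32_t index, uint64_t value)
{
    (void)value;
    return index < FAKE_MSR_SPACE;
}

/* Returns 0 on success, or the 1-based index of the first illegal
 * entry. On failure, every applied write has been rolled back and the
 * hard-to-unwind writes were never performed. */
static uint32_t load_msrs_hybrid(const struct msr_write *area, uint32_t count)
{
    struct msr_write undo[FAKE_MSR_SPACE];       /* saved old values */
    struct msr_write deferred[FAKE_MSR_SPACE];   /* applied only if all pass */
    uint32_t n_undo = 0, n_deferred = 0;

    for (uint32_t i = 0; i < count; i++) {
        if (!msr_write_is_legal(area[i].index, area[i].value)) {
            while (n_undo--)                     /* roll back the easy writes */
                fake_msrs[undo[n_undo].index] = undo[n_undo].value;
            return i + 1;
        }
        if (msr_is_hard_to_unwind(area[i].index)) {
            deferred[n_deferred++] = area[i];
        } else {
            undo[n_undo].index = area[i].index;
            undo[n_undo].value = fake_msrs[area[i].index];
            n_undo++;
            fake_msrs[area[i].index] = area[i].value;
        }
    }
    for (uint32_t i = 0; i < n_deferred; i++)
        fake_msrs[deferred[i].index] = deferred[i].value;
    return 0;
}
```

As Sean points out, this only works if the deferred set doesn't overlap with MSRs that can affect the legality of a downstream write; the toy model simply assumes that.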
On 9/4/19 9:44 AM, Jim Mattson wrote: > On Tue, Sep 3, 2019 at 5:59 PM Krish Sadhukhan > <krish.sadhukhan@oracle.com> wrote: >> >> >> On 09/01/2019 05:33 PM, Jim Mattson wrote: >> >> On Fri, Aug 30, 2019 at 4:15 PM Jim Mattson <jmattson@google.com> wrote: >> >> On Fri, Aug 30, 2019 at 4:07 PM Krish Sadhukhan >> <krish.sadhukhan@oracle.com> wrote: >> >> On 08/29/2019 03:26 PM, Jim Mattson wrote: >> >> On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan >> <krish.sadhukhan@oracle.com> wrote: >> >> According to section "Checks on Guest Control Registers, Debug Registers, and >> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry >> of nested guests: >> >> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 >> field must be 0. >> >> Can't we just let the hardware check guest DR7? This results in >> "VM-entry failure due to invalid guest state," right? And we just >> reflect that to L1? >> >> Just trying to understand the reason why this particular check can be >> deferred to the hardware. >> >> The vmcs02 field has the same value as the vmcs12 field, and the >> physical CPU has the same requirements as the virtual CPU. >> >> Actually, you're right. There is a problem. With the current >> implementation, there's a priority inversion if the vmcs12 contains >> both illegal guest state for which the checks are deferred to >> hardware, and illegal entries in the VM-entry MSR-load area. In this >> case, we will synthesize a "VM-entry failure due to MSR loading" >> rather than a "VM-entry failure due to invalid guest state." >> >> There are so many checks on guest state that it's really compelling to >> defer as many as possible to hardware. However, we need to fix the >> aforesaid priority inversion. 
Instead of returning early from >> nested_vmx_enter_non_root_mode() with EXIT_REASON_MSR_LOAD_FAIL, we >> could induce a "VM-entry failure due to MSR loading" for the next >> VM-entry of vmcs02 and continue with the attempted vmcs02 VM-entry. If >> hardware exits with EXIT_REASON_INVALID_STATE, we reflect that to L1, >> and if hardware exits with EXIT_REASON_INVALID_STATE, we reflect that >> to L1 (along with the appropriate exit qualification). >> >> >> Looking at nested_vmx_exit_reflected(), it seems we do return to L1 if the error is EXIT_REASON_INVALID_STATE. So if we fix the priority inversion, this should work then ? > Yes. > >> The tricky part is in undoing the successful MSR writes if we reflect >> EXIT_REASON_INVALID_STATE to L1. Some MSR writes can't actually be >> undone (e.g. writes to IA32_PRED_CMD), but maybe we can get away with >> those. (Fortunately, it's illegal to put x2APIC MSRs in the VM-entry >> MSR-load area!) Other MSR writes are just a bit tricky to undo (e.g. >> writes to IA32_TIME_STAMP_COUNTER). >> >> >> Let's say that the priority inversion issue is fixed. In the scenario in which the Guest state is fine but the VM-entry MSR-Load area contains an illegal entry, you are saying that the induced "VM-entry failure due to MSR loading" will be caught during the next VM-entry of vmcs02. So how far does the attempted VM-entry of vmcs02 continue with an illegal MSR-Load entry and how do get to the next VM-entry of vmcs02 ? > Sorry; I don't understand the questions. Let's say that all guest state checks are deferred to hardware and that they all will pass. Now, the VM-entry MSR-load area contains an illegal entry and we modify nested_vmx_enter_non_root_mode() to induce a "VM-entry failure due to MSR loading" for the next VM-entry of vmcs02. I wanted to understand how that induced error ultimately leads to a VM-entry failure ? >> There are two other scenarios there: >> >> 1. 
Guest state is illegal and VM-entry MSR-Load area contains an illegal entry >> 2. Guest state is illegal but VM-entry MSR-Load area is fine >> >> In these scenarios, L2 will exit to L1 with EXIT_REASON_INVALID_STATE and finally this will be returned to L1 userspace. Right ? If so, we do we care about reverting MSR-writes because the SDM section 26.8 say, >> >> "Processor state is loaded as would be done on a VM exit (see Section 27.5)" > I'm not sure how the referenced section of the SDM is relevant. Are > you assuming that every MSR in the VM-entry MSR load area also appears > in the VM-exit MSR load area? That certainly isn't the case. > >> Alternatively, we could perform validity checks on the entire vmcs12 >> VM-entry MSR-load area before writing any of the MSRs. This may be >> easier, but it would certainly be slower. We would have to be wary of >> situations where processing an earlier entry affects the validity of a >> later entry. (If we take this route, then we would also have to >> process the valid prefix of the VM-entry MSR-load area when we reflect >> EXIT_REASON_MSR_LOAD_FAIL to L1.) > Forget this paragraph. Even if all of the checks pass, we still have > to undo all of the MSR-writes in the event of a deferred "VM-entry > failure due to invalid guest state." > >> Note that this approach could be extended to permit the deferral of >> some control field checks to hardware as well. >> >> >> Why can't the first approach be used for VM-entry controls as well ? > Sorry; I don't understand this question either. Since you mentioned, "Note that this approach could be extended to permit the deferral of some control field checks..." So it seemed that only the second approach was applicable to deferring VM-entry control checks to hardware. Hence I asked why the first approach can't be used. 
> >> As long as the control >> field is copied verbatim from vmcs12 to vmcs02 and the virtual CPU >> enforces the same constraints as the physical CPU, deferral should be >> fine. We just have to make sure that we induce a "VM-entry failure due >> to invalid guest state" for the next VM-entry of vmcs02 if any >> software checks on guest state fail, rather than immediately >> synthesizing an "VM-entry failure due to invalid guest state" during >> the construction of vmcs02. >> >> >> Is it OK to keep this Guest check in software for now and then remove it once we have a solution in place ? > Why do you feel that getting the priority correct is so important for > this one check in particular? I'd be surprised if any hypervisor ever > assembled a VMCS that failed this check.
On Wed, Sep 4, 2019 at 11:07 AM Krish Sadhukhan <krish.sadhukhan@oracle.com> wrote: > > > On 9/4/19 9:44 AM, Jim Mattson wrote: > > On Tue, Sep 3, 2019 at 5:59 PM Krish Sadhukhan > > <krish.sadhukhan@oracle.com> wrote: > >> > >> > >> On 09/01/2019 05:33 PM, Jim Mattson wrote: > >> > >> On Fri, Aug 30, 2019 at 4:15 PM Jim Mattson <jmattson@google.com> wrote: > >> > >> On Fri, Aug 30, 2019 at 4:07 PM Krish Sadhukhan > >> <krish.sadhukhan@oracle.com> wrote: > >> > >> On 08/29/2019 03:26 PM, Jim Mattson wrote: > >> > >> On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan > >> <krish.sadhukhan@oracle.com> wrote: > >> > >> According to section "Checks on Guest Control Registers, Debug Registers, and > >> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry > >> of nested guests: > >> > >> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 > >> field must be 0. > >> > >> Can't we just let the hardware check guest DR7? This results in > >> "VM-entry failure due to invalid guest state," right? And we just > >> reflect that to L1? > >> > >> Just trying to understand the reason why this particular check can be > >> deferred to the hardware. > >> > >> The vmcs02 field has the same value as the vmcs12 field, and the > >> physical CPU has the same requirements as the virtual CPU. > >> > >> Actually, you're right. There is a problem. With the current > >> implementation, there's a priority inversion if the vmcs12 contains > >> both illegal guest state for which the checks are deferred to > >> hardware, and illegal entries in the VM-entry MSR-load area. In this > >> case, we will synthesize a "VM-entry failure due to MSR loading" > >> rather than a "VM-entry failure due to invalid guest state." > >> > >> There are so many checks on guest state that it's really compelling to > >> defer as many as possible to hardware. However, we need to fix the > >> aforesaid priority inversion. 
Instead of returning early from > >> nested_vmx_enter_non_root_mode() with EXIT_REASON_MSR_LOAD_FAIL, we > >> could induce a "VM-entry failure due to MSR loading" for the next > >> VM-entry of vmcs02 and continue with the attempted vmcs02 VM-entry. If > >> hardware exits with EXIT_REASON_INVALID_STATE, we reflect that to L1, > >> and if hardware exits with EXIT_REASON_INVALID_STATE, we reflect that > >> to L1 (along with the appropriate exit qualification). > >> > >> > >> Looking at nested_vmx_exit_reflected(), it seems we do return to L1 if the error is EXIT_REASON_INVALID_STATE. So if we fix the priority inversion, this should work then ? > > Yes. > > > >> The tricky part is in undoing the successful MSR writes if we reflect > >> EXIT_REASON_INVALID_STATE to L1. Some MSR writes can't actually be > >> undone (e.g. writes to IA32_PRED_CMD), but maybe we can get away with > >> those. (Fortunately, it's illegal to put x2APIC MSRs in the VM-entry > >> MSR-load area!) Other MSR writes are just a bit tricky to undo (e.g. > >> writes to IA32_TIME_STAMP_COUNTER). > >> > >> > >> Let's say that the priority inversion issue is fixed. In the scenario in which the Guest state is fine but the VM-entry MSR-Load area contains an illegal entry, you are saying that the induced "VM-entry failure due to MSR loading" will be caught during the next VM-entry of vmcs02. So how far does the attempted VM-entry of vmcs02 continue with an illegal MSR-Load entry and how do get to the next VM-entry of vmcs02 ? > > Sorry; I don't understand the questions. > > > Let's say that all guest state checks are deferred to hardware and that > they all will pass. Now, the VM-entry MSR-load area contains an illegal > entry and we modify nested_vmx_enter_non_root_mode() to induce a > "VM-entry failure due to MSR loading" for the next VM-entry of vmcs02. I > wanted to understand how that induced error ultimately leads to a > VM-entry failure ? 
One possible implementation is as follows: While nested_vmx_load_msr() is processing the vmcs12 VM-entry MSR-load area, it finds an error in entry <i>. We could set up the vmcs02 VM-entry MSR-load area so that the first entry has <i+1> in the reserved bits, and the VM-entry MSR-load count is greater than 0. Since the reserved bits must be zero, when we try to launch/resume the vmcs02 in vmx_vcpu_run(), it will result in "VM-entry failure due to MSR loading." We can then reflect that to the guest, setting the vmcs12 exit qualification field from the reserved bits in the first entry of the vmcs02 VM-entry MSR-load area, rather than passing on the exit qualification field from the vmcs02. Of course, this doesn't work if <i> is MAX_UINT32, but I suspect you've already got bigger problems in that case. > > >> There are two other scenarios there: > >> > >> 1. Guest state is illegal and VM-entry MSR-Load area contains an illegal entry > >> 2. Guest state is illegal but VM-entry MSR-Load area is fine > >> > >> In these scenarios, L2 will exit to L1 with EXIT_REASON_INVALID_STATE and finally this will be returned to L1 userspace. Right ? If so, we do we care about reverting MSR-writes because the SDM section 26.8 say, > >> > >> "Processor state is loaded as would be done on a VM exit (see Section 27.5)" > > I'm not sure how the referenced section of the SDM is relevant. Are > > you assuming that every MSR in the VM-entry MSR load area also appears > > in the VM-exit MSR load area? That certainly isn't the case. > > > >> Alternatively, we could perform validity checks on the entire vmcs12 > >> VM-entry MSR-load area before writing any of the MSRs. This may be > >> easier, but it would certainly be slower. We would have to be wary of > >> situations where processing an earlier entry affects the validity of a > >> later entry.
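The reserved-bits trick can be sketched as follows. This is again a toy model: the entry layout follows the SDM, but the helper names are hypothetical, not actual KVM functions.

```c
#include <stdint.h>

/* SDM layout of a VM-entry MSR-load entry: bits 31:0 MSR index,
 * bits 63:32 reserved (must be zero), bits 127:64 value to load. */
struct vmx_msr_entry {
    uint32_t index;
    uint32_t reserved;
    uint64_t value;
};

/* After nested_vmx_load_msr() finds an error at vmcs12 entry <i>,
 * poison the first vmcs02 entry with nonzero reserved bits so the next
 * hardware VM-entry fails with "VM-entry failure due to MSR loading". */
static void poison_vmcs02_msr_area(struct vmx_msr_entry *area,
                                   uint32_t *count, uint32_t i)
{
    area[0].reserved = i + 1;   /* nonzero => hardware rejects entry 1 */
    if (*count == 0)
        *count = 1;             /* count must be > 0 for the check to run */
}

/* When reflecting the exit to L1, report the stashed vmcs12 entry
 * number rather than the vmcs02 exit qualification (which would
 * always be 1, the poisoned entry). */
static uint32_t exit_qual_for_l1(const struct vmx_msr_entry *area)
{
    return area[0].reserved;
}
```

Note the corner case from the message: if <i> is MAX_UINT32, then i+1 wraps to 0, the reserved bits are clear again, and the poisoned entry no longer fails.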
(If we take this route, then we would also have to > >> process the valid prefix of the VM-entry MSR-load area when we reflect > >> EXIT_REASON_MSR_LOAD_FAIL to L1.) > > Forget this paragraph. Even if all of the checks pass, we still have > > to undo all of the MSR-writes in the event of a deferred "VM-entry > > failure due to invalid guest state." > > > >> Note that this approach could be extended to permit the deferral of > >> some control field checks to hardware as well. > >> > >> > >> Why can't the first approach be used for VM-entry controls as well ? > > Sorry; I don't understand this question either. > > > Since you mentioned, > > "Note that this approach could be extended to permit the deferral > of some control field checks..." > > So it seemed that only the second approach was applicable to deferring > VM-entry control checks to hardware. Hence I asked why the first > approach can't be used. By "this approach," I meant the deferred delivery of an error discovered in software. > > > >> As long as the control > >> field is copied verbatim from vmcs12 to vmcs02 and the virtual CPU > >> enforces the same constraints as the physical CPU, deferral should be > >> fine. We just have to make sure that we induce a "VM-entry failure due > >> to invalid guest state" for the next VM-entry of vmcs02 if any > >> software checks on guest state fail, rather than immediately > >> synthesizing an "VM-entry failure due to invalid guest state" during > >> the construction of vmcs02. > >> > >> > >> Is it OK to keep this Guest check in software for now and then remove it once we have a solution in place ? > > Why do you feel that getting the priority correct is so important for > > this one check in particular? I'd be surprised if any hypervisor ever > > assembled a VMCS that failed this check.
On 9/4/19 11:20 AM, Jim Mattson wrote: > On Wed, Sep 4, 2019 at 11:07 AM Krish Sadhukhan > <krish.sadhukhan@oracle.com> wrote: >> >> On 9/4/19 9:44 AM, Jim Mattson wrote: >>> On Tue, Sep 3, 2019 at 5:59 PM Krish Sadhukhan >>> <krish.sadhukhan@oracle.com> wrote: >>>> >>>> On 09/01/2019 05:33 PM, Jim Mattson wrote: >>>> >>>> On Fri, Aug 30, 2019 at 4:15 PM Jim Mattson <jmattson@google.com> wrote: >>>> >>>> On Fri, Aug 30, 2019 at 4:07 PM Krish Sadhukhan >>>> <krish.sadhukhan@oracle.com> wrote: >>>> >>>> On 08/29/2019 03:26 PM, Jim Mattson wrote: >>>> >>>> On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan >>>> <krish.sadhukhan@oracle.com> wrote: >>>> >>>> According to section "Checks on Guest Control Registers, Debug Registers, and >>>> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry >>>> of nested guests: >>>> >>>> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7 >>>> field must be 0. >>>> >>>> Can't we just let the hardware check guest DR7? This results in >>>> "VM-entry failure due to invalid guest state," right? And we just >>>> reflect that to L1? >>>> >>>> Just trying to understand the reason why this particular check can be >>>> deferred to the hardware. >>>> >>>> The vmcs02 field has the same value as the vmcs12 field, and the >>>> physical CPU has the same requirements as the virtual CPU. >>>> >>>> Actually, you're right. There is a problem. With the current >>>> implementation, there's a priority inversion if the vmcs12 contains >>>> both illegal guest state for which the checks are deferred to >>>> hardware, and illegal entries in the VM-entry MSR-load area. In this >>>> case, we will synthesize a "VM-entry failure due to MSR loading" >>>> rather than a "VM-entry failure due to invalid guest state." >>>> >>>> There are so many checks on guest state that it's really compelling to >>>> defer as many as possible to hardware. However, we need to fix the >>>> aforesaid priority inversion. 
Instead of returning early from >>>> nested_vmx_enter_non_root_mode() with EXIT_REASON_MSR_LOAD_FAIL, we >>>> could induce a "VM-entry failure due to MSR loading" for the next >>>> VM-entry of vmcs02 and continue with the attempted vmcs02 VM-entry. If >>>> hardware exits with EXIT_REASON_INVALID_STATE, we reflect that to L1, >>>> and if hardware exits with EXIT_REASON_INVALID_STATE, we reflect that >>>> to L1 (along with the appropriate exit qualification). >>>> >>>> >>>> Looking at nested_vmx_exit_reflected(), it seems we do return to L1 if the error is EXIT_REASON_INVALID_STATE. So if we fix the priority inversion, this should work then ? >>> Yes. >>> >>>> The tricky part is in undoing the successful MSR writes if we reflect >>>> EXIT_REASON_INVALID_STATE to L1. Some MSR writes can't actually be >>>> undone (e.g. writes to IA32_PRED_CMD), but maybe we can get away with >>>> those. (Fortunately, it's illegal to put x2APIC MSRs in the VM-entry >>>> MSR-load area!) Other MSR writes are just a bit tricky to undo (e.g. >>>> writes to IA32_TIME_STAMP_COUNTER). >>>> >>>> >>>> Let's say that the priority inversion issue is fixed. In the scenario in which the Guest state is fine but the VM-entry MSR-Load area contains an illegal entry, you are saying that the induced "VM-entry failure due to MSR loading" will be caught during the next VM-entry of vmcs02. So how far does the attempted VM-entry of vmcs02 continue with an illegal MSR-Load entry and how do get to the next VM-entry of vmcs02 ? >>> Sorry; I don't understand the questions. >> >> Let's say that all guest state checks are deferred to hardware and that >> they all will pass. Now, the VM-entry MSR-load area contains an illegal >> entry and we modify nested_vmx_enter_non_root_mode() to induce a >> "VM-entry failure due to MSR loading" for the next VM-entry of vmcs02. I >> wanted to understand how that induced error ultimately leads to a >> VM-entry failure ? 
> One possible implementation is as follows:
>
> While nested_vmx_load_msr() is processing the vmcs12 VM-entry MSR-load
> area, it finds an error in entry <i>. We could set up the vmcs02
> VM-entry MSR-load area so that the first entry has <i+1> in the
> reserved bits, and the VM-entry MSR-load count is greater than 0.
> Since the reserved bits must be zero, when we try to launch/resume the
> vmcs02 in vmx_vcpu_run(), it will result in "VM-entry failure due to
> MSR loading." We can then reflect that to the guest, setting the
> vmcs12 exit qualification field from the reserved bits in the first
> entry of the vmcs02 VM-entry MSR-load area, rather than passing on the
> exit qualification field from the vmcs02. Of course, this doesn't work
> if <i> is MAX_UINT32, but I suspect you've already got bigger problems
> in that case.

It seems like a good solution. The only problem I see in this is that
using the reserved bits is not guaranteed to work forever, as the
hardware vendors can decide to use them at any time.

Instead, I was wondering whether we could set bits 31:0 in the first
entry in the VM-entry MSR-load area of vmcs02 to a value of C0000100H.
According to the Intel SDM, this will cause VM-entry to fail:

    "The value of bits 31:0 is either C0000100H (the
    IA32_FS_BASE MSR) or C0000101H (the IA32_GS_BASE MSR)."

We can use bits 127:64 of that entry to indicate which MSR entry in the
vmcs12 MSR-load area had an error, and then we synthesize an exit
qualification from that information.

>
>>>> There are two other scenarios there:
>>>>
>>>> 1. Guest state is illegal and VM-entry MSR-load area contains an illegal entry
>>>> 2. Guest state is illegal but VM-entry MSR-load area is fine
>>>>
>>>> In these scenarios, L2 will exit to L1 with EXIT_REASON_INVALID_STATE
>>>> and finally this will be returned to L1 userspace. Right ?
>>>> If so, why do we care about reverting MSR writes, given that SDM
>>>> section 26.8 says,
>>>>
>>>> "Processor state is loaded as would be done on a VM exit (see Section 27.5)"
>>> I'm not sure how the referenced section of the SDM is relevant. Are
>>> you assuming that every MSR in the VM-entry MSR-load area also appears
>>> in the VM-exit MSR-load area? That certainly isn't the case.
>>>
>>>> Alternatively, we could perform validity checks on the entire vmcs12
>>>> VM-entry MSR-load area before writing any of the MSRs. This may be
>>>> easier, but it would certainly be slower. We would have to be wary of
>>>> situations where processing an earlier entry affects the validity of a
>>>> later entry. (If we take this route, then we would also have to
>>>> process the valid prefix of the VM-entry MSR-load area when we reflect
>>>> EXIT_REASON_MSR_LOAD_FAIL to L1.)
>>> Forget this paragraph. Even if all of the checks pass, we still have
>>> to undo all of the MSR writes in the event of a deferred "VM-entry
>>> failure due to invalid guest state."
>>>
>>>> Note that this approach could be extended to permit the deferral of
>>>> some control field checks to hardware as well.
>>>>
>>>> Why can't the first approach be used for VM-entry controls as well ?
>>> Sorry; I don't understand this question either.
>>
>> Since you mentioned,
>>
>> "Note that this approach could be extended to permit the deferral
>> of some control field checks..."
>>
>> it seemed that only the second approach was applicable to deferring
>> VM-entry control checks to hardware. Hence I asked why the first
>> approach can't be used.
> By "this approach," I meant the deferred delivery of an error
> discovered in software.
>
>>>> As long as the control
>>>> field is copied verbatim from vmcs12 to vmcs02 and the virtual CPU
>>>> enforces the same constraints as the physical CPU, deferral should be
>>>> fine.
>>>> We just have to make sure that we induce a "VM-entry failure due
>>>> to invalid guest state" for the next VM-entry of vmcs02 if any
>>>> software checks on guest state fail, rather than immediately
>>>> synthesizing a "VM-entry failure due to invalid guest state" during
>>>> the construction of vmcs02.
>>>>
>>>> Is it OK to keep this guest check in software for now and then remove
>>>> it once we have a solution in place ?
>>> Why do you feel that getting the priority correct is so important for
>>> this one check in particular? I'd be surprised if any hypervisor ever
>>> assembled a VMCS that failed this check.
On Sun, Sep 8, 2019 at 9:11 PM Krish Sadhukhan
<krish.sadhukhan@oracle.com> wrote:
> It seems like a good solution. The only problem I see in this is that
> using the reserved bits is not guaranteed to work forever, as the
> hardware vendors can decide to use them at any time.

Unlikely, but point taken.

> Instead, I was wondering whether we could set bits 31:0 in the first
> entry in the VM-entry MSR-load area of vmcs02 to a value of C0000100H.
> According to the Intel SDM, this will cause VM-entry to fail:
>
> "The value of bits 31:0 is either C0000100H (the
> IA32_FS_BASE MSR) or C0000101H (the IA32_GS_BASE MSR)."
>
> We can use bits 127:64 of that entry to indicate which MSR entry in the
> vmcs12 MSR-load area had an error, and then we synthesize an exit
> qualification from that information.

That seems reasonable to me.
On Fri, Aug 30, 2019 at 4:15 PM Jim Mattson <jmattson@google.com> wrote:
>
> On Fri, Aug 30, 2019 at 4:07 PM Krish Sadhukhan
> <krish.sadhukhan@oracle.com> wrote:
> >
> > On 08/29/2019 03:26 PM, Jim Mattson wrote:
> > > On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan
> > > <krish.sadhukhan@oracle.com> wrote:
> > >> According to section "Checks on Guest Control Registers, Debug Registers,
> > >> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry
> > >> of nested guests:
> > >>
> > >> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7
> > >> field must be 0.
> > > Can't we just let the hardware check guest DR7? This results in
> > > "VM-entry failure due to invalid guest state," right? And we just
> > > reflect that to L1?
> >
> > Just trying to understand the reason why this particular check can be
> > deferred to the hardware.
>
> The vmcs02 field has the same value as the vmcs12 field, and the
> physical CPU has the same requirements as the virtual CPU.

Sadly, I was mistaken. The guest DR7 value is not transferred from
vmcs12 to vmcs02. It is set prior to the vmcs02 VM-entry by
kvm_set_dr(). Unfortunately, that function synthesizes a #GP if any bit
in the high dword of DR7 is set. So, you are correct, Krish: this field
must be checked in software.
On Thu, Aug 29, 2019 at 2:25 PM Krish Sadhukhan
<krish.sadhukhan@oracle.com> wrote:
>
> According to section "Checks on Guest Control Registers, Debug Registers,
> and MSRs" in Intel SDM vol 3C, the following checks are performed on vmentry
> of nested guests:
>
> If the "load debug controls" VM-entry control is 1, bits 63:32 in the DR7
> field must be 0.
>
> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
> ---
>  arch/x86/kvm/vmx/nested.c | 6 ++++++
>  arch/x86/kvm/x86.c        | 2 +-
>  arch/x86/kvm/x86.h        | 6 ++++++
>  3 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 0b234e95e0ed..f04619daf906 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -2681,6 +2681,12 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
>  	    !kvm_debugctl_valid(vmcs12->guest_ia32_debugctl))
>  		return -EINVAL;
>
> +#ifdef CONFIG_X86_64
> +	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
> +	    !kvm_dr7_valid(vmcs12->guest_dr7))
> +		return -EINVAL;
> +#endif
> +
>  	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT) &&
>  	    !kvm_pat_valid(vmcs12->guest_ia32_pat))
>  		return -EINVAL;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fafd81d2c9ea..423a7a573608 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1051,7 +1051,7 @@ static int __kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
>  	case 5:
>  		/* fall through */
>  	default: /* 7 */
> -		if (val & 0xffffffff00000000ULL)
> +		if (!kvm_dr7_valid(val))
>  			return -1; /* #GP */
>  		vcpu->arch.dr7 = (val & DR7_VOLATILE) | DR7_FIXED_1;
>  		kvm_update_dr7(vcpu);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 28ba6d0c359f..4e55851fc3fb 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -360,6 +360,12 @@ static inline bool kvm_debugctl_valid(u64 data)
>  	return ((data & 0xFFFFFFFFFFFF203Cull) ? false : true);
>  }
>
> +static inline bool kvm_dr7_valid(u64 data)

This should be 'unsigned long data.'

> +{
> +	/* Bits [63:32] are reserved */
> +	return ((data & 0xFFFFFFFF00000000ull) ? false : true);

	return !(data & 0xFFFFFFFF00000000ull);

Or, shorter:

	return (u32)data == data;

> +}
> +
>  void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
>  void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
>
> --
> 2.20.1
>
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 0b234e95e0ed..f04619daf906 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2681,6 +2681,12 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
 	    !kvm_debugctl_valid(vmcs12->guest_ia32_debugctl))
 		return -EINVAL;
 
+#ifdef CONFIG_X86_64
+	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
+	    !kvm_dr7_valid(vmcs12->guest_dr7))
+		return -EINVAL;
+#endif
+
 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT) &&
 	    !kvm_pat_valid(vmcs12->guest_ia32_pat))
 		return -EINVAL;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fafd81d2c9ea..423a7a573608 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1051,7 +1051,7 @@ static int __kvm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long val)
 	case 5:
 		/* fall through */
 	default: /* 7 */
-		if (val & 0xffffffff00000000ULL)
+		if (!kvm_dr7_valid(val))
 			return -1; /* #GP */
 		vcpu->arch.dr7 = (val & DR7_VOLATILE) | DR7_FIXED_1;
 		kvm_update_dr7(vcpu);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 28ba6d0c359f..4e55851fc3fb 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -360,6 +360,12 @@ static inline bool kvm_debugctl_valid(u64 data)
 	return ((data & 0xFFFFFFFFFFFF203Cull) ? false : true);
 }
 
+static inline bool kvm_dr7_valid(u64 data)
+{
+	/* Bits [63:32] are reserved */
+	return ((data & 0xFFFFFFFF00000000ull) ? false : true);
+}
+
 void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
 void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);