diff mbox series

[RFC,v7,01/64] KVM: Fix memslot boundary condition for large page

Message ID 20221214194056.161492-2-michael.roth@amd.com (mailing list archive)
State New
Series Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support

Commit Message

Michael Roth Dec. 14, 2022, 7:39 p.m. UTC
From: Nikunj A Dadhania <nikunj@amd.com>

Aligned end boundary causes a kvm crash, handle the case.

Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/kvm/mmu/mmu.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Borislav Petkov Dec. 22, 2022, 12:16 p.m. UTC | #1
On Wed, Dec 14, 2022 at 01:39:53PM -0600, Michael Roth wrote:
> From: Nikunj A Dadhania <nikunj@amd.com>
> 
> Aligned end boundary causes a kvm crash, handle the case.
> 
> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b1953ebc012e..b3ffc61c668c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -7159,6 +7159,9 @@ static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
>  		for (gfn = first + pages; gfn < last; gfn += pages)
>  			linfo_set_mixed(gfn, slot, level, false);
>  
> +		if (gfn == last)
> +			goto out;

I'm guessing this was supposed to be "return;" here:

arch/x86/kvm/mmu/mmu.c: In function ‘kvm_update_lpage_private_shared_mixed’:
arch/x86/kvm/mmu/mmu.c:7090:25: error: label ‘out’ used but not defined
 7090 |                         goto out;
      |                         ^~~~

/me goes and digs deeper.

Aha, it was a "return" but you reordered the patches and the one adding
the out label:

KVM: x86: Add 'update_mem_attr' x86 op

went further down and this became the first but it didn't have the label
anymore.

Yeah, each patch needs to build successfully for bisection reasons, ofc.

Thx.
Jarkko Sakkinen Jan. 4, 2023, 12:01 p.m. UTC | #2
On Wed, Dec 14, 2022 at 01:39:53PM -0600, Michael Roth wrote:
> From: Nikunj A Dadhania <nikunj@amd.com>
> 
> Aligned end boundary causes a kvm crash, handle the case.
> 

Link: https://lore.kernel.org/kvm/20221202061347.1070246-8-chao.p.peng@linux.intel.com/

Chao, are you aware of this issue already?

> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b1953ebc012e..b3ffc61c668c 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -7159,6 +7159,9 @@ static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
>  		for (gfn = first + pages; gfn < last; gfn += pages)
>  			linfo_set_mixed(gfn, slot, level, false);
>  
> +		if (gfn == last)
> +			goto out;
> +
>  		gfn = last;
>  		gfn_end = min(last + pages, slot->base_gfn + slot->npages);
>  		mixed = mem_attrs_mixed(kvm, slot, level, attrs, gfn, gfn_end);
> -- 
> 2.25.1
> 


BR, Jarkko
Chao Peng Jan. 5, 2023, 3:34 a.m. UTC | #3
On Wed, Jan 04, 2023 at 12:01:05PM +0000, Jarkko Sakkinen wrote:
> On Wed, Dec 14, 2022 at 01:39:53PM -0600, Michael Roth wrote:
> > From: Nikunj A Dadhania <nikunj@amd.com>
> > 
> > Aligned end boundary causes a kvm crash, handle the case.
> > 
> 
> Link: https://lore.kernel.org/kvm/20221202061347.1070246-8-chao.p.peng@linux.intel.com/
> 
> Chao, are you aware of this issue already?

Thanks, Jarkko, for adding me. I'm not aware of a fix.

> 
> > Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> > Signed-off-by: Michael Roth <michael.roth@amd.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index b1953ebc012e..b3ffc61c668c 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -7159,6 +7159,9 @@ static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
> >  		for (gfn = first + pages; gfn < last; gfn += pages)
> >  			linfo_set_mixed(gfn, slot, level, false);
> >  
> > +		if (gfn == last)
> > +			goto out;
> > +

Nikunj or Michael, could you help me understand in which case this causes
a KVM crash? To me, even if the end is aligned to a huge page boundary:
    last = (end - 1) & mask;
means 'last' is the base address of the last effective huge page. Even
when gfn == last, it should still be a valid page whose mem_attrs need
to be updated, correct?

Thanks,
Chao
> >  		gfn = last;
> >  		gfn_end = min(last + pages, slot->base_gfn + slot->npages);
> >  		mixed = mem_attrs_mixed(kvm, slot, level, attrs, gfn, gfn_end);
> > -- 
> > 2.25.1
> > 
> 
> 
> BR, Jarkko
Chao Peng Jan. 5, 2023, 3:37 a.m. UTC | #4
On Thu, Dec 22, 2022 at 01:16:04PM +0100, Borislav Petkov wrote:
> On Wed, Dec 14, 2022 at 01:39:53PM -0600, Michael Roth wrote:
> > From: Nikunj A Dadhania <nikunj@amd.com>
> > 
> > Aligned end boundary causes a kvm crash, handle the case.
> > 
> > Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> > Signed-off-by: Michael Roth <michael.roth@amd.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index b1953ebc012e..b3ffc61c668c 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -7159,6 +7159,9 @@ static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
> >  		for (gfn = first + pages; gfn < last; gfn += pages)
> >  			linfo_set_mixed(gfn, slot, level, false);
> >  
> > +		if (gfn == last)
> > +			goto out;
> 
> I'm guessing this was supposed to be "return;" here:

If we do end up needing this, it should be "continue;"; we can't skip the
remaining huge page levels.

Thanks,
Chao
> 
> arch/x86/kvm/mmu/mmu.c: In function ‘kvm_update_lpage_private_shared_mixed’:
> arch/x86/kvm/mmu/mmu.c:7090:25: error: label ‘out’ used but not defined
>  7090 |                         goto out;
>       |                         ^~~~
> 
> /me goes and digs deeper.
> 
> Aha, it was a "return" but you reordered the patches and the one adding
> the out label:
> 
> KVM: x86: Add 'update_mem_attr' x86 op
> 
> went further down and this became the first but it didn't have the label
> anymore.
> 
> Yeah, each patch needs to build successfully for bisection reasons, ofc.
> 
> Thx.
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette
Nikunj A. Dadhania Jan. 5, 2023, 4:08 a.m. UTC | #5
On 05/01/23 09:04, Chao Peng wrote:
> On Wed, Jan 04, 2023 at 12:01:05PM +0000, Jarkko Sakkinen wrote:
>> On Wed, Dec 14, 2022 at 01:39:53PM -0600, Michael Roth wrote:
>>> From: Nikunj A Dadhania <nikunj@amd.com>
>>>
>>> Aligned end boundary causes a kvm crash, handle the case.
>>>
>>
>> Link: https://lore.kernel.org/kvm/20221202061347.1070246-8-chao.p.peng@linux.intel.com/
>>
>> Chao, are you aware of this issue already?
> 
> Thanks, Jarkko, for adding me. I'm not aware of a fix.

It was discussed here: https://lore.kernel.org/all/e234d307-0b05-6548-5882-c24fc32c8e77@amd.com/

I was hitting this with one of the selftests case.

> 
>>
>>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
>>> Signed-off-by: Michael Roth <michael.roth@amd.com>
>>> ---
>>>  arch/x86/kvm/mmu/mmu.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>> index b1953ebc012e..b3ffc61c668c 100644
>>> --- a/arch/x86/kvm/mmu/mmu.c
>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>> @@ -7159,6 +7159,9 @@ static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
>>>  		for (gfn = first + pages; gfn < last; gfn += pages)
>>>  			linfo_set_mixed(gfn, slot, level, false);
>>>  
>>> +		if (gfn == last)
>>> +			goto out;
>>> +
> 
> Nikunj or Michael, could you help me understand in which case this causes
> a KVM crash? To me, even if the end is aligned to a huge page boundary:
>     last = (end - 1) & mask;
> means 'last' is the base address of the last effective huge page. Even
> when gfn == last, it should still be a valid page whose mem_attrs need
> to be updated, correct?

Yes, that is correct with: last = (end - 1) & mask;

We can drop this patch from SNP series.

Regards
Nikunj
Chao Peng Jan. 5, 2023, 8:14 a.m. UTC | #6
On Thu, Jan 05, 2023 at 09:38:59AM +0530, Nikunj A. Dadhania wrote:
> 
> 
> On 05/01/23 09:04, Chao Peng wrote:
> > On Wed, Jan 04, 2023 at 12:01:05PM +0000, Jarkko Sakkinen wrote:
> >> On Wed, Dec 14, 2022 at 01:39:53PM -0600, Michael Roth wrote:
> >>> From: Nikunj A Dadhania <nikunj@amd.com>
> >>>
> >>> Aligned end boundary causes a kvm crash, handle the case.
> >>>
> >>
> >> Link: https://lore.kernel.org/kvm/20221202061347.1070246-8-chao.p.peng@linux.intel.com/
> >>
> >> Chao, are you aware of this issue already?
> > 
> > Thanks, Jarkko, for adding me. I'm not aware of a fix.
> 
> It was discussed here: https://lore.kernel.org/all/e234d307-0b05-6548-5882-c24fc32c8e77@amd.com/
> 
> I was hitting this with one of the selftests case.

Yeah, I remember that discussion. With the new UPM code, this bug
should be fixed. If you still hit the issue please let me know.

Thanks,
Chao
> 
> > 
> >>
> >>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> >>> Signed-off-by: Michael Roth <michael.roth@amd.com>
> >>> ---
> >>>  arch/x86/kvm/mmu/mmu.c | 3 +++
> >>>  1 file changed, 3 insertions(+)
> >>>
> >>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> >>> index b1953ebc012e..b3ffc61c668c 100644
> >>> --- a/arch/x86/kvm/mmu/mmu.c
> >>> +++ b/arch/x86/kvm/mmu/mmu.c
> >>> @@ -7159,6 +7159,9 @@ static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
> >>>  		for (gfn = first + pages; gfn < last; gfn += pages)
> >>>  			linfo_set_mixed(gfn, slot, level, false);
> >>>  
> >>> +		if (gfn == last)
> >>> +			goto out;
> >>> +
> > 
> > Nikunj or Michael, could you help me understand in which case this causes
> > a KVM crash? To me, even if the end is aligned to a huge page boundary:
> >     last = (end - 1) & mask;
> > means 'last' is the base address of the last effective huge page. Even
> > when gfn == last, it should still be a valid page whose mem_attrs need
> > to be updated, correct?
> 
> Yes, that is correct with: last = (end - 1) & mask;
> 
> We can drop this patch from SNP series.
> 
> Regards
> Nikunj
Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b1953ebc012e..b3ffc61c668c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7159,6 +7159,9 @@  static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm,
 		for (gfn = first + pages; gfn < last; gfn += pages)
 			linfo_set_mixed(gfn, slot, level, false);
 
+		if (gfn == last)
+			goto out;
+
 		gfn = last;
 		gfn_end = min(last + pages, slot->base_gfn + slot->npages);
 		mixed = mem_attrs_mixed(kvm, slot, level, attrs, gfn, gfn_end);