
KVM: x86: verify MTRR/PAT validity

Message ID 20090622182756.GA8582@amt.cnet (mailing list archive)
State New, archived

Commit Message

Marcelo Tosatti June 22, 2009, 6:27 p.m. UTC
On Thu, Jun 18, 2009 at 10:39:32AM +0800, Yang, Sheng wrote:
> On Tuesday 16 June 2009 20:05:29 Marcelo Tosatti wrote:
> > Do not allow invalid MTRR/PAT values in set_msr_mtrr.
> >
> > Please review carefully.
> >
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> >
> Looks fine to me.
> 
> Is it necessary to check the reserved bits of MSR_MTRRdefType and the variable
> MTRRs as well? Maybe like this:
> 
> if (msr == MSR_MTRRdefType) {
> 	return valid_mtrr_type(data & ~0xc00ull);
> }
> 
> And variable ones can be:
> 
> #define MTRR_VALID_MASK(v, msr) \
> 	(~(rsvd_bits(cpuid_max_physaddr(v)) | ((msr % 2) << 11)))
> 
> return valid_mtrr_type(data & MTRR_VALID_MASK(vcpu, msr));
> 
> 
> (rsvd_bits() is in mmu.c, both untested)
> 
> Maybe we can put cpuid_max_physaddr as a field in the vcpu struct?

Sheng,

This code in the BIOS writes 1's into the reserved address bits of a
variable MTRR:

    wrmsr_smp(MTRRphysMask_MSR(0), ~(0x20000000ull - 1) | 0x800);

Since that complemented mask has every bit above bit 28 set, it inevitably
sets bits above the guest's MAXPHYADDR, and a strict reserved-bit check
would #GP existing firmware. So I'll leave just memory type validity
checking and the MSR_MTRRdefType valid-bit check in for now; the patch
follows below.
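For reference, the stricter check for the variable-range MTRRs that is being
dropped here would look roughly like the sketch below. It assumes rsvd_bits()
from mmu.c and a cpuid_maxphyaddr()-style helper (both helper names are
assumptions here) and reuses valid_mtrr_type() from the patch; untested, and
not what was merged.

static bool var_mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data)
{
	/* Bits at and above the guest's MAXPHYADDR are reserved in both
	 * MTRRphysBaseN and MTRRphysMaskN. */
	u64 rsvd = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);

	if (msr & 1)	/* odd MSRs in 0x200-0x2ff are MTRRphysMaskN */
		/* bits 10:0 are reserved, bit 11 is the V(alid) bit */
		return !(data & (rsvd | 0x7ffULL));

	/* MTRRphysBaseN: bits 11:8 reserved, bits 7:0 hold the memory type */
	return !(data & (rsvd | 0xf00ULL)) && valid_mtrr_type(data & 0xff);
}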




KVM: x86: verify MTRR/PAT validity

Do not allow invalid memory types in MTRR/PAT (generating a #GP
otherwise).

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
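To make the guest-visible effect concrete, here is a hypothetical bare-metal
guest snippet (the function names and the PAT value are chosen purely for
illustration). With this patch applied, set_msr_mtrr() fails the write and
the guest's WRMSR takes #GP(0), as the commit message states, instead of the
bogus type being silently accepted.

#include <stdint.h>

#define MSR_IA32_CR_PAT	0x277

static inline void wrmsr(uint32_t msr, uint64_t val)
{
	asm volatile("wrmsr" : : "c"(msr), "a"((uint32_t)val),
		     "d"((uint32_t)(val >> 32)));
}

static void pat_reserved_type(void)
{
	/* PAT0 = 2 is a reserved memory-type encoding, so this WRMSR
	 * now faults with #GP under KVM instead of succeeding. */
	wrmsr(MSR_IA32_CR_PAT, 0x0007040600070402ULL);
}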







Comments

Avi Kivity June 24, 2009, 10 a.m. UTC | #1
On 06/22/2009 09:27 PM, Marcelo Tosatti wrote:
>
> This code in the BIOS writes 1's into the reserved address bits of a
> variable MTRR:
>
>      wrmsr_smp(MTRRphysMask_MSR(0), ~(0x20000000ull - 1) | 0x800);
>
> So I'll leave just memory type validity checking and the MSR_MTRRdefType
> valid-bit check in for now; the patch follows below.
>    
Applied, thanks.

Patch
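The valid_*_type() helpers in the patch below use a bit mask as a membership
test over the architecturally defined memory-type encodings (2 and 3 are
reserved everywhere; 7, UC-, is legal in the PAT but not in MTRRs). A
standalone illustration of the same trick, outside the kernel and with the
function names merely reused:

#include <stdbool.h>
#include <stdio.h>

/* Bit t of the mask is set iff encoding t is a legal memory type:
 * 0xf3 = {0, 1, 4, 5, 6, 7} for the PAT, 0x73 = {0, 1, 4, 5, 6} for MTRRs. */
static bool valid_pat_type(unsigned t)  { return t < 8 && ((1u << t) & 0xf3); }
static bool valid_mtrr_type(unsigned t) { return t < 8 && ((1u << t) & 0x73); }

int main(void)
{
	for (unsigned t = 0; t < 8; t++)
		printf("type %u: PAT %d, MTRR %d\n",
		       t, valid_pat_type(t), valid_mtrr_type(t));
	return 0;
}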

Index: kvm/arch/x86/kvm/x86.c
===================================================================
--- kvm.orig/arch/x86/kvm/x86.c
+++ kvm/arch/x86/kvm/x86.c
@@ -721,11 +721,48 @@  static bool msr_mtrr_valid(unsigned msr)
 	return false;
 }
 
+static bool valid_pat_type(unsigned t)
+{
+	return t < 8 && (1 << t) & 0xf3; /* 0, 1, 4, 5, 6, 7 */
+}
+
+static bool valid_mtrr_type(unsigned t)
+{
+	return t < 8 && (1 << t) & 0x73; /* 0, 1, 4, 5, 6 */
+}
+
+static bool mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data)
+{
+	int i;
+
+	if (!msr_mtrr_valid(msr))
+		return false;
+
+	if (msr == MSR_IA32_CR_PAT) {
+		for (i = 0; i < 8; i++)
+			if (!valid_pat_type((data >> (i * 8)) & 0xff))
+				return false;
+		return true;
+	} else if (msr == MSR_MTRRdefType) {
+		if (data & ~0xcff)
+			return false;
+		return valid_mtrr_type(data & 0xff);
+	} else if (msr >= MSR_MTRRfix64K_00000 && msr <= MSR_MTRRfix4K_F8000) {
+		for (i = 0; i < 8 ; i++)
+			if (!valid_mtrr_type((data >> (i * 8)) & 0xff))
+				return false;
+		return true;
+	}
+
+	/* variable MTRRs */
+	return valid_mtrr_type(data & 0xff);
+}
+
 static int set_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 {
 	u64 *p = (u64 *)&vcpu->arch.mtrr_state.fixed_ranges;
 
-	if (!msr_mtrr_valid(msr))
+	if (!mtrr_valid(vcpu, msr, data))
 		return 1;
 
 	if (msr == MSR_MTRRdefType) {