| Message ID | 1485421042.4727.173.camel@infradead.org (mailing list archive) |
|---|---|
| State | New, archived |
>>> On 26.01.17 at 09:57, <dwmw2@infradead.org> wrote:
> Since the fix for XSA-154 in commit c61a6f74f80e ("x86: enforce
> consistent cachability of MMIO mappings"), HVM guests have no longer
> been able to use PAT to obtain write-combining on MMIO because the
> 'ignore PAT' bit is set in EPT.

This is too broad a statement: MMIO pages for which mfn_valid() is
true would still allow WC use.

The only other issue with the patch that I can see is that you need
to eliminate all the hard tabs you're introducing.

Jan
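To make Jan's objection concrete, here is a standalone sketch of the pre-patch check he is referring to. This is stubbed and heavily simplified, not the actual Xen source: `mfn_valid()`, the rangeset helper, and the fall-through return value are all stand-ins.

```c
/* Sketch of the pre-patch decision in epte_get_entry_emt() -- stubbed,
 * NOT the Xen source.  Only frames failing mfn_valid() (or sitting in
 * mmio_ro_ranges) got the ignore-PAT/UC treatment, so MMIO frames for
 * which mfn_valid() is true fell through and could still reach WC via
 * the guest PAT. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRBACK     6

/* Stubs standing in for Xen's real helpers. */
static bool mfn_valid(unsigned long mfn) { return mfn < 0x100000; }
static bool in_mmio_ro_ranges(unsigned long mfn) { (void)mfn; return false; }

static int emt_pre_patch(unsigned long mfn, uint8_t *ipat)
{
    if ( !mfn_valid(mfn) || in_mmio_ro_ranges(mfn) )
    {
        *ipat = 1;               /* guest PAT ignored: WC impossible */
        return MTRR_TYPE_UNCACHABLE;
    }
    *ipat = 0;                   /* guest PAT honoured further down */
    return MTRR_TYPE_WRBACK;     /* placeholder for the remaining logic */
}

int main(void)
{
    uint8_t ipat;
    int emt = emt_pre_patch(0xfe000, &ipat);  /* MMIO frame below max_page */
    printf("emt=%d ipat=%u\n", emt, ipat);    /* ipat=0: WC still reachable */
    return 0;
}
```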
On Thu, 2017-01-26 at 03:45 -0700, Jan Beulich wrote:
>
> This is too broad a statement: MMIO pages for which mfn_valid()
> is true would still allow WC use.

... where mfn_valid() would be true for MMIO regions in the low memory
hole, but not for regions above all physical memory?

> The only other issue with the patch that I can see is that you need
> to eliminate all the hard tabs you're introducing.

Bad emacs; no biscuit. I shall teach it that we're not in Kansas any
more, and repost. Thanks.
>>> On 26.01.17 at 11:55, <dwmw2@infradead.org> wrote:
> On Thu, 2017-01-26 at 03:45 -0700, Jan Beulich wrote:
>>
>> This is too broad a statement: MMIO pages for which mfn_valid()
>> is true would still allow WC use.
>
> ... where mfn_valid() would be true for MMIO regions in the low memory
> hole, but not for regions above all physical memory?

Even for pages immediately above physical memory mfn_valid() may be
true, iirc.

Jan
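A rough illustration of the distinction both replies turn on. This is a toy model, not the Xen implementation: it assumes only that mfn_valid() is essentially a frame-table coverage check (roughly mfn < max_page plus bookkeeping), saying nothing about whether RAM actually backs the frame; the layout constants are invented.

```c
/* Toy model of mfn_valid() semantics -- NOT the Xen source. */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical machine: RAM below 3GiB and from 4GiB to 8GiB, with the
 * usual MMIO hole in between; the frame table covers up to 8GiB. */
static const unsigned long max_page = 0x200000;   /* frames: 8GiB / 4KiB */

static bool model_mfn_valid(unsigned long mfn)
{
    return mfn < max_page;   /* coverage bound, not "is this RAM?" */
}

int main(void)
{
    /* An MMIO frame in the low hole (address 0xfe000000) reports valid... */
    printf("low-hole MMIO frame: %d\n", model_mfn_valid(0xfe000));   /* 1 */
    /* ...and so can a frame just past the end of RAM when coverage is
     * rounded up, which is Jan's point.  Only far beyond coverage does
     * mfn_valid() finally fail: */
    printf("far above coverage:  %d\n", model_mfn_valid(0x400000));  /* 0 */
    return 0;
}
```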
```diff
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 709759c..41ae8b4 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -773,18 +773,20 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
 
-    if ( !mfn_valid(mfn_x(mfn)) ||
-         rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
+    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
+    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
+                                 mfn_x(mfn) | ((1UL << order) - 1)) )
     {
-        *ipat = 1;
-        return MTRR_TYPE_UNCACHABLE;
+        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
+                                               mfn_x(mfn) | ((1UL << order) - 1)) )
+        {
+            *ipat = 1;
+            return MTRR_TYPE_UNCACHABLE;
+        }
+        /* Force invalid memory type so resolve_misconfig() will split it */
+        return -1;
     }
 
-    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
-        return -1;
-
     if ( direct_mmio )
     {
         if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
@@ -795,6 +797,12 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
+    if ( !mfn_valid(mfn_x(mfn)) )
+    {
+        *ipat = 1;
+        return MTRR_TYPE_UNCACHABLE;
+    }
+
     if ( !need_iommu(d) && !cache_flush_permitted(d) )
     {
         *ipat = 1;
```
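The "Mask, not add" comment in the hunk above is easy to verify in isolation. The standalone demo below (not Xen code; INVALID_MFN and the order value are reproduced here by assumption as all-ones and an EPT 2M superpage order) shows that computing the range end by addition wraps around when the MFN is INVALID_MFN, while OR-ing in the low order bits leaves it saturated:

```c
/* Demo of why the patch masks rather than adds the order when
 * computing the range end passed to the rangeset helpers. */
#include <stdio.h>

#define INVALID_MFN (~0UL)   /* all-ones sentinel, as seen on unmapping */

int main(void)
{
    unsigned long mfn = INVALID_MFN;
    unsigned int order = 9;                          /* 2M superpage */

    unsigned long end_add  = mfn + (1UL << order) - 1;   /* wraps! */
    unsigned long end_mask = mfn | ((1UL << order) - 1); /* stays ~0UL */

    printf("add:  %#lx\n", end_add);   /* 0x1fe: bogus, tiny range end */
    printf("mask: %#lx\n", end_mask);  /* 0xffffffffffffffff */
    return 0;
}
```

With addition, an unmap of INVALID_MFN would query the rangeset over a nonsensical wrapped range; with the mask, the range end stays INVALID_MFN and the overlap check degrades safely.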