Message ID | 5733068D.5050604@citrix.com (mailing list archive)
---|---
State | New, archived
>>> On 11.05.16 at 12:16, <david.vrabel@citrix.com> wrote:
> On 11/05/16 08:00, Juergen Gross wrote:
>> Adding David as he removed _PAGE_IOMAP in kernel 3.18.
>
> Why don't we get the RW bits correct when making the pteval when we
> already have the pfn, instead of trying to fix it up afterwards?

While it looks like this would help in this specific situation, the next
time something is found to access the M2P early, that would need another
fix. I.e. dealing with the underlying, more general issue would seem
preferable to me.

Jan
On 11/05/16 13:21, Jan Beulich wrote:
>>>> On 11.05.16 at 12:16, <david.vrabel@citrix.com> wrote:
>> On 11/05/16 08:00, Juergen Gross wrote:
>>> Adding David as he removed _PAGE_IOMAP in kernel 3.18.
>>
>> Why don't we get the RW bits correct when making the pteval when we
>> already have the pfn, instead of trying to fix it up afterwards?
>
> While it looks like this would help in this specific situation, the next
> time something is found to access the M2P early, that would need another
> fix. I.e. dealing with the underlying, more general issue would seem
> preferable to me.

I'm more concerned about future regressions caused by changes to the
generic x86 code that (for example) install a different early page fault
handler.

Can we fix this specific issue in the way I suggested (avoiding the
unnecessary m2p lookup entirely) and then discuss the merits of the page
fault handler approach as a separate topic?

David
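[Editor's note: as a rough illustration of David's suggestion — getting the RW bit right while *building* the pteval, instead of fixing it up later in a fault handler — here is a minimal, standalone sketch. The read-only pfn range bounds and the `make_pteval` helper name are invented for the example; they do not correspond to actual kernel symbols.]

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define _PAGE_RW   0x002ULL

/* Hypothetical bounds of a region whose pages must stay read-only
 * (e.g. live page tables); values are invented for this sketch. */
static unsigned long ro_start_pfn = 0x100;
static unsigned long ro_end_pfn   = 0x200;

/* Clear _PAGE_RW up front, while constructing the pteval from the
 * pfn we already have, rather than repairing the entry afterwards. */
static uint64_t make_pteval(unsigned long pfn, uint64_t flags)
{
	if (pfn >= ro_start_pfn && pfn < ro_end_pfn)
		flags &= ~_PAGE_RW;

	return ((uint64_t)pfn << PAGE_SHIFT) | flags;
}
```

The point of the design is that no later fixup pass (and no reliance on a particular early page fault handler) is needed: the entry is correct the moment it is written.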
>>> On 11.05.16 at 14:48, <david.vrabel@citrix.com> wrote:
> On 11/05/16 13:21, Jan Beulich wrote:
>>>>> On 11.05.16 at 12:16, <david.vrabel@citrix.com> wrote:
>>> On 11/05/16 08:00, Juergen Gross wrote:
>>>> Adding David as he removed _PAGE_IOMAP in kernel 3.18.
>>>
>>> Why don't we get the RW bits correct when making the pteval when we
>>> already have the pfn, instead of trying to fix it up afterwards?
>>
>> While it looks like this would help in this specific situation, the next
>> time something is found to access the M2P early, that would need another
>> fix. I.e. dealing with the underlying, more general issue would seem
>> preferable to me.
>
> I'm more concerned about future regressions caused by changes to the
> generic x86 code that (for example) install a different early page fault
> handler.
>
> Can we fix this specific issue in the way I suggested (avoiding the
> unnecessary m2p lookup entirely) and then discuss the merits of the page
> fault handler approach as a separate topic?

That's fine with me.

Jan
On 11/05/16 14:48, David Vrabel wrote:
> On 11/05/16 13:21, Jan Beulich wrote:
>>>>> On 11.05.16 at 12:16, <david.vrabel@citrix.com> wrote:
>>> On 11/05/16 08:00, Juergen Gross wrote:
>>>> Adding David as he removed _PAGE_IOMAP in kernel 3.18.
>>>
>>> Why don't we get the RW bits correct when making the pteval when we
>>> already have the pfn, instead of trying to fix it up afterwards?
>>
>> While it looks like this would help in this specific situation, the next
>> time something is found to access the M2P early, that would need another
>> fix. I.e. dealing with the underlying, more general issue would seem
>> preferable to me.
>
> I'm more concerned about future regressions caused by changes to the
> generic x86 code that (for example) install a different early page fault
> handler.
>
> Can we fix this specific issue in the way I suggested (avoiding the
> unnecessary m2p lookup entirely) and then discuss the merits of the page
> fault handler approach as a separate topic?

Sure.

Juergen
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 478a2de..d187368 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -430,6 +430,22 @@ __visible pte_t xen_make_pte(pteval_t pte)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_make_pte);
 
+__visible __init pte_t xen_make_pte_init(pteval_t pte)
+{
+	unsigned long pfn = pte_mfn(pte);
+
+#ifdef CONFIG_X86_64
+	pte = mask_rw_pte(pte);
+#endif
+	pte = pte_pfn_to_mfn(pte);
+
+	if (pte_mfn(pte) == INVALID_P2M_ENTRY)
+		pte = __pte_ma(0);
+
+	return native_make_pte(pte);
+}
+PV_CALLEE_SAVE_REGS_THUNK(xen_make_pte_init);
+
 __visible pgd_t xen_make_pgd(pgdval_t pgd)
 {
 	pgd = pte_pfn_to_mfn(pgd);
@@ -1562,7 +1578,7 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 	return pte;
 }
 #else /* CONFIG_X86_64 */
-static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
+static pte_t __init mask_rw_pte(pte_t pte)
 {
 	unsigned long pfn;
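[Editor's note: to illustrate the conversion step the new `xen_make_pte_init` performs, here is a simplified, self-contained sketch. The toy `p2m` table, its size, and the helper names (`toy_pfn_to_mfn`, `toy_pte_pfn_to_mfn`) are invented for demonstration; the real kernel's `pte_pfn_to_mfn()` and p2m machinery are considerably more involved. The sketch only shows the two outcomes the patch cares about: a valid p2m entry swaps the frame number while preserving the flag bits, and an invalid entry yields an empty (non-present) pte, as `__pte_ma(0)` does.]

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT        12
#define PTE_PFN_MASK      0x000ffffffffff000ULL
#define PTE_FLAGS_MASK    (~PTE_PFN_MASK)
#define INVALID_P2M_ENTRY (~0UL)

/* Toy p2m table: index = pfn, value = mfn (illustrative only). */
static unsigned long p2m[4] = { 7, 3, INVALID_P2M_ENTRY, 5 };

static unsigned long toy_pfn_to_mfn(unsigned long pfn)
{
	return pfn < 4 ? p2m[pfn] : INVALID_P2M_ENTRY;
}

/* Sketch of the pfn->mfn pte conversion: replace the (guest) frame
 * number with the machine frame number, keep the flag bits, and
 * produce an empty pte when the p2m has no valid entry. */
static uint64_t toy_pte_pfn_to_mfn(uint64_t pte)
{
	unsigned long pfn = (pte & PTE_PFN_MASK) >> PAGE_SHIFT;
	unsigned long mfn = toy_pfn_to_mfn(pfn);

	if (mfn == INVALID_P2M_ENTRY)
		return 0;	/* like __pte_ma(0): no mapping */

	return (pte & PTE_FLAGS_MASK) | ((uint64_t)mfn << PAGE_SHIFT);
}
```

This is also why avoiding the lookup matters in the early-boot case under discussion: a pte built this way depends on the p2m/m2p tables being reachable, which is exactly what cannot be assumed before they are mapped.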