Message ID | 1668624097-14884-2-git-send-email-mikelley@microsoft.com (mailing list archive)
---|---
State | Superseded
Delegated to | Lorenzo Pieralisi
Series | Add PCI pass-thru support to Hyper-V Confidential VMs
On Wed, Nov 16, 2022 at 10:41:24AM -0800, Michael Kelley wrote:
> Current code re-calculates the size after aligning the starting and
> ending physical addresses on a page boundary. But the re-calculation
> also embeds the masking of high order bits that exceed the size of
> the physical address space (via PHYSICAL_PAGE_MASK). If the masking
> removes any high order bits, the size calculation results in a huge
> value that is likely to immediately fail.
>
> Fix this by re-calculating the page-aligned size first. Then mask any
> high order bits using PHYSICAL_PAGE_MASK.
>
> Signed-off-by: Michael Kelley <mikelley@microsoft.com>
> ---
>  arch/x86/mm/ioremap.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
> index 78c5bc6..6453fba 100644
> --- a/arch/x86/mm/ioremap.c
> +++ b/arch/x86/mm/ioremap.c
> @@ -217,9 +217,15 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
>  	 * Mappings have to be page-aligned
>  	 */
>  	offset = phys_addr & ~PAGE_MASK;
> -	phys_addr &= PHYSICAL_PAGE_MASK;
> +	phys_addr &= PAGE_MASK;
>  	size = PAGE_ALIGN(last_addr+1) - phys_addr;
>
> +	/*
> +	 * Mask out any bits not part of the actual physical
> +	 * address, like memory encryption bits.
> +	 */
> +	phys_addr &= PHYSICAL_PAGE_MASK;
> +
>  	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
>  				 pcm, &new_pcm);
>  	if (retval) {
> --

This looks like a fix to me that needs to go independently to stable.
And it would need a Fixes tag.

/me does some git archeology...

I guess this one:

ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")

should be old enough so that it goes to all relevant stable kernels...

Hmm?
From: Borislav Petkov <bp@alien8.de> Sent: Monday, November 21, 2022 5:33 AM
>
> On Wed, Nov 16, 2022 at 10:41:24AM -0800, Michael Kelley wrote:
> > Current code re-calculates the size after aligning the starting and
> > ending physical addresses on a page boundary. But the re-calculation
> > also embeds the masking of high order bits that exceed the size of
> > the physical address space (via PHYSICAL_PAGE_MASK). If the masking
> > removes any high order bits, the size calculation results in a huge
> > value that is likely to immediately fail.
> >
> > Fix this by re-calculating the page-aligned size first. Then mask any
> > high order bits using PHYSICAL_PAGE_MASK.
> >
> > Signed-off-by: Michael Kelley <mikelley@microsoft.com>
> > ---
> >  arch/x86/mm/ioremap.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
> > index 78c5bc6..6453fba 100644
> > --- a/arch/x86/mm/ioremap.c
> > +++ b/arch/x86/mm/ioremap.c
> > @@ -217,9 +217,15 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
> >  	 * Mappings have to be page-aligned
> >  	 */
> >  	offset = phys_addr & ~PAGE_MASK;
> > -	phys_addr &= PHYSICAL_PAGE_MASK;
> > +	phys_addr &= PAGE_MASK;
> >  	size = PAGE_ALIGN(last_addr+1) - phys_addr;
> >
> > +	/*
> > +	 * Mask out any bits not part of the actual physical
> > +	 * address, like memory encryption bits.
> > +	 */
> > +	phys_addr &= PHYSICAL_PAGE_MASK;
> > +
> >  	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
> >  				 pcm, &new_pcm);
> >  	if (retval) {
> > --
>
> This looks like a fix to me that needs to go independently to stable.
> And it would need a Fixes tag.
>
> /me does some git archeology...
>
> I guess this one:
>
> ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
>
> should be old enough so that it goes to all relevant stable kernels...
>
> Hmm?
>

As discussed in a parallel thread [1], the incorrect code here doesn't have
any real impact in already released Linux kernels. It only affects the
transition that my patch series implements to change the way vTOM
is handled.

I don't know what the tradeoffs are for backporting a fix that doesn't solve
a real problem vs. just letting it be. Every backport carries some overhead
in the process and there's always a non-zero risk of breaking something.
I've leaned away from adding the "Fixes:" tag in such cases. But if
it's better to go ahead and add the "Fixes:" tag for what's only a
theoretical problem, I'm OK with doing so.

Michael

[1] https://lkml.org/lkml/2022/11/11/1348
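For readers who aren't following the rest of the series, here is a rough sketch of what that vTOM transition means for this code path. It is purely illustrative: the helper name and the bit position are hypothetical, and the series derives the real boundary from Hyper-V-supplied data rather than a hard-coded bit.

/*
 * Illustrative sketch only, not code from the series.  In the vTOM
 * scheme, the host-visible (shared) alias of a page lives at
 * "pa | vTOM", and that full value is what eventually reaches
 * ioremap() and __ioremap_caller().
 */
static void __iomem *map_shared_example(phys_addr_t pa, unsigned long size)
{
	phys_addr_t vtom = 1ULL << 46;	/* assumed vTOM boundary bit */

	/* the high "shared" bit is exactly what PHYSICAL_PAGE_MASK strips */
	return ioremap_cache(pa | vtom, size);
}

The only point is that the shared bit rides along in the physical address handed to __ioremap_caller(), which is why computing the size from an already-masked address goes wrong.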
On 11/16/22 10:41, Michael Kelley wrote:
> Current code re-calculates the size after aligning the starting and
> ending physical addresses on a page boundary. But the re-calculation
> also embeds the masking of high order bits that exceed the size of
> the physical address space (via PHYSICAL_PAGE_MASK). If the masking
> removes any high order bits, the size calculation results in a huge
> value that is likely to immediately fail.
>
> Fix this by re-calculating the page-aligned size first. Then mask any
> high order bits using PHYSICAL_PAGE_MASK.
>
> Signed-off-by: Michael Kelley <mikelley@microsoft.com>

Looks good:

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

Although I do agree with Boris that this superficially looks like
something that's important to backport. It would be best to either beef
up the changelog to explain why that's not the case, or to treat this as
an actual fix and submit it separately.
On Mon, Nov 21, 2022 at 04:40:16PM +0000, Michael Kelley (LINUX) wrote:
> As discussed in a parallel thread [1], the incorrect code here doesn't have
> any real impact in already released Linux kernels. It only affects the
> transition that my patch series implements to change the way vTOM
> is handled.

Are you sure?

PHYSICAL_PAGE_MASK is controlled by __PHYSICAL_MASK which is determined
by CONFIG_DYNAMIC_PHYSICAL_MASK and __PHYSICAL_MASK_SHIFT, which all
differ depending on configuration and can also be dynamic.

It is probably still ok, in probably all possible cases, even though I
wouldn't bet on it.

And this fix is simple and all clear so lemme ask it differently: what
would be any downsides in backporting it to stable, just in case?

> I don't know what the tradeoffs are for backporting a fix that doesn't solve
> a real problem vs. just letting it be. Every backport carries some overhead
> in the process

Have you seen the deluge of stable fixes? :-)

> and there's always a non-zero risk of breaking something.

I don't see how this one would cause any breakage...

> I've leaned away from adding the "Fixes:" tag in such cases. But if
> it's better to go ahead and add the "Fixes:" tag for what's only a
> theoretical problem, I'm OK with doing so.

I think this is a good-to-have fix anyway as it is Obviously
Correct(tm).

Unless you have any reservations you haven't shared yet...

> [1] https://lkml.org/lkml/2022/11/11/1348

Btw, the proper way to reference a mail message now is simply to do:

https://lore.kernel.org/r/<Message-ID>

as long as it has been posted on some ML which lore archives. And I
think it archives all.

Thx.
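For reference, this is roughly how the pieces Boris mentions fit together: a paraphrased sketch of the definitions in arch/x86/include/asm/page_types.h for kernels of that era, not the literal source, and the exact details vary by version and configuration.

/* Paraphrased sketch -- not the literal kernel source. */
#define PAGE_MASK		(~(PAGE_SIZE - 1))

#ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
/* adjusted at boot, e.g. memory-encryption code drops its C-bit from it */
extern phys_addr_t physical_mask;
#define __PHYSICAL_MASK		physical_mask
#else
#define __PHYSICAL_MASK		((phys_addr_t)((1ULL << __PHYSICAL_MASK_SHIFT) - 1))
#endif

/* page-aligned AND clipped to the supported physical address width */
#define PHYSICAL_PAGE_MASK	(((signed long)PAGE_MASK) & __PHYSICAL_MASK)

So whether PHYSICAL_PAGE_MASK strips a given high bit depends on both the configured physical address width and any runtime adjustment of physical_mask, which is the reason for not assuming every configuration behaves the same.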
From: Borislav Petkov <bp@alien8.de> Sent: Monday, November 21, 2022 11:45 AM
>
> On Mon, Nov 21, 2022 at 04:40:16PM +0000, Michael Kelley (LINUX) wrote:
> > As discussed in a parallel thread [1], the incorrect code here doesn't have
> > any real impact in already released Linux kernels. It only affects the
> > transition that my patch series implements to change the way vTOM
> > is handled.
>
> Are you sure?
>
> PHYSICAL_PAGE_MASK is controlled by __PHYSICAL_MASK which is determined
> by CONFIG_DYNAMIC_PHYSICAL_MASK and __PHYSICAL_MASK_SHIFT, which all
> differ depending on configuration and can also be dynamic.
>
> It is probably still ok, in probably all possible cases, even though I
> wouldn't bet on it.
>
> And this fix is simple and all clear so lemme ask it differently: what
> would be any downsides in backporting it to stable, just in case?

None.

> > I don't know what the tradeoffs are for backporting a fix that doesn't solve
> > a real problem vs. just letting it be. Every backport carries some overhead
> > in the process
>
> Have you seen the deluge of stable fixes? :-)
>
> > and there's always a non-zero risk of breaking something.
>
> I don't see how this one would cause any breakage...
>
> > I've leaned away from adding the "Fixes:" tag in such cases. But if
> > it's better to go ahead and add the "Fixes:" tag for what's only a
> > theoretical problem, I'm OK with doing so.
>
> I think this is a good-to-have fix anyway as it is Obviously
> Correct(tm).
>
> Unless you have any reservations you haven't shared yet...
>

No reservations. I'll add the "Fixes:" tag.

Michael
From: Dave Hansen <dave.hansen@intel.com> Sent: Monday, November 21, 2022 10:14 AM
>
> On 11/16/22 10:41, Michael Kelley wrote:
> > Current code re-calculates the size after aligning the starting and
> > ending physical addresses on a page boundary. But the re-calculation
> > also embeds the masking of high order bits that exceed the size of
> > the physical address space (via PHYSICAL_PAGE_MASK). If the masking
> > removes any high order bits, the size calculation results in a huge
> > value that is likely to immediately fail.
> >
> > Fix this by re-calculating the page-aligned size first. Then mask any
> > high order bits using PHYSICAL_PAGE_MASK.
> >
> > Signed-off-by: Michael Kelley <mikelley@microsoft.com>
>
> Looks good:
>
> Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
>
> Although I do agree with Boris that this superficially looks like
> something that's important to backport. It would be best to either beef
> up the changelog to explain why that's not the case, or to treat this as
> an actual fix and submit it separately.

You and Boris agree and I have no objection, so I'll add the "Fixes:" tag.
I'd like to keep the patch as part of this series because it *is* needed to
make the series work.

Michael
On Mon, Nov 21, 2022 at 09:04:06PM +0000, Michael Kelley (LINUX) wrote:
> You and Boris agree and I have no objection, so I'll add the "Fixes:" tag.
> I'd like to keep the patch as part of this series because it *is* needed to
> make the series work.

Yeah, no worries. I can take it tomorrow through urgent and send it to
Linus this week so whatever you rebase your tree on, it should already
have it.
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 78c5bc6..6453fba 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -217,9 +217,15 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
 	 * Mappings have to be page-aligned
 	 */
 	offset = phys_addr & ~PAGE_MASK;
-	phys_addr &= PHYSICAL_PAGE_MASK;
+	phys_addr &= PAGE_MASK;
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
+	/*
+	 * Mask out any bits not part of the actual physical
+	 * address, like memory encryption bits.
+	 */
+	phys_addr &= PHYSICAL_PAGE_MASK;
+
 	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
 				 pcm, &new_pcm);
 	if (retval) {
Current code re-calculates the size after aligning the starting and
ending physical addresses on a page boundary. But the re-calculation
also embeds the masking of high order bits that exceed the size of
the physical address space (via PHYSICAL_PAGE_MASK). If the masking
removes any high order bits, the size calculation results in a huge
value that is likely to immediately fail.

Fix this by re-calculating the page-aligned size first. Then mask any
high order bits using PHYSICAL_PAGE_MASK.

Signed-off-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/mm/ioremap.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
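To make the failure mode concrete, here is a minimal user-space sketch of the arithmetic described in the changelog. The 47-bit physical address width and the "encryption" bit placed just above it are hypothetical values chosen only to illustrate the ordering problem; real kernels derive the mask from __PHYSICAL_MASK_SHIFT and the platform's encryption-bit position.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12
#define PAGE_MASK		(~((1ULL << PAGE_SHIFT) - 1))
#define PA_BITS			47	/* assumed physical address width */
#define PHYSICAL_PAGE_MASK	(PAGE_MASK & ((1ULL << PA_BITS) - 1))
#define PAGE_ALIGN(x)		(((x) + ~PAGE_MASK) & PAGE_MASK)

int main(void)
{
	/* hypothetical address with an "encryption" bit set above the PA width */
	uint64_t phys_addr = (1ULL << PA_BITS) | 0x100000000ULL;
	uint64_t last_addr = phys_addr + 0x2000 - 1;	/* an 8 KiB mapping */

	/* old ordering: mask the address first, then compute the size */
	uint64_t bad_size  = PAGE_ALIGN(last_addr + 1) - (phys_addr & PHYSICAL_PAGE_MASK);

	/* fixed ordering: page-align and size first, mask the address afterwards */
	uint64_t good_size = PAGE_ALIGN(last_addr + 1) - (phys_addr & PAGE_MASK);

	printf("old ordering size: 0x%llx\n", (unsigned long long)bad_size);
	printf("new ordering size: 0x%llx\n", (unsigned long long)good_size);
	return 0;
}

With the old ordering, the high bit is stripped from phys_addr but still present in last_addr, so the subtraction yields roughly 2^47 bytes; with the fixed ordering it yields the expected 0x2000.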