
vunmap: let vunmap align virtual address by itself

Message ID 1563469897-2773-1-git-send-email-andrii.anisov@gmail.com (mailing list archive)
State New, archived
Series vunmap: let vunmap align virtual address by itself

Commit Message

Andrii Anisov July 18, 2019, 5:11 p.m. UTC
From: Andrii Anisov <andrii_anisov@epam.com>

Let vunmap() align the passed virtual address to PAGE_SIZE by itself.
This also makes it consistent with how {,un}map_domain_page()
currently behave.

With the main change, also:
 - strip the prior masking from all existing vunmap() call sites
 - replace the open-coded PAGE_MASK macro in vm_index()

Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
---
 xen/arch/x86/mm/shadow/hvm.c | 2 +-
 xen/common/vmap.c            | 4 ++--
 xen/drivers/acpi/osl.c       | 2 +-
 xen/include/xen/vmap.h       | 4 +---
 4 files changed, 5 insertions(+), 7 deletions(-)
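
Both hunks reduce to the same masking operation. As a minimal sketch (hypothetical, not the Xen sources; a PAGE_SHIFT of 12 is assumed, matching Xen's 4K page size), the alignment vunmap() now performs, and the open-coded form it replaces, look like:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uintptr_t)1 << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* The alignment vunmap() now applies to the incoming pointer; callers
 * such as sh_emulate_unmap_dest() previously open-coded this mask. */
static uintptr_t page_align(uintptr_t va)
{
    return va & PAGE_MASK;
}
```

The open-coded `~(PAGE_SIZE - 1)` removed from vm_index() computes the same mask, so that hunk is purely cosmetic.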

Comments

Andrew Cooper July 18, 2019, 5:16 p.m. UTC | #1
On 18/07/2019 18:11, Andrii Anisov wrote:
> From: Andrii Anisov <andrii_anisov@epam.com>
>
> Let vunmap align passed virtual address by PAGE_SIZE.

You probably want to describe where this goes wrong currently on ARM here.

If you can come up with a suitable piece of text, I can fix up on commit.

> This also makes it consistent with how {,un}map_domain_page()
> currently works.
>
> With the main change, also:
>  - strip all existing vunmap() calls from prior masking
>  - replace opencoded PAGE_MASK macro in vm_index()
>
> Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Andrii Anisov July 18, 2019, 5:33 p.m. UTC | #2
Hello Andrew,

On 18.07.19 20:16, Andrew Cooper wrote:
> If you can come up with a suitable piece of text, I can fix up on commit.

The following text might better describe the current problem:

Currently vunmap() is called from  from xen/arch/arm/cpuerrata.c with an
address potentially not page aligned. Instead of fixing that in ARM code,
we let vunmap() making alignment by itself and stripping other existing
vunmap() calls from prior masking. This makes it consistent with how
{,un}map_domain_page() currently works.

With the main change, also:
  - replace opencoded PAGE_MASK macro in vm_index()

Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
Jan Beulich July 19, 2019, 9:37 a.m. UTC | #3
On 18.07.2019 19:11, Andrii Anisov wrote:
> Let vunmap align passed virtual address by PAGE_SIZE.

Despite seeing Andrew's R-b you've already got I'd like to point out
that this increases the risk of some code passing the wrong pointer
into here. {,un}map_domain_page() act on single pages only, so there's
no ambiguity. When talking about multi-page regions though, not undoing
arithmetic previously done may mean actually pointing at other than the
first page of the mapping.

> This also makes it consistent with how {,un}map_domain_page()
> currently works.
> 
> With the main change, also:
>   - strip all existing vunmap() calls from prior masking

_If_ we are to go this route, then unmap_domain_page_global()
callers should also be adjusted. There, as for unmap_domain_page(),
I agree it would make sense to be more tolerant.

Jan
Julien Grall July 19, 2019, 9:51 a.m. UTC | #4
On 18/07/2019 18:33, Andrii Anisov wrote:
> Hello Andrew,
> 
> On 18.07.19 20:16, Andrew Cooper wrote:
>> If you can come up with a suitable piece of text, I can fix up on commit.
> 
> Following text might have a better reference to the current problem:
> 
> Currently vunmap() is called from  from xen/arch/arm/cpuerrata.c with an

s/  from//

> address potentially not page aligned. Instead of fixing that in ARM code,

s/ARM/Arm/

> we let vunmap() making alignment by itself and stripping other existing
> vunmap() calls from prior masking. This makes it consistent with how
> {,un}map_domain_page() currently works.

The commit message does not state what could go wrong if the address is not
page-aligned. So how about:

Since commit 9cc0618eb0 "xen/arm: mm: Sanity check any update of Xen page
tables", the MM code requires the virtual address to be page-aligned. As the
vunmap() helper directly uses the virtual address passed by its caller, this
now implies the caller should pass a page-aligned virtual address.

One of the callers in xen/arch/arm/cpuerrata.c may actually pass an unaligned
address, resulting in vunmap() silently failing and potentially making future
users of vmap() fail (the MM code does not allow replacing an existing
mapping).

In general, it would be better to have vunmap() be more resilient to unaligned
addresses. So the function now page-aligns the virtual address itself.

Note that for multi-page virtual mappings, the address should still point into
the first page. Otherwise, vunmap() may only partially remove the mapping.
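
The multi-page caveat can be illustrated with a toy calculation (addresses hypothetical; a PAGE_SHIFT of 12 assumed): aligning down recovers the mapping start only while the pointer still lies within the first page.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uintptr_t)1 << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Hypothetical start of a two-page vmap region. */
static const uintptr_t map_start = 0x400000;

/* The address vunmap() would effectively act on after self-aligning. */
static uintptr_t effective_vunmap_addr(uintptr_t va)
{
    return va & PAGE_MASK;
}
```

A pointer into the second page aligns to `map_start + PAGE_SIZE`, not `map_start`, which is exactly the partial-unmap hazard the note above warns about.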

> 
> With the main change, also:
>   - replace opencoded PAGE_MASK macro in vm_index()

Why did you remove the following line:
  - strip all existing vunmap() calls from prior masking

> 
> Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
>
Andrii Anisov July 22, 2019, 9:12 a.m. UTC | #5
On 19.07.19 12:51, Julien Grall wrote:
>> Currently vunmap() is called from  from xen/arch/arm/cpuerrata.c with an
> 
> s/  from//
> 
>> address potentially not page aligned. Instead of fixing that in ARM code,
> 
> s/ARM/Arm/
> 
>> we let vunmap() making alignment by itself and stripping other existing
>> vunmap() calls from prior masking. This makes it consistent with how
>> {,un}map_domain_page() currently works.
> 
> The commit message does not state what could goes wrong if the page is not aligned. So how about:
> 
> Since commit 9cc0618eb0 "xen/arm: mm: Sanity check any update of Xen page tables", the MM code requires the virtual address to be page-aligned. As the vunmap() helper is directly used the virtual address passed by its caller, this now implies the caller should pass a page-aligned virtual address.
> 
> One of the caller in xen/arch/arm/cpuerrata.c may actually pass an unaligned address resulting the vunmap() to silently fail and potentially making future user of vmap() to fail (the MM code does not allow to replace existing mapping).
> 
> In general, it would be better to have vunmap() more resilient to unaligned address. So the function is now page-aligning the virtual address.
> 
> Note that for multi-pages virtual mapping, the address should still point into the first page. Otherwise, vunmap() may only partially remove the mapping.

I'm ok with that.


> Why did you remove the following line:
>   - strip all existing vunmap() calls from prior masking

Because I already mentioned it in the main body of my message.

> address potentially not page aligned. Instead of fixing that in ARM code,
> we let vunmap() making alignment by itself and stripping other existing
> vunmap() calls from prior masking.

Yet, with your text both notes are needed.
Andrii Anisov July 22, 2019, 9:30 a.m. UTC | #6
On 19.07.19 12:37, Jan Beulich wrote:
> On 18.07.2019 19:11, Andrii Anisov wrote:
>> Let vunmap align passed virtual address by PAGE_SIZE.
> 
> Despite seeing Andrew's R-b you've already got I'd like to point out
> that this increases the risk of some code passing the wrong pointer
> into here. {,un}map_domain_page() act on single pages only, so there's
> no ambiguity. When talking about multi-page regions though, not undoing
> arithmetic previously done may mean actually pointing at other than the
> first page of the mapping.

Well, what we are moving into vunmap() is only the page alignment. It can't save us from a wrong pointer, even if the alignment is done outside vunmap().

>> With the main change, also:
>>    - strip all existing vunmap() calls from prior masking
> 
> _If_ we are to go this route, then unmap_domain_page_global()
> callers should also be adjusted. There, as for unmap_domain_page(),
> I agree it would make sense to be more tolerant.

I've found two page alignments prior to `unmap_domain_page_global()` calls. Should I drop them in this patch, or in a separate one?
Jan Beulich July 22, 2019, 10:23 a.m. UTC | #7
On 22.07.2019 11:30, Andrii Anisov wrote:
> 
> 
> On 19.07.19 12:37, Jan Beulich wrote:
>> On 18.07.2019 19:11, Andrii Anisov wrote:
>>> Let vunmap align passed virtual address by PAGE_SIZE.
>>
>> Despite seeing Andrew's R-b you've already got I'd like to point out
>> that this increases the risk of some code passing the wrong pointer
>> into here. {,un}map_domain_page() act on single pages only, so there's
>> no ambiguity. When talking about multi-page regions though, not undoing
>> arithmetic previously done may mean actually pointing at other than the
>> first page of the mapping.
> 
> Well, what we are moving into vunmap(), is a page alignment only. It can't save us from the wrong pointer, even if it is done outside the vunmap().
> 
>>> With the main change, also:
>>>    - strip all existing vunmap() calls from prior masking
>>
>> _If_ we are to go this route, then unmap_domain_page_global()
>> callers should also be adjusted. There, as for unmap_domain_page(),
>> I agree it would make sense to be more tolerant.
> 
> I've found two page alignments prior to `unmap_domain_page_global()` call. Should I wipe them in this patch, or in separate?

If we're to go the suggested route then it might not matter much.
As I'm not entirely certain yet that we will, making this a prereq
patch dealing with the alignment in unmap_domain_page_global() might
be better. Your existing patch would then further shift this into
vunmap().

Jan
Julien Grall July 22, 2019, 12:02 p.m. UTC | #8
On 22/07/2019 11:23, Jan Beulich wrote:
> On 22.07.2019 11:30, Andrii Anisov wrote:
>>
>>
>> On 19.07.19 12:37, Jan Beulich wrote:
>>> On 18.07.2019 19:11, Andrii Anisov wrote:
>>>> Let vunmap align passed virtual address by PAGE_SIZE.
>>>
>>> Despite seeing Andrew's R-b you've already got I'd like to point out
>>> that this increases the risk of some code passing the wrong pointer
>>> into here. {,un}map_domain_page() act on single pages only, so there's
>>> no ambiguity. When talking about multi-page regions though, not undoing
>>> arithmetic previously done may mean actually pointing at other than the
>>> first page of the mapping.
>>
>> Well, what we are moving into vunmap(), is a page alignment only. It can't save us from the wrong pointer, even if it is done outside the vunmap().
>>
>>>> With the main change, also:
>>>>     - strip all existing vunmap() calls from prior masking
>>>
>>> _If_ we are to go this route, then unmap_domain_page_global()
>>> callers should also be adjusted. There, as for unmap_domain_page(),
>>> I agree it would make sense to be more tolerant.
>>
>> I've found two page alignments prior to `unmap_domain_page_global()` call. Should I wipe them in this patch, or in separate?
> 
> If we're to go the suggested route then it might not matter much.
> As I'm not entirely certain yet that we will, making this a prereq
> one dealing with the alignment in unmap_domain_page_global() might
> be better. Your existing patch would then further shift this into
> vunmap().

I think allowing vunmap() to deal with unaligned addresses could simplify some
of the callers. Passing the wrong page-aligned pointer is less critical than
passing an unaligned address.

The latter may result in misbehaving code. The vmap will end up out of sync
with the page-table, as we assume page-table updates cannot fail (this
probably should be changed). On Arm, this will likely result in any new vmap()
call failing (we don't allow replacing an existing valid page-table entry).

IIUC the code, the former will result in only part of the vmap being unmapped.
While I agree this is not ideal, the vmap will still stay in sync with the
page-table.

Cheers,
Jan Beulich July 22, 2019, 12:11 p.m. UTC | #9
On 22.07.2019 14:02, Julien Grall wrote:
> On 22/07/2019 11:23, Jan Beulich wrote:
>> On 22.07.2019 11:30, Andrii Anisov wrote:
>>>
>>>
>>> On 19.07.19 12:37, Jan Beulich wrote:
>>>> On 18.07.2019 19:11, Andrii Anisov wrote:
>>>>> Let vunmap align passed virtual address by PAGE_SIZE.
>>>>
>>>> Despite seeing Andrew's R-b you've already got I'd like to point out
>>>> that this increases the risk of some code passing the wrong pointer
>>>> into here. {,un}map_domain_page() act on single pages only, so there's
>>>> no ambiguity. When talking about multi-page regions though, not undoing
>>>> arithmetic previously done may mean actually pointing at other than the
>>>> first page of the mapping.
>>>
>>> Well, what we are moving into vunmap(), is a page alignment only. It can't save us from the wrong pointer, even if it is done outside the vunmap().
>>>
>>>>> With the main change, also:
>>>>>     - strip all existing vunmap() calls from prior masking
>>>>
>>>> _If_ we are to go this route, then unmap_domain_page_global()
>>>> callers should also be adjusted. There, as for unmap_domain_page(),
>>>> I agree it would make sense to be more tolerant.
>>>
>>> I've found two page alignments prior to `unmap_domain_page_global()` call. Should I wipe them in this patch, or in separate?
>>
>> If we're to go the suggested route then it might not matter much.
>> As I'm not entirely certain yet that we will, making this a prereq
>> one dealing with the alignment in unmap_domain_page_global() might
>> be better. Your existing patch would then further shift this into
>> vunmap().
> 
> I think allowing vunmap to deal with unaligned address could simplify some of the callers. Passing the wrong page-aligned pointer is less critical than passing an unaligned address.
> 
> The latter may result in misbehaving code.

Why only the latter?

> The vmap will end-up to not be sync with the page-table as we assume
> page-table update cannot fail (this probably should be changed). On
> Arm, this will result to any new vmap function to likely fail (we
> don't allow replace an existing valid page-table entry).
> 
> IIUC the code, the former will result to only unmapping some part of
> the vmap. 

AIUI the unmap request will simply fail: vm_index() and hence vm_size()
will return 0. Hence, with vunmap() not itself returning any
error, there'll be a silent leak of mappings.

Jan
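
Jan's silent-leak scenario can be modelled with a toy stand-in for the vmap bookkeeping (all names and values hypothetical, not the real xen/common/vmap.c): before the patch, the lookup only matches the exact mapping start, so an unaligned pointer yields 0 pages and the unmap becomes a no-op.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uintptr_t)1 << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Toy stand-in for the vmap bookkeeping: one recorded mapping. */
static const uintptr_t mapped_start = 0x200000;
static const unsigned int mapped_pages = 2;

/* Pre-patch behaviour: the lookup only matches the exact mapping
 * start, so an unaligned pointer yields 0 pages. */
static unsigned int vm_size_old(uintptr_t addr)
{
    return addr == mapped_start ? mapped_pages : 0;
}

/* Post-patch behaviour: align first, then look up. */
static unsigned int vm_size_new(uintptr_t addr)
{
    return vm_size_old(addr & PAGE_MASK);
}
```

Note the last case below: a pointer into the second page still fails even after aligning, which is the multi-page ambiguity Jan raised.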
Julien Grall July 22, 2019, 12:29 p.m. UTC | #10
Hi,

On 22/07/2019 13:11, Jan Beulich wrote:
> On 22.07.2019 14:02, Julien Grall wrote:
>> On 22/07/2019 11:23, Jan Beulich wrote:
>>> On 22.07.2019 11:30, Andrii Anisov wrote:
>>>>
>>>>
>>>> On 19.07.19 12:37, Jan Beulich wrote:
>>>>> On 18.07.2019 19:11, Andrii Anisov wrote:
>>>>>> Let vunmap align passed virtual address by PAGE_SIZE.
>>>>>
>>>>> Despite seeing Andrew's R-b you've already got I'd like to point out
>>>>> that this increases the risk of some code passing the wrong pointer
>>>>> into here. {,un}map_domain_page() act on single pages only, so there's
>>>>> no ambiguity. When talking about multi-page regions though, not undoing
>>>>> arithmetic previously done may mean actually pointing at other than the
>>>>> first page of the mapping.
>>>>
>>>> Well, what we are moving into vunmap(), is a page alignment only. It can't save us from the wrong pointer, even if it is done outside the vunmap().
>>>>
>>>>>> With the main change, also:
>>>>>>      - strip all existing vunmap() calls from prior masking
>>>>>
>>>>> _If_ we are to go this route, then unmap_domain_page_global()
>>>>> callers should also be adjusted. There, as for unmap_domain_page(),
>>>>> I agree it would make sense to be more tolerant.
>>>>
>>>> I've found two page alignments prior to `unmap_domain_page_global()` call. Should I wipe them in this patch, or in separate?
>>>
>>> If we're to go the suggested route then it might not matter much.
>>> As I'm not entirely certain yet that we will, making this a prereq
>>> one dealing with the alignment in unmap_domain_page_global() might
>>> be better. Your existing patch would then further shift this into
>>> vunmap().
>>
>> I think allowing vunmap to deal with unaligned address could simplify some of the callers. Passing the wrong page-aligned pointer is less critical than passing an unaligned address.
>>
>> The latter may result in misbehaving code.
> 
> Why only the latter?
> 
>> The vmap will end-up to not be sync with the page-table as we assume
>> page-table update cannot fail (this probably should be changed). On
>> Arm, this will result to any new vmap function to likely fail (we
>> don't allow replace an existing valid page-table entry).
>>
>> IIUC the code, the former will result to only unmapping some part of
>> the vmap.
> 
> AIUI the unmap request will simply fail: vm_index() and hence vm_size()
> will simply return 0. Hence, with vunmap() not itself returning any
> error, there'll be a silent leak of mappings.

Hmmm, I misread the code. vm_index() will indeed return 0 in that case. But,
this will not silently leak the mappings, thanks to the WARN_ON() in the code.

However, even if there is a leak, the vmap and page-table will stay in sync.
So this is not as bad as the unaligned case.

Cheers,
Andrii Anisov July 23, 2019, 8:48 a.m. UTC | #11
Julien, Jan, Andrew,

The problem addressed by [1] causes random Arm64 boot failures depending on hypervisor code changes.
Yet a more generic solution was requested by Andrew and supported by Julien [2].

How to proceed with this particular patch?
As I understand it, Jan doubts we should move the page alignment into vunmap(), while Julien and Andrew wanted the commit message clarified.
Can we agree on an approach here?

[1] https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01167.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01129.html
Jan Beulich July 23, 2019, 9:12 a.m. UTC | #12
On 23.07.2019 10:48, Andrii Anisov wrote:
> Julien, Jan, Andrew,
> 
> The problem addressed by [1] causes random ARM64 boot fails dependent on hypervisor code changes.
> Yet more generic solution was requested by Andrew and supported by Julien [2].
> 
> How to proceed with this particular patch?
> As I understand, Jan doubts we should move page alignment to vunmap(), while Julien and Andrew wanted the commit message clarification.
> Can we have an agreement on approach here?
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01167.html
> [2] https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01129.html

First of all, let me quote Linux'es code:

static void __vunmap(const void *addr, int deallocate_pages)
{
	struct vm_struct *area;

	if (!addr)
		return;

	if (WARN(!PAGE_ALIGNED(addr), "Trying to vfree() bad address (%p)\n",
			addr))
		return;

As long as we aim to have a reasonable level of compatibility between
similar interfaces, we should not go the suggested route.

Beyond that I continue to be of the opinion that it should be
all-or-nothing: Any pointer pointing anywhere at or inside the
region should be accepted, or just the one pointing precisely at
the start.

Jan
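
The Linux policy Jan quotes (reject unaligned pointers with a warning) and the policy proposed for Xen (align silently) can be contrasted in a sketch (hypothetical return codes stand in for WARN; neither function is the real code of either project):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uintptr_t)1 << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Linux-style policy: refuse an unaligned pointer.
 * Returns 0 for NULL (silently ignored), -1 for unaligned
 * (modelling the WARN-and-bail path), 1 to proceed. */
static int vunmap_linux_style(uintptr_t addr)
{
    if (!addr)
        return 0;
    if (addr & ~PAGE_MASK)
        return -1;
    return 1;
}

/* Proposed Xen policy: silently align and proceed with that address. */
static uintptr_t vunmap_xen_style(uintptr_t addr)
{
    return addr & PAGE_MASK;
}
```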
Andrii Anisov July 23, 2019, 10:11 a.m. UTC | #13
Hello Jan,

On 23.07.19 12:12, Jan Beulich wrote:
> Beyond that I continue to be of the opinion that it should be
> all-or-nothing: Any pointer pointing anywhere at or inside the
> region should be accepted, or just the one pointing precisely at
> the start.

It's reasonable enough and I agree with that.
Sorry for missing this point.
Julien Grall July 23, 2019, 5:02 p.m. UTC | #14
Hi Jan,

On 23/07/2019 10:12, Jan Beulich wrote:
> On 23.07.2019 10:48, Andrii Anisov wrote:
>> Julien, Jan, Andrew,
>>
>> The problem addressed by [1] causes random ARM64 boot fails dependent on hypervisor code changes.
>> Yet more generic solution was requested by Andrew and supported by Julien [2].
>>
>> How to proceed with this particular patch?
>> As I understand, Jan doubts we should move page alignment to vunmap(), while Julien and Andrew wanted the commit message clarification.
>> Can we have an agreement on approach here?
>>
>> [1] https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01167.html
>> [2] https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01129.html
> 
> First of all, let me quote Linux'es code:
> 
> static void __vunmap(const void *addr, int deallocate_pages)
> {
> 	struct vm_struct *area;
> 
> 	if (!addr)
> 		return;
> 
> 	if (WARN(!PAGE_ALIGNED(addr), "Trying to vfree() bad address (%p)\n",
> 			addr))
> 		return;
> 
> As long as we aim to have a reasonable level of compatibility of
> similar interfaces, we should not go the suggested route.

Well, it is more likely that Linux code moves to Xen than the opposite. So
the suggested change is still compatible, as we don't restrict anything but
just allow more flexibility.

> 
> Beyond that I continue to be of the opinion that it should be
> all-or-nothing: Any pointer pointing anywhere at or inside the
> region should be accepted, or just the one pointing precisely at
> the start.
That's fine, but we can still achieve this step by step... Handling unaligned
addresses is quite easy.

Cheers,

Patch

diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index c6469c8..8561a11 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -597,7 +597,7 @@  static void sh_emulate_unmap_dest(struct vcpu *v, void *addr,
     {
         paging_mark_dirty(v->domain, sh_ctxt->mfn[1]);
         put_page(mfn_to_page(sh_ctxt->mfn[1]));
-        vunmap((void *)((unsigned long)addr & PAGE_MASK));
+        vunmap(addr);
     }
     else
         unmap_domain_page(addr);
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index faebc1d..e7bd6bf 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -141,7 +141,7 @@  static void *vm_alloc(unsigned int nr, unsigned int align,
 
 static unsigned int vm_index(const void *va, enum vmap_region type)
 {
-    unsigned long addr = (unsigned long)va & ~(PAGE_SIZE - 1);
+    unsigned long addr = (unsigned long)va & PAGE_MASK;
     unsigned int idx;
     unsigned long start = (unsigned long)vm_base[type];
 
@@ -225,7 +225,7 @@  void *vmap(const mfn_t *mfn, unsigned int nr)
 
 void vunmap(const void *va)
 {
-    unsigned long addr = (unsigned long)va;
+    unsigned long addr = (unsigned long)va & PAGE_MASK;
     unsigned int pages = vm_size(va, VMAP_DEFAULT);
 
     if ( !pages )
diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
index 4c8bb78..1a91453 100644
--- a/xen/drivers/acpi/osl.c
+++ b/xen/drivers/acpi/osl.c
@@ -115,7 +115,7 @@  void acpi_os_unmap_memory(void __iomem * virt, acpi_size size)
 	}
 
 	if (system_state >= SYS_STATE_boot)
-		vunmap((void *)((unsigned long)virt & PAGE_MASK));
+		vunmap(virt);
 }
 
 acpi_status acpi_os_read_port(acpi_io_address port, u32 * value, u32 width)
diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
index 369560e..a556d13 100644
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -27,9 +27,7 @@  void __iomem *ioremap(paddr_t, size_t);
 
 static inline void iounmap(void __iomem *va)
 {
-    unsigned long addr = (unsigned long)(void __force *)va;
-
-    vunmap((void *)(addr & PAGE_MASK));
+    vunmap((void *)va);
 }
 
 void *arch_vmap_virt_end(void);