
[v7,1/4] KVM: arm64: Introduce two cache maintenance callbacks

Message ID 20210617105824.31752-2-wangyanan55@huawei.com (mailing list archive)
State: New, archived
Series: KVM: arm64: Improve efficiency of stage2 page table

Commit Message

wangyanan (Y) June 17, 2021, 10:58 a.m. UTC
To prepare for performing CMOs for guest stage-2 in the fault handlers
in pgtable.c, introduce two cache maintenance callbacks in struct
kvm_pgtable_mm_ops. We also adjust the comment alignment for the
existing entries, but make no functional change.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 42 +++++++++++++++++-----------
 1 file changed, 25 insertions(+), 17 deletions(-)

Comments

Will Deacon June 17, 2021, 12:38 p.m. UTC | #1
On Thu, Jun 17, 2021 at 06:58:21PM +0800, Yanan Wang wrote:
> To prepare for performing CMOs for guest stage-2 in the fault handlers
> in pgtable.c, introduce two cache maintenance callbacks in struct
> kvm_pgtable_mm_ops. We also adjust the comment alignment for the
> existing entries, but make no functional change.
> 
> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
> ---
> [...]
> + * @clean_invalidate_dcache:	Clean and invalidate the data cache for the
> + *				specified memory address range.

This should probably be explicit about whether this is to the PoU/PoC/PoP.

Will
Marc Zyngier June 17, 2021, 2:20 p.m. UTC | #2
On Thu, 17 Jun 2021 13:38:37 +0100,
Will Deacon <will@kernel.org> wrote:
> 
> On Thu, Jun 17, 2021 at 06:58:21PM +0800, Yanan Wang wrote:
> > [...]
> > + * @clean_invalidate_dcache:	Clean and invalidate the data cache for the
> > + *				specified memory address range.
> 
> This should probably be explicit about whether this is to the PoU/PoC/PoP.

Indeed. I can fix that locally if there is nothing else that requires
adjusting.

	M.
wangyanan (Y) June 18, 2021, 1:52 a.m. UTC | #3
On 2021/6/17 22:20, Marc Zyngier wrote:
> On Thu, 17 Jun 2021 13:38:37 +0100,
> Will Deacon <will@kernel.org> wrote:
>> On Thu, Jun 17, 2021 at 06:58:21PM +0800, Yanan Wang wrote:
>>> [...]
>>> + * @clean_invalidate_dcache:	Clean and invalidate the data cache for the
>>> + *				specified memory address range.
>> This should probably be explicit about whether this is to the PoU/PoC/PoP.
> Indeed. I can fix that locally if there is nothing else that requires
> adjusting.
Will be grateful!

Thanks,
Yanan
Fuad Tabba June 18, 2021, 8:59 a.m. UTC | #4
Hi,

On Fri, Jun 18, 2021 at 2:52 AM wangyanan (Y) <wangyanan55@huawei.com> wrote:
>
>
>
> On 2021/6/17 22:20, Marc Zyngier wrote:
> > On Thu, 17 Jun 2021 13:38:37 +0100,
> > Will Deacon <will@kernel.org> wrote:
> >> On Thu, Jun 17, 2021 at 06:58:21PM +0800, Yanan Wang wrote:
> >>> [...]
> >>> + * @clean_invalidate_dcache:       Clean and invalidate the data cache for the
> >>> + *                         specified memory address range.
> >> This should probably be explicit about whether this is to the PoU/PoC/PoP.
> > Indeed. I can fix that locally if there is nothing else that requires
> > adjusting.
> Will be grateful !

Sorry, I missed the v7 update. One comment here is that the naming
used in the patch series I mentioned shortens invalidate to inval (if
you want it to be less of a mouthful):
https://lore.kernel.org/linux-arm-kernel/20210524083001.2586635-19-tabba@google.com/

Otherwise:
Reviewed-by: Fuad Tabba <tabba@google.com>

Thanks!
/fuad



Marc Zyngier June 18, 2021, 11:10 a.m. UTC | #5
On 2021-06-18 09:59, Fuad Tabba wrote:
> Hi,
> 
> On Fri, Jun 18, 2021 at 2:52 AM wangyanan (Y) <wangyanan55@huawei.com> 
> wrote:
>> 
>> 
>> 
>> On 2021/6/17 22:20, Marc Zyngier wrote:
>> > On Thu, 17 Jun 2021 13:38:37 +0100,
>> > Will Deacon <will@kernel.org> wrote:
>> >> On Thu, Jun 17, 2021 at 06:58:21PM +0800, Yanan Wang wrote:
>> >>> [...]
>> >>> + * @clean_invalidate_dcache:       Clean and invalidate the data cache for the
>> >>> + *                         specified memory address range.
>> >> This should probably be explicit about whether this is to the PoU/PoC/PoP.
>> > Indeed. I can fix that locally if there is nothing else that requires
>> > adjusting.
>> Will be grateful !
> 
> Sorry, I missed the v7 update. One comment here is that the naming
> used in the patch series I mentioned shortens invalidate to inval (if
> you want it to be less of a mouthful):
> https://lore.kernel.org/linux-arm-kernel/20210524083001.2586635-19-tabba@google.com/
> 

OK, I've now aligned these callbacks to Fuad's naming:

[...]

  * @dcache_clean_inval_poc:	Clean and invalidate the data cache to the PoC
  *				for the specified memory address range.
  * @icache_inval_pou:		Invalidate the instruction cache to the PoU
  *				for the specified memory address range.
  */
struct kvm_pgtable_mm_ops {
	void*		(*zalloc_page)(void *arg);
	void*		(*zalloc_pages_exact)(size_t size);
	void		(*free_pages_exact)(void *addr, size_t size);
	void		(*get_page)(void *addr);
	void		(*put_page)(void *addr);
	int		(*page_count)(void *addr);
	void*		(*phys_to_virt)(phys_addr_t phys);
	phys_addr_t	(*virt_to_phys)(void *addr);
	void		(*dcache_clean_inval_poc)(void *addr, size_t size);
	void		(*icache_inval_pou)(void *addr, size_t size);
};

and repainted everything else.

Thanks,

         M.

Patch

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index c3674c47d48c..b6ce34aa44bb 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -27,23 +27,29 @@  typedef u64 kvm_pte_t;
 
 /**
  * struct kvm_pgtable_mm_ops - Memory management callbacks.
- * @zalloc_page:	Allocate a single zeroed memory page. The @arg parameter
- *			can be used by the walker to pass a memcache. The
- *			initial refcount of the page is 1.
- * @zalloc_pages_exact:	Allocate an exact number of zeroed memory pages. The
- *			@size parameter is in bytes, and is rounded-up to the
- *			next page boundary. The resulting allocation is
- *			physically contiguous.
- * @free_pages_exact:	Free an exact number of memory pages previously
- *			allocated by zalloc_pages_exact.
- * @get_page:		Increment the refcount on a page.
- * @put_page:		Decrement the refcount on a page. When the refcount
- *			reaches 0 the page is automatically freed.
- * @page_count:		Return the refcount of a page.
- * @phys_to_virt:	Convert a physical address into a virtual address mapped
- *			in the current context.
- * @virt_to_phys:	Convert a virtual address mapped in the current context
- *			into a physical address.
+ * @zalloc_page:		Allocate a single zeroed memory page.
+ *				The @arg parameter can be used by the walker
+ *				to pass a memcache. The initial refcount of
+ *				the page is 1.
+ * @zalloc_pages_exact:		Allocate an exact number of zeroed memory pages.
+ *				The @size parameter is in bytes, and is rounded
+ *				up to the next page boundary. The resulting
+ *				allocation is physically contiguous.
+ * @free_pages_exact:		Free an exact number of memory pages previously
+ *				allocated by zalloc_pages_exact.
+ * @get_page:			Increment the refcount on a page.
+ * @put_page:			Decrement the refcount on a page. When the
+ *				refcount reaches 0 the page is automatically
+ *				freed.
+ * @page_count:			Return the refcount of a page.
+ * @phys_to_virt:		Convert a physical address into a virtual address
+ *				mapped in the current context.
+ * @virt_to_phys:		Convert a virtual address mapped in the current
+ *				context into a physical address.
+ * @clean_invalidate_dcache:	Clean and invalidate the data cache for the
+ *				specified memory address range.
+ * @invalidate_icache:		Invalidate the instruction cache for the
+ *				specified memory address range.
  */
 struct kvm_pgtable_mm_ops {
 	void*		(*zalloc_page)(void *arg);
@@ -54,6 +60,8 @@  struct kvm_pgtable_mm_ops {
 	int		(*page_count)(void *addr);
 	void*		(*phys_to_virt)(phys_addr_t phys);
 	phys_addr_t	(*virt_to_phys)(void *addr);
+	void		(*clean_invalidate_dcache)(void *addr, size_t size);
+	void		(*invalidate_icache)(void *addr, size_t size);
 };
 
 /**