
[2/5] mm/hmm: Clean up some coding style and comments

Message ID 20190506232942.12623-3-rcampbell@nvidia.com (mailing list archive)
State New, archived
Series mm/hmm: HMM documentation updates and code fixes

Commit Message

Ralph Campbell May 6, 2019, 11:29 p.m. UTC
From: Ralph Campbell <rcampbell@nvidia.com>

There are no functional changes, just some coding style clean ups and
minor comment changes.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
 mm/hmm.c            | 51 ++++++++++++++++----------------
 2 files changed, 62 insertions(+), 60 deletions(-)

Comments

Jason Gunthorpe June 6, 2019, 2:16 p.m. UTC | #1
On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
> From: Ralph Campbell <rcampbell@nvidia.com>
> 
> There are no functional changes, just some coding style clean ups and
> minor comment changes.
> 
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Ira Weiny <ira.weiny@intel.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Balbir Singh <bsingharora@gmail.com>
> Cc: Dan Carpenter <dan.carpenter@oracle.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Souptick Joarder <jrdr.linux@gmail.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
>  include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
>  mm/hmm.c            | 51 ++++++++++++++++----------------
>  2 files changed, 62 insertions(+), 60 deletions(-)

Applied to hmm.git, thanks

Jason
Jerome Glisse June 6, 2019, 2:27 p.m. UTC | #2
On Thu, Jun 06, 2019 at 11:16:44AM -0300, Jason Gunthorpe wrote:
> On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
> > From: Ralph Campbell <rcampbell@nvidia.com>
> > 
> > There are no functional changes, just some coding style clean ups and
> > minor comment changes.
> > 
> > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Ira Weiny <ira.weiny@intel.com>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > Cc: Balbir Singh <bsingharora@gmail.com>
> > Cc: Dan Carpenter <dan.carpenter@oracle.com>
> > Cc: Matthew Wilcox <willy@infradead.org>
> > Cc: Souptick Joarder <jrdr.linux@gmail.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> >  include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
> >  mm/hmm.c            | 51 ++++++++++++++++----------------
> >  2 files changed, 62 insertions(+), 60 deletions(-)
> 
> Applied to hmm.git, thanks

Can you hold off? I was already collecting patches and we will
be stepping on each other's toes ... for instance, I had

https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-5.3

But I have been working on collecting more.

Cheers,
Jérôme
Jason Gunthorpe June 6, 2019, 3:41 p.m. UTC | #3
On Thu, Jun 06, 2019 at 10:27:43AM -0400, Jerome Glisse wrote:
> On Thu, Jun 06, 2019 at 11:16:44AM -0300, Jason Gunthorpe wrote:
> > On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
> > > From: Ralph Campbell <rcampbell@nvidia.com>
> > > 
> > > There are no functional changes, just some coding style clean ups and
> > > minor comment changes.
> > > 
> > > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > > Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
> > > Cc: John Hubbard <jhubbard@nvidia.com>
> > > Cc: Ira Weiny <ira.weiny@intel.com>
> > > Cc: Dan Williams <dan.j.williams@intel.com>
> > > Cc: Arnd Bergmann <arnd@arndb.de>
> > > Cc: Balbir Singh <bsingharora@gmail.com>
> > > Cc: Dan Carpenter <dan.carpenter@oracle.com>
> > > Cc: Matthew Wilcox <willy@infradead.org>
> > > Cc: Souptick Joarder <jrdr.linux@gmail.com>
> > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > >  include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
> > >  mm/hmm.c            | 51 ++++++++++++++++----------------
> > >  2 files changed, 62 insertions(+), 60 deletions(-)
> > 
> > Applied to hmm.git, thanks
> 
> Can you hold off, i was already collecting patches and we will
> be stepping on each other toe ... for instance i had

I'd really rather not, I have a lot of work to do for this cycle and
this part needs to start to move forward now. I can't do everything
last minute, sorry.

The patches I picked up all look very safe to move ahead.

> https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-5.3

I'm aware, and am referring to this tree. You can trivially rebase it
on top of hmm.git.

BTW, what were you planning to do with this git branch anyhow?

As we'd already agreed I will send the hmm patches to Linus on a clean
git branch so we can properly collaborate between the various involved
trees.

As a tree-runner I very much prefer to take patches directly from the
mailing list where everything is public. This is the standard kernel
workflow.

> But i have been working on more collection.

We haven't talked about process, but for me, please follow the standard
kernel development process and respond to patches on the list with
comments, ack/review them, etc. I may not have seen every patch, so
I'd appreciate it if you cc me on stuff that needs to be picked up,
thanks.

I am sorting out the changes you made off-list in your .git right now,
but this is very time-consuming. Please try to keep comments &
changes on the list.

I don't want to take anything into hmm.git that is not deemed ready -
so please feel free to continue to use your freedesktop git to
coordinate testing.

Thanks,
Jason
Jerome Glisse June 6, 2019, 3:52 p.m. UTC | #4
On Thu, Jun 06, 2019 at 12:41:29PM -0300, Jason Gunthorpe wrote:
> On Thu, Jun 06, 2019 at 10:27:43AM -0400, Jerome Glisse wrote:
> > On Thu, Jun 06, 2019 at 11:16:44AM -0300, Jason Gunthorpe wrote:
> > > On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
> > > > From: Ralph Campbell <rcampbell@nvidia.com>
> > > > 
> > > > There are no functional changes, just some coding style clean ups and
> > > > minor comment changes.
> > > > 
> > > > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > > > Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
> > > > Cc: John Hubbard <jhubbard@nvidia.com>
> > > > Cc: Ira Weiny <ira.weiny@intel.com>
> > > > Cc: Dan Williams <dan.j.williams@intel.com>
> > > > Cc: Arnd Bergmann <arnd@arndb.de>
> > > > Cc: Balbir Singh <bsingharora@gmail.com>
> > > > Cc: Dan Carpenter <dan.carpenter@oracle.com>
> > > > Cc: Matthew Wilcox <willy@infradead.org>
> > > > Cc: Souptick Joarder <jrdr.linux@gmail.com>
> > > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > >  include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
> > > >  mm/hmm.c            | 51 ++++++++++++++++----------------
> > > >  2 files changed, 62 insertions(+), 60 deletions(-)
> > > 
> > > Applied to hmm.git, thanks
> > 
> > Can you hold off, i was already collecting patches and we will
> > be stepping on each other toe ... for instance i had
> 
> I'd really rather not, I have a lot of work to do for this cycle and
> this part needs to start to move forward now. I can't do everything
> last minute, sorry.
> 
> The patches I picked up all look very safe to move ahead.

I want to post all the patches you need to apply soon. It is really
painful because there are a lot of different branches I have to work
with; if you start pulling patches that differ from the branch below,
then you are making things even more difficult for me.

If you hold off, I will be posting all the patches in one big set so
that you can apply all of them in one go, and it will be a _lot_
easier for me that way.

> 
> > https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-5.3
> 
> I'm aware, and am referring to this tree. You can trivially rebase it
> on top of hmm.git..
> 
> BTW, what were you planning to do with this git branch anyhow?

This is just something I use to do testing and to stack up all the patches.

> 
> As we'd already agreed I will send the hmm patches to Linus on a clean
> git branch so we can properly collaborate between the various involved
> trees.
> 
> As a tree-runner I very much prefer to take patches directly from the
> mailing list where everything is public. This is the standard kernel
> workflow.

Like I said above, I want to resend all the patches in one big set.

On the process side, it would be easier if we ask Dave/Daniel to merge
hmm within drm this cycle. Merging with Linus will break drm drivers
and it seems easier to me to fix all this within the drm tree.

But if you want to do everything with Linus, fine.

Cheers,
Jérôme
Jason Gunthorpe June 6, 2019, 3:57 p.m. UTC | #5
On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
> @@ -924,6 +922,7 @@ int hmm_range_register(struct hmm_range *range,
>  		       unsigned page_shift)
>  {
>  	unsigned long mask = ((1UL << page_shift) - 1UL);
> +	struct hmm *hmm;
>  
>  	range->valid = false;
>  	range->hmm = NULL;

I was finishing these patches off and noticed that 'hmm' above is
never initialized.

I added the below to this patch:

diff --git a/mm/hmm.c b/mm/hmm.c
index 678873eb21930a..8e7403f081f44a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -932,19 +932,20 @@ int hmm_range_register(struct hmm_range *range,
 	range->start = start;
 	range->end = end;
 
-	range->hmm = hmm_get_or_create(mm);
-	if (!range->hmm)
+	hmm = hmm_get_or_create(mm);
+	if (!hmm)
 		return -EFAULT;
 
 	/* Check if hmm_mm_destroy() was call. */
-	if (range->hmm->mm == NULL || range->hmm->dead) {
-		hmm_put(range->hmm);
+	if (hmm->mm == NULL || hmm->dead) {
+		hmm_put(hmm);
 		return -EFAULT;
 	}
 
 	/* Initialize range to track CPU page table updates. */
-	mutex_lock(&range->hmm->lock);
+	mutex_lock(&hmm->lock);
 
+	range->hmm = hmm;
 	list_add_rcu(&range->list, &hmm->ranges);
 
 	/*

Which I think was the intent of adding the 'struct hmm *'. I prefer
this arrangement as it does not leave an invalid hmm pointer in
the range if there is a failure.
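
In other words (a minimal sketch with placeholder names, not the actual
mm/hmm.c code): do all the fallible work on a local pointer and only
publish it into the caller-visible structure once nothing can fail
anymore:

/*
 * Illustrative only -- placeholder names, not mm/hmm.c. The point is
 * that an error return never leaves a stale pointer in the range.
 */
#include <stdlib.h>

struct obj { int dead; };                       /* stands in for struct hmm */
struct range_demo { struct obj *obj; };         /* stands in for hmm_range */

static struct obj *obj_get_or_create(void)      /* may fail and return NULL */
{
	return calloc(1, sizeof(struct obj));
}

static void obj_put(struct obj *obj)            /* drop the reference */
{
	free(obj);
}

static int range_register_demo(struct range_demo *range)
{
	struct obj *obj = obj_get_or_create();  /* work on a local pointer */

	if (!obj)
		return -1;                      /* range->obj is untouched */

	if (obj->dead) {
		obj_put(obj);                   /* failure path drops the ref */
		return -1;                      /* still no stale pointer */
	}

	range->obj = obj;                       /* publish only after success */
	return 0;
}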

Most probably the later patches fixed this up?

Please confirm, thanks

Regards,
Jason
Joe Perches June 6, 2019, 4:55 p.m. UTC | #6
On Thu, 2019-06-06 at 11:52 -0400, Jerome Glisse wrote:
> On Thu, Jun 06, 2019 at 12:41:29PM -0300, Jason Gunthorpe wrote:
> > On Thu, Jun 06, 2019 at 10:27:43AM -0400, Jerome Glisse wrote:
> > > On Thu, Jun 06, 2019 at 11:16:44AM -0300, Jason Gunthorpe wrote:
> > > > On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
> > > > > From: Ralph Campbell <rcampbell@nvidia.com>
> > > > > 
> > > > > There are no functional changes, just some coding style clean ups and
> > > > > minor comment changes.
> > > > > 
> > > > > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > > > > Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
> > > > > Cc: John Hubbard <jhubbard@nvidia.com>
> > > > > Cc: Ira Weiny <ira.weiny@intel.com>
> > > > > Cc: Dan Williams <dan.j.williams@intel.com>
> > > > > Cc: Arnd Bergmann <arnd@arndb.de>
> > > > > Cc: Balbir Singh <bsingharora@gmail.com>
> > > > > Cc: Dan Carpenter <dan.carpenter@oracle.com>
> > > > > Cc: Matthew Wilcox <willy@infradead.org>
> > > > > Cc: Souptick Joarder <jrdr.linux@gmail.com>
> > > > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > > >  include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
> > > > >  mm/hmm.c            | 51 ++++++++++++++++----------------
> > > > >  2 files changed, 62 insertions(+), 60 deletions(-)
> > > > 
> > > > Applied to hmm.git, thanks
> > > 
> > > 
> > > Can you hold off, i was already collecting patches and we will
> > > be stepping on each other toe ... for instance i had
> > 
> > I'd really rather not, I have a lot of work to do for this cycle and
> > this part needs to start to move forward now. I can't do everything
> > last minute, sorry.
> > 
> > The patches I picked up all look very safe to move ahead.
> 
> I want to post all the patch you need to apply soon, it is really
> painful because they are lot of different branches i have to work
> with if you start pulling patches that differ from the below branch
> then you are making thing ever more difficult for me.
> 
> If you hold of i will be posting all the patches in one big set so
> that you can apply all of them in one go and it will be a _lot_
> easier for me that way.

Easier for you is not necessarily easier for a community.
Publish early and often.
Jason Gunthorpe June 6, 2019, 6:54 p.m. UTC | #7
On Thu, Jun 06, 2019 at 11:52:13AM -0400, Jerome Glisse wrote:
> On Thu, Jun 06, 2019 at 12:41:29PM -0300, Jason Gunthorpe wrote:
> > On Thu, Jun 06, 2019 at 10:27:43AM -0400, Jerome Glisse wrote:
> > > On Thu, Jun 06, 2019 at 11:16:44AM -0300, Jason Gunthorpe wrote:
> > > > On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
> > > > > From: Ralph Campbell <rcampbell@nvidia.com>
> > > > > 
> > > > > There are no functional changes, just some coding style clean ups and
> > > > > minor comment changes.
> > > > > 
> > > > > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > > > > Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
> > > > > Cc: John Hubbard <jhubbard@nvidia.com>
> > > > > Cc: Ira Weiny <ira.weiny@intel.com>
> > > > > Cc: Dan Williams <dan.j.williams@intel.com>
> > > > > Cc: Arnd Bergmann <arnd@arndb.de>
> > > > > Cc: Balbir Singh <bsingharora@gmail.com>
> > > > > Cc: Dan Carpenter <dan.carpenter@oracle.com>
> > > > > Cc: Matthew Wilcox <willy@infradead.org>
> > > > > Cc: Souptick Joarder <jrdr.linux@gmail.com>
> > > > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > > >  include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
> > > > >  mm/hmm.c            | 51 ++++++++++++++++----------------
> > > > >  2 files changed, 62 insertions(+), 60 deletions(-)
> > > > 
> > > > Applied to hmm.git, thanks
> > > 
> > > Can you hold off, i was already collecting patches and we will
> > > be stepping on each other toe ... for instance i had
> > 
> > I'd really rather not, I have a lot of work to do for this cycle and
> > this part needs to start to move forward now. I can't do everything
> > last minute, sorry.
> > 
> > The patches I picked up all look very safe to move ahead.
> 
> I want to post all the patch you need to apply soon, it is really
> painful because they are lot of different branches

I've already handled everything in your hmm-5.3, so I don't think
there is anything for you to do in that regard. Please double check
though!

If you have new patches please post them against something sensible
(and put them in a git branch) and I can usually sort out 'git am'
conflicts pretty quickly.

> If you hold of i will be posting all the patches in one big set so
> that you can apply all of them in one go and it will be a _lot_
> easier for me that way.

You don't need to repost my patches, I can do that myself, but thanks
for all the help getting them ready! Please respond to my v2 with more
reviews/acks/changes/etc. so the series can move toward being
applied.

> On process thing it would be easier if we ask Dave/Daniel to merge
> hmm within drm this cycle. 

Yes, I expect we will do this - probably also to the AMD tree, judging
by what is in -next. This is the entire point of running a shared tree.

> Merging with Linus will break drm drivers and it seems easier to me
> to fix all this within the drm tree.

This is the normal process with a shared tree: we merge the tree
*everywhere it is required* so all trees can run concurrently.

I will *also* send it to Linus early so that Linus reviews the hmm
patches in the HMM pull request, not in the DRM or RDMA pull
request. This is best-practice when working across trees like this.

Please just keep me up to date when conflicts arise and we
will work out the best solution.

Reminder, I still need patches from you for:
 - Fix all the kconfig stuff for randconfig failures/etc
 - Enable ARM64
 - Remove deprecated APIs from hmm.h

Please send them ASAP so it can be tested.

There shouldn't be any patches held back for 5.4 - send them all now.

Thanks,
Jason
Ralph Campbell June 7, 2019, 12:44 a.m. UTC | #8
On 6/6/19 8:57 AM, Jason Gunthorpe wrote:
> On Mon, May 06, 2019 at 04:29:39PM -0700, rcampbell@nvidia.com wrote:
>> @@ -924,6 +922,7 @@ int hmm_range_register(struct hmm_range *range,
>>   		       unsigned page_shift)
>>   {
>>   	unsigned long mask = ((1UL << page_shift) - 1UL);
>> +	struct hmm *hmm;
>>   
>>   	range->valid = false;
>>   	range->hmm = NULL;
> 
> I was finishing these patches off and noticed that 'hmm' above is
> never initialized.
> 
> I added the below to this patch:
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 678873eb21930a..8e7403f081f44a 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -932,19 +932,20 @@ int hmm_range_register(struct hmm_range *range,
>   	range->start = start;
>   	range->end = end;
>   
> -	range->hmm = hmm_get_or_create(mm);
> -	if (!range->hmm)
> +	hmm = hmm_get_or_create(mm);
> +	if (!hmm)
>   		return -EFAULT;
>   
>   	/* Check if hmm_mm_destroy() was call. */
> -	if (range->hmm->mm == NULL || range->hmm->dead) {
> -		hmm_put(range->hmm);
> +	if (hmm->mm == NULL || hmm->dead) {
> +		hmm_put(hmm);
>   		return -EFAULT;
>   	}
>   
>   	/* Initialize range to track CPU page table updates. */
> -	mutex_lock(&range->hmm->lock);
> +	mutex_lock(&hmm->lock);
>   
> +	range->hmm = hmm;
>   	list_add_rcu(&range->list, &hmm->ranges);
>   
>   	/*
> 
> Which I think was the intent of adding the 'struct hmm *'. I prefer
> this arrangement as it does not set an leave an invalid hmm pointer in
> the range if there is a failure..
> 
> Most probably the later patches fixed this up?
> 
> Please confirm, thanks
> 
> Regards,
> Jason
> 

Yes, you understand correctly. That was the intended cleanup.
I must have split my original patch set incorrectly.

Patch

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 51ec27a84668..35a429621e1e 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -30,8 +30,8 @@ 
  *
  * HMM address space mirroring API:
  *
- * Use HMM address space mirroring if you want to mirror range of the CPU page
- * table of a process into a device page table. Here, "mirror" means "keep
+ * Use HMM address space mirroring if you want to mirror a range of the CPU
+ * page tables of a process into a device page table. Here, "mirror" means "keep
  * synchronized". Prerequisites: the device must provide the ability to write-
  * protect its page tables (at PAGE_SIZE granularity), and must be able to
  * recover from the resulting potential page faults.
@@ -114,10 +114,11 @@  struct hmm {
  * HMM_PFN_WRITE: CPU page table has write permission set
  * HMM_PFN_DEVICE_PRIVATE: private device memory (ZONE_DEVICE)
  *
- * The driver provide a flags array, if driver valid bit for an entry is bit
- * 3 ie (entry & (1 << 3)) is true if entry is valid then driver must provide
+ * The driver provides a flags array for mapping page protections to device
+ * PTE bits. If the driver valid bit for an entry is bit 3,
+ * i.e., (entry & (1 << 3)), then the driver must provide
  * an array in hmm_range.flags with hmm_range.flags[HMM_PFN_VALID] == 1 << 3.
- * Same logic apply to all flags. This is same idea as vm_page_prot in vma
+ * Same logic apply to all flags. This is the same idea as vm_page_prot in vma
  * except that this is per device driver rather than per architecture.
  */
 enum hmm_pfn_flag_e {
@@ -138,13 +139,13 @@  enum hmm_pfn_flag_e {
  *      be mirrored by a device, because the entry will never have HMM_PFN_VALID
  *      set and the pfn value is undefined.
  *
- * Driver provide entry value for none entry, error entry and special entry,
- * driver can alias (ie use same value for error and special for instance). It
- * should not alias none and error or special.
+ * Driver provides values for none entry, error entry, and special entry.
+ * Driver can alias (i.e., use same value) error and special, but
+ * it should not alias none with error or special.
  *
  * HMM pfn value returned by hmm_vma_get_pfns() or hmm_vma_fault() will be:
  * hmm_range.values[HMM_PFN_ERROR] if CPU page table entry is poisonous,
- * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table
+ * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table entry,
  * hmm_range.values[HMM_PFN_SPECIAL] if CPU page table entry is a special one
  */
 enum hmm_pfn_value_e {
@@ -167,6 +168,7 @@  enum hmm_pfn_value_e {
  * @values: pfn value for some special case (none, special, error, ...)
  * @default_flags: default flags for the range (write, read, ... see hmm doc)
  * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter
+ * @page_shift: device virtual address shift value (should be >= PAGE_SHIFT)
  * @pfn_shifts: pfn shift value (should be <= PAGE_SHIFT)
  * @valid: pfns array did not change since it has been fill by an HMM function
  */
@@ -189,7 +191,7 @@  struct hmm_range {
 /*
  * hmm_range_page_shift() - return the page shift for the range
  * @range: range being queried
- * Returns: page shift (page size = 1 << page shift) for the range
+ * Return: page shift (page size = 1 << page shift) for the range
  */
 static inline unsigned hmm_range_page_shift(const struct hmm_range *range)
 {
@@ -199,7 +201,7 @@  static inline unsigned hmm_range_page_shift(const struct hmm_range *range)
 /*
  * hmm_range_page_size() - return the page size for the range
  * @range: range being queried
- * Returns: page size for the range in bytes
+ * Return: page size for the range in bytes
  */
 static inline unsigned long hmm_range_page_size(const struct hmm_range *range)
 {
@@ -210,7 +212,7 @@  static inline unsigned long hmm_range_page_size(const struct hmm_range *range)
  * hmm_range_wait_until_valid() - wait for range to be valid
  * @range: range affected by invalidation to wait on
  * @timeout: time out for wait in ms (ie abort wait after that period of time)
- * Returns: true if the range is valid, false otherwise.
+ * Return: true if the range is valid, false otherwise.
  */
 static inline bool hmm_range_wait_until_valid(struct hmm_range *range,
 					      unsigned long timeout)
@@ -231,7 +233,7 @@  static inline bool hmm_range_wait_until_valid(struct hmm_range *range,
 /*
  * hmm_range_valid() - test if a range is valid or not
  * @range: range
- * Returns: true if the range is valid, false otherwise.
+ * Return: true if the range is valid, false otherwise.
  */
 static inline bool hmm_range_valid(struct hmm_range *range)
 {
@@ -242,7 +244,7 @@  static inline bool hmm_range_valid(struct hmm_range *range)
  * hmm_device_entry_to_page() - return struct page pointed to by a device entry
  * @range: range use to decode device entry value
  * @entry: device entry value to get corresponding struct page from
- * Returns: struct page pointer if entry is a valid, NULL otherwise
+ * Return: struct page pointer if entry is a valid, NULL otherwise
  *
  * If the device entry is valid (ie valid flag set) then return the struct page
  * matching the entry value. Otherwise return NULL.
@@ -265,7 +267,7 @@  static inline struct page *hmm_device_entry_to_page(const struct hmm_range *rang
  * hmm_device_entry_to_pfn() - return pfn value store in a device entry
  * @range: range use to decode device entry value
  * @entry: device entry to extract pfn from
- * Returns: pfn value if device entry is valid, -1UL otherwise
+ * Return: pfn value if device entry is valid, -1UL otherwise
  */
 static inline unsigned long
 hmm_device_entry_to_pfn(const struct hmm_range *range, uint64_t pfn)
@@ -285,7 +287,7 @@  hmm_device_entry_to_pfn(const struct hmm_range *range, uint64_t pfn)
  * hmm_device_entry_from_page() - create a valid device entry for a page
  * @range: range use to encode HMM pfn value
  * @page: page for which to create the device entry
- * Returns: valid device entry for the page
+ * Return: valid device entry for the page
  */
 static inline uint64_t hmm_device_entry_from_page(const struct hmm_range *range,
 						  struct page *page)
@@ -298,7 +300,7 @@  static inline uint64_t hmm_device_entry_from_page(const struct hmm_range *range,
  * hmm_device_entry_from_pfn() - create a valid device entry value from pfn
  * @range: range use to encode HMM pfn value
  * @pfn: pfn value for which to create the device entry
- * Returns: valid device entry for the pfn
+ * Return: valid device entry for the pfn
  */
 static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
 						 unsigned long pfn)
@@ -403,7 +405,7 @@  enum hmm_update_event {
 };
 
 /*
- * struct hmm_update - HMM update informations for callback
+ * struct hmm_update - HMM update information for callback
  *
  * @start: virtual start address of the range to update
  * @end: virtual end address of the range to update
@@ -436,8 +438,8 @@  struct hmm_mirror_ops {
 	/* sync_cpu_device_pagetables() - synchronize page tables
 	 *
 	 * @mirror: pointer to struct hmm_mirror
-	 * @update: update informations (see struct hmm_update)
-	 * Returns: -EAGAIN if update.blockable false and callback need to
+	 * @update: update information (see struct hmm_update)
+	 * Return: -EAGAIN if update.blockable false and callback need to
 	 *          block, 0 otherwise.
 	 *
 	 * This callback ultimately originates from mmu_notifiers when the CPU
@@ -476,13 +478,13 @@  void hmm_mirror_unregister(struct hmm_mirror *mirror);
 /*
  * hmm_mirror_mm_is_alive() - test if mm is still alive
  * @mirror: the HMM mm mirror for which we want to lock the mmap_sem
- * Returns: false if the mm is dead, true otherwise
+ * Return: false if the mm is dead, true otherwise
  *
- * This is an optimization it will not accurately always return -EINVAL if the
- * mm is dead ie there can be false negative (process is being kill but HMM is
- * not yet inform of that). It is only intented to be use to optimize out case
- * where driver is about to do something time consuming and it would be better
- * to skip it if the mm is dead.
+ * This is an optimization, it will not always accurately return false if the
+ * mm is dead; i.e., there can be false negatives (process is being killed but
+ * HMM is not yet informed of that). It is only intended to be used to optimize
+ * out cases where the driver is about to do something time consuming and it
+ * would be better to skip it if the mm is dead.
  */
 static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror)
 {
@@ -497,7 +499,6 @@  static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror)
 	return true;
 }
 
-
 /*
  * Please see Documentation/vm/hmm.rst for how to use the range API.
  */
@@ -570,7 +571,7 @@  static inline int hmm_vma_fault(struct hmm_range *range, bool block)
 	ret = hmm_range_fault(range, block);
 	if (ret <= 0) {
 		if (ret == -EBUSY || !ret) {
-			/* Same as above  drop mmap_sem to match old API. */
+			/* Same as above, drop mmap_sem to match old API. */
 			up_read(&range->vma->vm_mm->mmap_sem);
 			ret = -EBUSY;
 		} else if (ret == -EAGAIN)
@@ -637,7 +638,7 @@  struct hmm_devmem_ops {
 	 * @page: pointer to struct page backing virtual address (unreliable)
 	 * @flags: FAULT_FLAG_* (see include/linux/mm.h)
 	 * @pmdp: page middle directory
-	 * Returns: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR
+	 * Return: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR
 	 *   on error
 	 *
 	 * The callback occurs whenever there is a CPU page fault or GUP on a
@@ -645,14 +646,14 @@  struct hmm_devmem_ops {
 	 * page back to regular memory (CPU accessible).
 	 *
 	 * The device driver is free to migrate more than one page from the
-	 * fault() callback as an optimization. However if device decide to
-	 * migrate more than one page it must always priotirize the faulting
+	 * fault() callback as an optimization. However if the device decides
+	 * to migrate more than one page it must always priotirize the faulting
 	 * address over the others.
 	 *
-	 * The struct page pointer is only given as an hint to allow quick
+	 * The struct page pointer is only given as a hint to allow quick
 	 * lookup of internal device driver data. A concurrent migration
-	 * might have already free that page and the virtual address might
-	 * not longer be back by it. So it should not be modified by the
+	 * might have already freed that page and the virtual address might
+	 * no longer be backed by it. So it should not be modified by the
 	 * callback.
 	 *
 	 * Note that mmap semaphore is held in read mode at least when this
@@ -679,7 +680,7 @@  struct hmm_devmem_ops {
  * @ref: per CPU refcount
  * @page_fault: callback when CPU fault on an unaddressable device page
  *
- * This an helper structure for device drivers that do not wish to implement
+ * This is a helper structure for device drivers that do not wish to implement
  * the gory details related to hotplugging new memoy and allocating struct
  * pages.
  *
diff --git a/mm/hmm.c b/mm/hmm.c
index 0db8491090b8..f6c4c8633db9 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -162,9 +162,8 @@  static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 
 	/* Wake-up everyone waiting on any range. */
 	mutex_lock(&hmm->lock);
-	list_for_each_entry(range, &hmm->ranges, list) {
+	list_for_each_entry(range, &hmm->ranges, list)
 		range->valid = false;
-	}
 	wake_up_all(&hmm->wq);
 	mutex_unlock(&hmm->lock);
 
@@ -175,9 +174,10 @@  static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 		list_del_init(&mirror->list);
 		if (mirror->ops->release) {
 			/*
-			 * Drop mirrors_sem so callback can wait on any pending
-			 * work that might itself trigger mmu_notifier callback
-			 * and thus would deadlock with us.
+			 * Drop mirrors_sem so the release callback can wait
+			 * on any pending work that might itself trigger a
+			 * mmu_notifier callback and thus would deadlock with
+			 * us.
 			 */
 			up_write(&hmm->mirrors_sem);
 			mirror->ops->release(mirror);
@@ -232,11 +232,8 @@  static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 		int ret;
 
 		ret = mirror->ops->sync_cpu_device_pagetables(mirror, &update);
-		if (!update.blockable && ret == -EAGAIN) {
-			up_read(&hmm->mirrors_sem);
-			ret = -EAGAIN;
-			goto out;
-		}
+		if (!update.blockable && ret == -EAGAIN)
+			break;
 	}
 	up_read(&hmm->mirrors_sem);
 
@@ -280,6 +277,7 @@  static const struct mmu_notifier_ops hmm_mmu_notifier_ops = {
  *
  * @mirror: new mirror struct to register
  * @mm: mm to register against
+ * Return: 0 on success, -ENOMEM if no memory, -EINVAL if invalid arguments
  *
  * To start mirroring a process address space, the device driver must register
  * an HMM mirror struct.
@@ -307,7 +305,7 @@  EXPORT_SYMBOL(hmm_mirror_register);
 /*
  * hmm_mirror_unregister() - unregister a mirror
  *
- * @mirror: new mirror struct to register
+ * @mirror: mirror struct to unregister
  *
  * Stop mirroring a process address space, and cleanup.
  */
@@ -381,7 +379,7 @@  static int hmm_pfns_bad(unsigned long addr,
  * @fault: should we fault or not ?
  * @write_fault: write fault ?
  * @walk: mm_walk structure
- * Returns: 0 on success, -EBUSY after page fault, or page fault error
+ * Return: 0 on success, -EBUSY after page fault, or page fault error
  *
  * This function will be called whenever pmd_none() or pte_none() returns true,
  * or whenever there is no page directory covering the virtual address range.
@@ -924,6 +922,7 @@  int hmm_range_register(struct hmm_range *range,
 		       unsigned page_shift)
 {
 	unsigned long mask = ((1UL << page_shift) - 1UL);
+	struct hmm *hmm;
 
 	range->valid = false;
 	range->hmm = NULL;
@@ -947,18 +946,18 @@  int hmm_range_register(struct hmm_range *range,
 		return -EFAULT;
 	}
 
-	/* Initialize range to track CPU page table update */
+	/* Initialize range to track CPU page table updates. */
 	mutex_lock(&range->hmm->lock);
 
-	list_add_rcu(&range->list, &range->hmm->ranges);
+	list_add_rcu(&range->list, &hmm->ranges);
 
 	/*
 	 * If there are any concurrent notifiers we have to wait for them for
 	 * the range to be valid (see hmm_range_wait_until_valid()).
 	 */
-	if (!range->hmm->notifiers)
+	if (!hmm->notifiers)
 		range->valid = true;
-	mutex_unlock(&range->hmm->lock);
+	mutex_unlock(&hmm->lock);
 
 	return 0;
 }
@@ -973,17 +972,19 @@  EXPORT_SYMBOL(hmm_range_register);
  */
 void hmm_range_unregister(struct hmm_range *range)
 {
+	struct hmm *hmm = range->hmm;
+
 	/* Sanity check this really should not happen. */
-	if (range->hmm == NULL || range->end <= range->start)
+	if (hmm == NULL || range->end <= range->start)
 		return;
 
-	mutex_lock(&range->hmm->lock);
+	mutex_lock(&hmm->lock);
 	list_del_rcu(&range->list);
-	mutex_unlock(&range->hmm->lock);
+	mutex_unlock(&hmm->lock);
 
 	/* Drop reference taken by hmm_range_register() */
 	range->valid = false;
-	hmm_put(range->hmm);
+	hmm_put(hmm);
 	range->hmm = NULL;
 }
 EXPORT_SYMBOL(hmm_range_unregister);
@@ -991,7 +992,7 @@  EXPORT_SYMBOL(hmm_range_unregister);
 /*
  * hmm_range_snapshot() - snapshot CPU page table for a range
  * @range: range
- * Returns: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid
+ * Return: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid
  *          permission (for instance asking for write and range is read only),
  *          -EAGAIN if you need to retry, -EFAULT invalid (ie either no valid
  *          vma or it is illegal to access that range), number of valid pages
@@ -1075,7 +1076,7 @@  EXPORT_SYMBOL(hmm_range_snapshot);
  * hmm_range_fault() - try to fault some address in a virtual address range
  * @range: range being faulted
  * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem)
- * Returns: number of valid pages in range->pfns[] (from range start
+ * Return: number of valid pages in range->pfns[] (from range start
  *          address). This may be zero. If the return value is negative,
  *          then one of the following values may be returned:
  *
@@ -1193,7 +1194,7 @@  EXPORT_SYMBOL(hmm_range_fault);
  * @device: device against to dma map page to
  * @daddrs: dma address of mapped pages
  * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem)
- * Returns: number of pages mapped on success, -EAGAIN if mmap_sem have been
+ * Return: number of pages mapped on success, -EAGAIN if mmap_sem have been
  *          drop and you need to try again, some other error value otherwise
  *
  * Note same usage pattern as hmm_range_fault().
@@ -1281,7 +1282,7 @@  EXPORT_SYMBOL(hmm_range_dma_map);
  * @device: device against which dma map was done
  * @daddrs: dma address of mapped pages
  * @dirty: dirty page if it had the write flag set
- * Returns: number of page unmapped on success, -EINVAL otherwise
+ * Return: number of page unmapped on success, -EINVAL otherwise
  *
  * Note that caller MUST abide by mmu notifier or use HMM mirror and abide
  * to the sync_cpu_device_pagetables() callback so that it is safe here to
@@ -1404,7 +1405,7 @@  static void hmm_devmem_free(struct page *page, void *data)
  * @ops: memory event device driver callback (see struct hmm_devmem_ops)
  * @device: device struct to bind the resource too
  * @size: size in bytes of the device memory to add
- * Returns: pointer to new hmm_devmem struct ERR_PTR otherwise
+ * Return: pointer to new hmm_devmem struct ERR_PTR otherwise
  *
  * This function first finds an empty range of physical address big enough to
  * contain the new resource, and then hotplugs it as ZONE_DEVICE memory, which