
[v3,04/12] mm: introduce page_ref_sub_return()

Message ID 20200201034029.4063170-5-jhubbard@nvidia.com (mailing list archive)
State Superseded
Series mm/gup: track FOLL_PIN pages

Commit Message

John Hubbard Feb. 1, 2020, 3:40 a.m. UTC
An upcoming patch requires subtracting a large chunk of refcounts from
a page, and checking what the resulting refcount is. This is a little
different than the usual "check for zero refcount" that many of the
page ref functions already do. However, it is similar to a few other
routines that (like this one) are generally useful for things such as
1-based refcounting.

Add page_ref_sub_return(), which subtracts a chunk of refcounts
atomically, and returns an atomic snapshot of the result.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/page_ref.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
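
For context, the sketch below shows the kind of caller the commit message has
in mind: remove a large bias worth of references in one atomic step, then
branch on the snapshot that comes back. This is not code from the series; the
bias constant and the helper name are hypothetical stand-ins, assuming only
the page_ref_sub_return() added by this patch and the existing __put_page():

#include <linux/mm.h>
#include <linux/page_ref.h>

/* Hypothetical bias: a stand-in for whatever large chunk a caller uses. */
#define EXAMPLE_PIN_BIAS	1024

/* Illustrative caller, not part of this patch. */
static void example_unpin_page(struct page *page)
{
	/*
	 * Subtract the whole bias and read the result in one atomic
	 * operation, so the returned snapshot is trustworthy.
	 */
	int refs = page_ref_sub_return(page, EXAMPLE_PIN_BIAS);

	/*
	 * If the snapshot hit zero, this caller dropped the last
	 * reference and is responsible for freeing the page.
	 */
	if (refs == 0)
		__put_page(page);
}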

Comments

Kirill A. Shutemov Feb. 3, 2020, 1:23 p.m. UTC | #1
On Fri, Jan 31, 2020 at 07:40:21PM -0800, John Hubbard wrote:
> An upcoming patch requires subtracting a large chunk of refcounts from
> a page, and checking what the resulting refcount is. This is a little
> different than the usual "check for zero refcount" that many of the
> page ref functions already do. However, it is similar to a few other
> routines that (like this one) are generally useful for things such as
> 1-based refcounting.
> 
> Add page_ref_sub_return(), which subtracts a chunk of refcounts
> atomically, and returns an atomic snapshot of the result.
> 
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  include/linux/page_ref.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 14d14beb1f7f..b9cbe553d1e7 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -102,6 +102,16 @@ static inline void page_ref_sub(struct page *page, int nr)
>  		__page_ref_mod(page, -nr);
>  }
>  
> +static inline int page_ref_sub_return(struct page *page, int nr)
> +{
> +	int ret = atomic_sub_return(nr, &page->_refcount);
> +
> +	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
> +		__page_ref_mod(page, -nr);

Shouldn't it be __page_ref_mod_and_return() and relevant tracepoint?

> +
> +	return ret;
> +}
> +
>  static inline void page_ref_inc(struct page *page)
>  {
>  	atomic_inc(&page->_refcount);
> -- 
> 2.25.0
>
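
For context before the reply below: the helpers in question are the
out-of-line tracepoint hooks declared in include/linux/page_ref.h (under
CONFIG_DEBUG_PAGE_REF). From memory of that header in this era of the
kernel, the relevant pieces look roughly like this, shown only to make the
question concrete; the _and_return variant also records the value the
operation returned, which the plain mod tracepoint does not:

extern struct tracepoint __tracepoint_page_ref_mod;
extern struct tracepoint __tracepoint_page_ref_mod_and_return;

/* Cheap static-key check: is this page_ref tracepoint enabled? */
#define page_ref_tracepoint_active(t)	static_key_false(&(t).key)

extern void __page_ref_mod(struct page *page, int v);
extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
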
John Hubbard Feb. 3, 2020, 8:03 p.m. UTC | #2
On 2/3/20 5:23 AM, Kirill A. Shutemov wrote:
> On Fri, Jan 31, 2020 at 07:40:21PM -0800, John Hubbard wrote:
>> An upcoming patch requires subtracting a large chunk of refcounts from
>> a page, and checking what the resulting refcount is. This is a little
>> different than the usual "check for zero refcount" that many of the
>> page ref functions already do. However, it is similar to a few other
>> routines that (like this one) are generally useful for things such as
>> 1-based refcounting.
>>
>> Add page_ref_sub_return(), which subtracts a chunk of refcounts
>> atomically, and returns an atomic snapshot of the result.
>>
>> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
>> ---
>>  include/linux/page_ref.h | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
>> index 14d14beb1f7f..b9cbe553d1e7 100644
>> --- a/include/linux/page_ref.h
>> +++ b/include/linux/page_ref.h
>> @@ -102,6 +102,16 @@ static inline void page_ref_sub(struct page *page, int nr)
>>  		__page_ref_mod(page, -nr);
>>  }
>>  
>> +static inline int page_ref_sub_return(struct page *page, int nr)
>> +{
>> +	int ret = atomic_sub_return(nr, &page->_refcount);
>> +
>> +	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
>> +		__page_ref_mod(page, -nr);
> 
> Shouldn't it be __page_ref_mod_and_return() and relevant tracepoint?


Why yes, it should. I didn't even notice that more precise function existed;
thanks for catching that. I've changed it to this for the next version of the
patchset:

static inline int page_ref_sub_return(struct page *page, int nr)
{
	int ret = atomic_sub_return(nr, &page->_refcount);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_return))
		__page_ref_mod_and_return(page, -nr, ret);
	return ret;
}



thanks,

Patch

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 14d14beb1f7f..b9cbe553d1e7 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -102,6 +102,16 @@ static inline void page_ref_sub(struct page *page, int nr)
 		__page_ref_mod(page, -nr);
 }
 
+static inline int page_ref_sub_return(struct page *page, int nr)
+{
+	int ret = atomic_sub_return(nr, &page->_refcount);
+
+	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
+		__page_ref_mod(page, -nr);
+
+	return ret;
+}
+
 static inline void page_ref_inc(struct page *page)
 {
 	atomic_inc(&page->_refcount);