
[linux-next] mm/huge_memory: remove redundant locking when parsing THP sysfs input

Message ID 20240511144048767fdB7EqYoMHEw6A5b6FrXM@zte.com.cn (mailing list archive)
State New
Series [linux-next] mm/huge_memory: remove redundant locking when parsing THP sysfs input

Commit Message

xu.xin16@zte.com.cn May 11, 2024, 6:40 a.m. UTC
From: Ran Xiaokai <ran.xiaokai@zte.com.cn>

Since sysfs_streq() only performs a simple memory comparison and does
not introduce any sleepable operation, there is no need to avoid
holding the lock while parsing the input. Take the lock once around
the whole if/else chain and remove the redundant per-branch lock and
unlock operations to make the code cleaner.

Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
---
 mm/huge_memory.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)
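
For reference, this is roughly what thpsize_enabled_store() looks like
with the patch applied, reconstructed from the hunk in the patch below.
The parameter list beyond the first line is assumed from the usual
kobj_attribute store prototype rather than taken verbatim from the diff:

static ssize_t thpsize_enabled_store(struct kobject *kobj,
				     struct kobj_attribute *attr,
				     const char *buf, size_t count)
{
	int order = to_thpsize(kobj)->order;
	ssize_t ret = count;

	/* sysfs_streq() is a plain string comparison, so it is safe to
	 * hold the spinlock across the whole if/else chain. */
	spin_lock(&huge_anon_orders_lock);
	if (sysfs_streq(buf, "always")) {
		clear_bit(order, &huge_anon_orders_inherit);
		clear_bit(order, &huge_anon_orders_madvise);
		set_bit(order, &huge_anon_orders_always);
	} else if (sysfs_streq(buf, "inherit")) {
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_madvise);
		set_bit(order, &huge_anon_orders_inherit);
	} else if (sysfs_streq(buf, "madvise")) {
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_inherit);
		set_bit(order, &huge_anon_orders_madvise);
	} else if (sysfs_streq(buf, "never")) {
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_inherit);
		clear_bit(order, &huge_anon_orders_madvise);
	} else
		ret = -EINVAL;
	spin_unlock(&huge_anon_orders_lock);

	return ret;
}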

Comments

Matthew Wilcox May 11, 2024, 4:05 p.m. UTC | #1
On Sat, May 11, 2024 at 02:40:48PM +0800, xu.xin16@zte.com.cn wrote:
> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> 
> Since sysfs_streq() only performs a simple memory comparison and does
> not introduce any sleepable operation, there is no need to avoid
> holding the lock while parsing the input. Take the lock once around
> the whole if/else chain and remove the redundant per-branch lock and
> unlock operations to make the code cleaner.

I disagree that it makes the code cleaner.
Andrew Morton May 11, 2024, 9:35 p.m. UTC | #2
(cc Muchun)

On Sat, 11 May 2024 17:05:13 +0100 Matthew Wilcox <willy@infradead.org> wrote:

> On Sat, May 11, 2024 at 02:40:48PM +0800, xu.xin16@zte.com.cn wrote:
> > From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> > 
> > Since sysfs_streq() only performs a simple memory comparison and does
> > not introduce any sleepable operation, there is no need to avoid
> > holding the lock while parsing the input. Take the lock once around
> > the whole if/else chain and remove the redundant per-branch lock and
> > unlock operations to make the code cleaner.
> 
> I disagree that it makes the code cleaner.

Oh.  Why is that?  The end result looks nice to me and saves a bit of
.text.
David Hildenbrand May 13, 2024, 3:58 p.m. UTC | #3
On 11.05.24 08:40, xu.xin16@zte.com.cn wrote:
> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> 
> Since sysfs_streq() only performs a simple memory comparison and does
> not introduce any sleepable operation, there is no need to avoid
> holding the lock while parsing the input. Take the lock once around
> the whole if/else chain and remove the redundant per-branch lock and
> unlock operations to make the code cleaner.
> 
> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> ---
>   mm/huge_memory.c | 10 ++--------
>   1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 89f58c7603b2..87123a87cb21 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -478,32 +478,26 @@ static ssize_t thpsize_enabled_store(struct kobject *kobj,
>   	int order = to_thpsize(kobj)->order;
>   	ssize_t ret = count;
> 
> +	spin_lock(&huge_anon_orders_lock);
>   	if (sysfs_streq(buf, "always")) {
> -		spin_lock(&huge_anon_orders_lock);
>   		clear_bit(order, &huge_anon_orders_inherit);
>   		clear_bit(order, &huge_anon_orders_madvise);
>   		set_bit(order, &huge_anon_orders_always);
> -		spin_unlock(&huge_anon_orders_lock);
>   	} else if (sysfs_streq(buf, "inherit")) {
> -		spin_lock(&huge_anon_orders_lock);
>   		clear_bit(order, &huge_anon_orders_always);
>   		clear_bit(order, &huge_anon_orders_madvise);
>   		set_bit(order, &huge_anon_orders_inherit);
> -		spin_unlock(&huge_anon_orders_lock);
>   	} else if (sysfs_streq(buf, "madvise")) {
> -		spin_lock(&huge_anon_orders_lock);
>   		clear_bit(order, &huge_anon_orders_always);
>   		clear_bit(order, &huge_anon_orders_inherit);
>   		set_bit(order, &huge_anon_orders_madvise);
> -		spin_unlock(&huge_anon_orders_lock);
>   	} else if (sysfs_streq(buf, "never")) {
> -		spin_lock(&huge_anon_orders_lock);
>   		clear_bit(order, &huge_anon_orders_always);
>   		clear_bit(order, &huge_anon_orders_inherit);
>   		clear_bit(order, &huge_anon_orders_madvise);
> -		spin_unlock(&huge_anon_orders_lock);
>   	} else
>   		ret = -EINVAL;
> +	spin_unlock(&huge_anon_orders_lock);
> 
>   	return ret;
>   }

No strong opinion

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 89f58c7603b2..87123a87cb21 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -478,32 +478,26 @@  static ssize_t thpsize_enabled_store(struct kobject *kobj,
 	int order = to_thpsize(kobj)->order;
 	ssize_t ret = count;

+	spin_lock(&huge_anon_orders_lock);
 	if (sysfs_streq(buf, "always")) {
-		spin_lock(&huge_anon_orders_lock);
 		clear_bit(order, &huge_anon_orders_inherit);
 		clear_bit(order, &huge_anon_orders_madvise);
 		set_bit(order, &huge_anon_orders_always);
-		spin_unlock(&huge_anon_orders_lock);
 	} else if (sysfs_streq(buf, "inherit")) {
-		spin_lock(&huge_anon_orders_lock);
 		clear_bit(order, &huge_anon_orders_always);
 		clear_bit(order, &huge_anon_orders_madvise);
 		set_bit(order, &huge_anon_orders_inherit);
-		spin_unlock(&huge_anon_orders_lock);
 	} else if (sysfs_streq(buf, "madvise")) {
-		spin_lock(&huge_anon_orders_lock);
 		clear_bit(order, &huge_anon_orders_always);
 		clear_bit(order, &huge_anon_orders_inherit);
 		set_bit(order, &huge_anon_orders_madvise);
-		spin_unlock(&huge_anon_orders_lock);
 	} else if (sysfs_streq(buf, "never")) {
-		spin_lock(&huge_anon_orders_lock);
 		clear_bit(order, &huge_anon_orders_always);
 		clear_bit(order, &huge_anon_orders_inherit);
 		clear_bit(order, &huge_anon_orders_madvise);
-		spin_unlock(&huge_anon_orders_lock);
 	} else
 		ret = -EINVAL;
+	spin_unlock(&huge_anon_orders_lock);

 	return ret;
 }
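
As a usage note, the handler above backs the per-size THP sysfs file at
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled. A minimal
user-space sketch exercising it follows; the hugepages-64kB directory is
an assumption and depends on which mTHP sizes the running kernel exposes:

#include <stdio.h>

int main(void)
{
	/* Assumed path: adjust the hugepages-<size>kB component to a
	 * directory that actually exists on the running kernel. */
	const char *path =
		"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* Accepted values are "always", "inherit", "madvise" and "never";
	 * anything else makes thpsize_enabled_store() return -EINVAL. */
	if (fputs("madvise", f) == EOF)
		perror("fputs");
	fclose(f);
	return 0;
}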