
btrfs: Do not check for PagePrivate twice

Message ID 20191018181544.26515-1-rgoldwyn@suse.de (mailing list archive)
State New, archived
Series: btrfs: Do not check for PagePrivate twice

Commit Message

Goldwyn Rodrigues Oct. 18, 2019, 6:15 p.m. UTC
From: Goldwyn Rodrigues <rgoldwyn@suse.com>

We are checking PagePrivate twice: once without the lock and once with it.
Perform the check only once.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
---
 fs/btrfs/extent_io.c | 3 ---
 1 file changed, 3 deletions(-)

Comments

Filipe Manana Oct. 21, 2019, 9:52 a.m. UTC | #1
On Sat, Oct 19, 2019 at 10:05 AM Goldwyn Rodrigues <rgoldwyn@suse.de> wrote:
>
> From: Goldwyn Rodrigues <rgoldwyn@suse.com>
>
> We are checking PagePrivate twice, once with lock and once without.
> Perform the check only once.

Have you checked whether there's any performance degradation after
removing the check?
My guess is it's there to avoid taking the lock, as the lock can be
heavily contended on a system under heavy load (maybe even if it's not
too heavy, since we generate a lot of dirty metadata due to cow).
The page may be released at any point before we take the lock, which is
why we check it twice; after unlocking we are sure it can not be
released, because we took a reference on the extent buffer.

Thanks.

>
> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> ---
>  fs/btrfs/extent_io.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index cceaf05aada2..425ba359178c 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -3959,9 +3959,6 @@ int btree_write_cache_pages(struct address_space *mapping,
>                 for (i = 0; i < nr_pages; i++) {
>                         struct page *page = pvec.pages[i];
>
> -                       if (!PagePrivate(page))
> -                               continue;
> -
>                         spin_lock(&mapping->private_lock);
>                         if (!PagePrivate(page)) {
>                                 spin_unlock(&mapping->private_lock);
> --
> 2.16.4
>
David Sterba Nov. 28, 2019, 11:44 a.m. UTC | #2
On Mon, Oct 21, 2019 at 10:52:06AM +0100, Filipe Manana wrote:
> On Sat, Oct 19, 2019 at 10:05 AM Goldwyn Rodrigues <rgoldwyn@suse.de> wrote:
> > We are checking PagePrivate twice, once with lock and once without.
> > Perform the check only once.
> 
> Have you checked if there's some performance degradation after
> removing the check?
> My guess is it's there to avoid taking the lock, as the lock can be
> heavily used on a system under heavy load (maybe even if it's not too
> heavy, since we generate a lot of dirty metadata due to cow).
> The page may have been released after locking the mapping, that's why
> we check it twice, and after unlocking we are sure it can not be
> released due to taking a reference on the extent buffer.

That's my understanding as well, so the duplicate unlocked check should
stay.

Patch

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index cceaf05aada2..425ba359178c 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3959,9 +3959,6 @@ int btree_write_cache_pages(struct address_space *mapping,
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
-			if (!PagePrivate(page))
-				continue;
-
 			spin_lock(&mapping->private_lock);
 			if (!PagePrivate(page)) {
 				spin_unlock(&mapping->private_lock);