Message ID: 20200908075230.86856-10-wqu@suse.com (mailing list archive)
State: New, archived
Series: btrfs: add read-only support for subpage sector size
On 8.09.20 г. 10:52 ч., Qu Wenruo wrote:
> We have attach_extent_buffer_page() and it get utilized in
> btrfs_clone_extent_buffer() and alloc_extent_buffer().
>
> But in btrfs_release_extent_buffer_pages() we manually call
> detach_page_private().
>
> This is fine for current code, but if we're going to support subpage
> size, we will do a lot of more work other than just calling
> detach_page_private().
>
> This patch will extract the main work of btrfs_clone_extent_buffer()

Did you mean to type btrfs_release_extent_buffer_pages instead of
clone_extent_buffer?

> into detach_extent_buffer_page() so that later subpage size support can
> put their own code into them.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/extent_io.c | 58 +++++++++++++++++++-------------------------
>  1 file changed, 25 insertions(+), 33 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 3c8fe40f67fa..1cb41dab7a1d 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -4920,6 +4920,29 @@ int extent_buffer_under_io(const struct extent_buffer *eb)
>  		test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
>  }
>
> +static void detach_extent_buffer_page(struct extent_buffer *eb,
> +				      struct page *page)
> +{
> +	bool mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);

nit: Now you are performing the atomic op once per page rather than once
per-eb.

> +
> +	if (!page)
> +		return;
> +
> +	if (mapped)
> +		spin_lock(&page->mapping->private_lock);
> +	if (PagePrivate(page) && page->private == (unsigned long)eb) {
> +		BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
> +		BUG_ON(PageDirty(page));
> +		BUG_ON(PageWriteback(page));
> +		/* We need to make sure we haven't be attached to a new eb. */
> +		detach_page_private(page);
> +	}
> +	if (mapped)
> +		spin_unlock(&page->mapping->private_lock);
> +	/* One for when we allocated the page */
> +	put_page(page);
> +}
> +
>  /*
>   * Release all pages attached to the extent buffer.
>   */
> @@ -4927,43 +4950,12 @@ static void btrfs_release_extent_buffer_pages(struct extent_buffer *eb)
>  {
>  	int i;
>  	int num_pages;
> -	int mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
>
>  	BUG_ON(extent_buffer_under_io(eb));
>
>  	num_pages = num_extent_pages(eb);
> -	for (i = 0; i < num_pages; i++) {
> -		struct page *page = eb->pages[i];
> -
> -		if (!page)
> -			continue;
> -		if (mapped)
> -			spin_lock(&page->mapping->private_lock);
> -		/*
> -		 * We do this since we'll remove the pages after we've
> -		 * removed the eb from the radix tree, so we could race
> -		 * and have this page now attached to the new eb. So
> -		 * only clear page_private if it's still connected to
> -		 * this eb.
> -		 */
> -		if (PagePrivate(page) &&
> -		    page->private == (unsigned long)eb) {
> -			BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
> -			BUG_ON(PageDirty(page));
> -			BUG_ON(PageWriteback(page));
> -			/*
> -			 * We need to make sure we haven't be attached
> -			 * to a new eb.
> -			 */
> -			detach_page_private(page);
> -		}
> -
> -		if (mapped)
> -			spin_unlock(&page->mapping->private_lock);
> -
> -		/* One for when we allocated the page */
> -		put_page(page);
> -	}
> +	for (i = 0; i < num_pages; i++)
> +		detach_extent_buffer_page(eb, eb->pages[i]);
>  }
>
>  /*
On 2020/9/11 下午7:17, Nikolay Borisov wrote:
>
>
> On 8.09.20 г. 10:52 ч., Qu Wenruo wrote:
>> We have attach_extent_buffer_page() and it get utilized in
>> btrfs_clone_extent_buffer() and alloc_extent_buffer().
>>
>> But in btrfs_release_extent_buffer_pages() we manually call
>> detach_page_private().
>>
>> This is fine for current code, but if we're going to support subpage
>> size, we will do a lot of more work other than just calling
>> detach_page_private().
>>
>> This patch will extract the main work of btrfs_clone_extent_buffer()
>
> Did you mean to type btrfs_release_extent_buffer_pages instead of
> clone_extent_buffer ?

Oh, that's what I mean exactly...

>
>> into detach_extent_buffer_page() so that later subpage size support can
>> put their own code into them.
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>> ---
>>  fs/btrfs/extent_io.c | 58 +++++++++++++++++++-------------------------
>>  1 file changed, 25 insertions(+), 33 deletions(-)
>>
>> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
>> index 3c8fe40f67fa..1cb41dab7a1d 100644
>> --- a/fs/btrfs/extent_io.c
>> +++ b/fs/btrfs/extent_io.c
>> @@ -4920,6 +4920,29 @@ int extent_buffer_under_io(const struct extent_buffer *eb)
>>  		test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
>>  }
>>
>> +static void detach_extent_buffer_page(struct extent_buffer *eb,
>> +				      struct page *page)
>> +{
>> +	bool mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
>
> nit: Now you are performing the atomic op once per page rather than once
> per-eb.

Makes sense, I should just extract the for () loop into a function
rather than the loop body.

Thanks,
Qu

>
>> +
>> +	if (!page)
>> +		return;
>> +
>> +	if (mapped)
>> +		spin_lock(&page->mapping->private_lock);
>> +	if (PagePrivate(page) && page->private == (unsigned long)eb) {
>> +		BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
>> +		BUG_ON(PageDirty(page));
>> +		BUG_ON(PageWriteback(page));
>> +		/* We need to make sure we haven't be attached to a new eb. */
>> +		detach_page_private(page);
>> +	}
>> +	if (mapped)
>> +		spin_unlock(&page->mapping->private_lock);
>> +	/* One for when we allocated the page */
>> +	put_page(page);
>> +}
>> +
>>  /*
>>   * Release all pages attached to the extent buffer.
>>   */
>> @@ -4927,43 +4950,12 @@ static void btrfs_release_extent_buffer_pages(struct extent_buffer *eb)
>>  {
>>  	int i;
>>  	int num_pages;
>> -	int mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
>>
>>  	BUG_ON(extent_buffer_under_io(eb));
>>
>>  	num_pages = num_extent_pages(eb);
>> -	for (i = 0; i < num_pages; i++) {
>> -		struct page *page = eb->pages[i];
>> -
>> -		if (!page)
>> -			continue;
>> -		if (mapped)
>> -			spin_lock(&page->mapping->private_lock);
>> -		/*
>> -		 * We do this since we'll remove the pages after we've
>> -		 * removed the eb from the radix tree, so we could race
>> -		 * and have this page now attached to the new eb. So
>> -		 * only clear page_private if it's still connected to
>> -		 * this eb.
>> -		 */
>> -		if (PagePrivate(page) &&
>> -		    page->private == (unsigned long)eb) {
>> -			BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
>> -			BUG_ON(PageDirty(page));
>> -			BUG_ON(PageWriteback(page));
>> -			/*
>> -			 * We need to make sure we haven't be attached
>> -			 * to a new eb.
>> -			 */
>> -			detach_page_private(page);
>> -		}
>> -
>> -		if (mapped)
>> -			spin_unlock(&page->mapping->private_lock);
>> -
>> -		/* One for when we allocated the page */
>> -		put_page(page);
>> -	}
>> +	for (i = 0; i < num_pages; i++)
>> +		detach_extent_buffer_page(eb, eb->pages[i]);
>>  }
>>
>>  /*
>>
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 3c8fe40f67fa..1cb41dab7a1d 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4920,6 +4920,29 @@ int extent_buffer_under_io(const struct extent_buffer *eb)
 		test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
 }
 
+static void detach_extent_buffer_page(struct extent_buffer *eb,
+				      struct page *page)
+{
+	bool mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
+
+	if (!page)
+		return;
+
+	if (mapped)
+		spin_lock(&page->mapping->private_lock);
+	if (PagePrivate(page) && page->private == (unsigned long)eb) {
+		BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
+		BUG_ON(PageDirty(page));
+		BUG_ON(PageWriteback(page));
+		/* We need to make sure we haven't be attached to a new eb. */
+		detach_page_private(page);
+	}
+	if (mapped)
+		spin_unlock(&page->mapping->private_lock);
+	/* One for when we allocated the page */
+	put_page(page);
+}
+
 /*
  * Release all pages attached to the extent buffer.
  */
@@ -4927,43 +4950,12 @@ static void btrfs_release_extent_buffer_pages(struct extent_buffer *eb)
 {
 	int i;
 	int num_pages;
-	int mapped = !test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags);
 
 	BUG_ON(extent_buffer_under_io(eb));
 
 	num_pages = num_extent_pages(eb);
-	for (i = 0; i < num_pages; i++) {
-		struct page *page = eb->pages[i];
-
-		if (!page)
-			continue;
-		if (mapped)
-			spin_lock(&page->mapping->private_lock);
-		/*
-		 * We do this since we'll remove the pages after we've
-		 * removed the eb from the radix tree, so we could race
-		 * and have this page now attached to the new eb. So
-		 * only clear page_private if it's still connected to
-		 * this eb.
-		 */
-		if (PagePrivate(page) &&
-		    page->private == (unsigned long)eb) {
-			BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
-			BUG_ON(PageDirty(page));
-			BUG_ON(PageWriteback(page));
-			/*
-			 * We need to make sure we haven't be attached
-			 * to a new eb.
-			 */
-			detach_page_private(page);
-		}
-
-		if (mapped)
-			spin_unlock(&page->mapping->private_lock);
-
-		/* One for when we allocated the page */
-		put_page(page);
-	}
+	for (i = 0; i < num_pages; i++)
+		detach_extent_buffer_page(eb, eb->pages[i]);
 }
 
 /*
We have attach_extent_buffer_page() and it get utilized in
btrfs_clone_extent_buffer() and alloc_extent_buffer().

But in btrfs_release_extent_buffer_pages() we manually call
detach_page_private().

This is fine for current code, but if we're going to support subpage
size, we will do a lot of more work other than just calling
detach_page_private().

This patch will extract the main work of btrfs_clone_extent_buffer()
into detach_extent_buffer_page() so that later subpage size support can
put their own code into them.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/extent_io.c | 58 +++++++++++++++++++-------------------------
 1 file changed, 25 insertions(+), 33 deletions(-)