[PATCHv6,05/37] thp: try to free page's buffers before attempt split

Message ID 20170126115819.58875-6-kirill.shutemov@linux.intel.com (mailing list archive)
State New, archived

Commit Message

Kirill A. Shutemov Jan. 26, 2017, 11:57 a.m. UTC
We want the page to be isolated from the rest of the system before splitting
it. For file pages we rely on the page count being 2 to make sure nobody else
uses the page: one pin for the caller, one for the radix-tree.

Filesystems with backing storage can have the page count increased further if
the page has buffers.

Let's try to free them before attempting the split, and remove one guarding
VM_BUG_ON_PAGE().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/buffer_head.h |  1 +
 mm/huge_memory.c            | 19 ++++++++++++++++++-
 2 files changed, 19 insertions(+), 1 deletion(-)
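
For illustration, a hypothetical caller-side sketch (not part of the patch) of
what the change buys: a caller that locks and pins a file-backed THP can ask
for a split and get -EBUSY back when the page's buffers cannot be released,
instead of hitting the removed VM_BUG_ON_PAGE(). try_split_file_thp() below is
made up for this note; lock_page(), split_huge_page() and compound_head() are
existing kernel helpers.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/huge_mm.h>

/*
 * Hypothetical caller-side sketch: the caller holds one reference (the
 * "pin for the caller" above), the radix tree holds the other; buffers,
 * if any, add more references and must be dropped before the split can
 * freeze the refcount.
 */
static int try_split_file_thp(struct page *page)
{
	struct page *head = compound_head(page);
	int ret;

	lock_page(head);		/* split requires the page locked */
	ret = split_huge_page(head);	/* 0 on success, -EBUSY if extra pins
					 * (e.g. unreleasable buffers) remain */
	unlock_page(head);
	return ret;
}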

Comments

Matthew Wilcox Feb. 9, 2017, 8:14 p.m. UTC | #1
On Thu, Jan 26, 2017 at 02:57:47PM +0300, Kirill A. Shutemov wrote:
> @@ -2146,6 +2146,23 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  			goto out;
>  		}
>  
> +		/* Try to free buffers before attempt split */
> +		if (!PageSwapBacked(head) && PagePrivate(page)) {
> +			/*
> +			 * We cannot trigger writeback from here due possible
> +			 * recursion if triggered from vmscan, only wait.
> +			 *
> +			 * Caller can trigger writeback it on its own, if safe.
> +			 */

It took me a few reads to get this.  May I suggest:

		/*
		 * Cannot split a page with buffers.  If the caller has
		 * already started writeback, we can wait for it to finish,
		 * but we cannot start writeback if we were called from vmscan
		 */
> +		if (!PageSwapBacked(head) && PagePrivate(page)) {

Also, it looks weird to test PageSwapBacked of *head* and PagePrivate
of *page*.  I think it's correct, but it still looks weird.

Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
Kirill A. Shutemov Feb. 13, 2017, 2:32 p.m. UTC | #2
On Thu, Feb 09, 2017 at 12:14:16PM -0800, Matthew Wilcox wrote:
> On Thu, Jan 26, 2017 at 02:57:47PM +0300, Kirill A. Shutemov wrote:
> > @@ -2146,6 +2146,23 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >  			goto out;
> >  		}
> >  
> > +		/* Try to free buffers before attempt split */
> > +		if (!PageSwapBacked(head) && PagePrivate(page)) {
> > +			/*
> > +			 * We cannot trigger writeback from here due possible
> > +			 * recursion if triggered from vmscan, only wait.
> > +			 *
> > +			 * Caller can trigger writeback it on its own, if safe.
> > +			 */
> 
> It took me a few reads to get this.  May I suggest:
> 
> 		/*
> 		 * Cannot split a page with buffers.  If the caller has
> 		 * already started writeback, we can wait for it to finish,
> 		 * but we cannot start writeback if we were called from vmscan
> 		 */

Yeah, that's better.

> > +		if (!PageSwapBacked(head) && PagePrivate(page)) {
> 
> Also, it looks weird to test PageSwapBacked of *head* and PagePrivate
> of *page*.  I think it's correct, but it still looks weird.

I'll change this.
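
Presumably that means testing the head page for both flags, roughly as below
(a guess at the follow-up, not taken from the posted series):

		if (!PageSwapBacked(head) && PagePrivate(head)) {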

> Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>

Thanks!

Patch

diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index d67ab83823ad..fd4134ce9c54 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -400,6 +400,7 @@  extern int __set_page_dirty_buffers(struct page *page);
 #else /* CONFIG_BLOCK */
 
 static inline void buffer_init(void) {}
+static inline int page_has_buffers(struct page *page) { return 0; }
 static inline int try_to_free_buffers(struct page *page) { return 1; }
 static inline int inode_has_buffers(struct inode *inode) { return 0; }
 static inline void invalidate_inode_buffers(struct inode *inode) {}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 89819fe4debc..55aee62e8444 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -30,6 +30,7 @@ 
 #include <linux/userfaultfd_k.h>
 #include <linux/page_idle.h>
 #include <linux/shmem_fs.h>
+#include <linux/buffer_head.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -2117,7 +2118,6 @@  int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
 
 	if (PageAnon(head)) {
@@ -2146,6 +2146,23 @@  int split_huge_page_to_list(struct page *page, struct list_head *list)
 			goto out;
 		}
 
+		/* Try to free buffers before attempt split */
+		if (!PageSwapBacked(head) && PagePrivate(page)) {
+			/*
+			 * We cannot trigger writeback from here due possible
+			 * recursion if triggered from vmscan, only wait.
+			 *
+			 * Caller can trigger writeback it on its own, if safe.
+			 */
+			wait_on_page_writeback(head);
+
+			if (page_has_buffers(head) && !try_to_release_page(head,
+						GFP_KERNEL)) {
+				ret = -EBUSY;
+				goto out;
+			}
+		}
+
 		/* Addidional pin from radix tree */
 		extra_pins = 1;
 		anon_vma = NULL;
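
For context on the buffer_head.h hunk: under CONFIG_BLOCK, page_has_buffers()
is already provided as a macro over PagePrivate() (shown approximately below),
so only the !CONFIG_BLOCK side needs the new stub for the call added to
split_huge_page_to_list() to keep building when block support is disabled.

/* CONFIG_BLOCK side, for contrast with the stub added above (approximate) */
#define page_has_buffers(page)	PagePrivate(page)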