
[linux-next] Fix failure to set F_SEAL_WRITE on shmem huge pages

Message ID 20220215073743.1769979-1-cgel.zte@gmail.com (mailing list archive)
State New
Series [linux-next] Fix failure to set F_SEAL_WRITE on shmem huge pages

Commit Message

CGEL Feb. 15, 2022, 7:37 a.m. UTC
From: wangyong <wang.yong12@zte.com.cn>

After enabling transparent hugepage support for tmpfs with:
 echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
docker fails when it tries to add F_SEAL_WRITE to a memfd; the call
returns EBUSY:
 fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1

In memfd_wait_for_pins(), the huge page has a page_count of 512 and a
page_mapcount of 0, so it fails the check:
 page_count(page) - page_mapcount(page) != 1
The page is not actually busy at this point; the huge page's order
should therefore be taken into account in the calculation.
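
A minimal reproducer sketch (not part of the patch; assumes
/sys/kernel/mm/transparent_hugepage/shmem_enabled is "always" and that
the kernel backs the memfd with a shmem huge page):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = memfd_create("seal-test", MFD_ALLOW_SEALING);
	char buf[4096];

	memset(buf, 0xaa, sizeof(buf));
	ftruncate(fd, 2 * 1024 * 1024);		/* PMD-sized file */
	write(fd, buf, sizeof(buf));		/* populate the shmem page cache */

	/* On affected kernels this reports EBUSY even with no mappings. */
	if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE) < 0)
		perror("F_ADD_SEALS");
	close(fd);
	return 0;
}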

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: wangyong <wang.yong12@zte.com.cn>
---
 mm/memfd.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

Comments

Andrew Morton Feb. 15, 2022, 10:12 p.m. UTC | #1
On Tue, 15 Feb 2022 07:37:43 +0000 cgel.zte@gmail.com wrote:

> From: wangyong <wang.yong12@zte.com.cn>
> 
> After enabling transparent hugepage support for tmpfs with:
>  echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
> docker fails when it tries to add F_SEAL_WRITE to a memfd; the call
> returns EBUSY:
>  fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1
> 
> In memfd_wait_for_pins(), the huge page has a page_count of 512 and a
> page_mapcount of 0, so it fails the check:
>  page_count(page) - page_mapcount(page) != 1
> The page is not actually busy at this point; the huge page's order
> should therefore be taken into account in the calculation.

What are the real-world runtime effects of this?

Do we think that this fix (or one similar to it) should be backported
into -stable kernels?

If "yes" then Mike's 5d752600a8c373 ("mm: restructure memfd code") will
get in the way because it moved lots of code around.

But then, that's four years old and perhaps that's far enough back in
time.
CGEL Feb. 16, 2022, 6:57 a.m. UTC | #2
On Tue, Feb 15, 2022 at 02:12:36PM -0800, Andrew Morton wrote:
> On Tue, 15 Feb 2022 07:37:43 +0000 cgel.zte@gmail.com wrote:
> 
> > From: wangyong <wang.yong12@zte.com.cn>
> > 
> > After enabling transparent hugepage support for tmpfs with:
> >  echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
> > docker fails when it tries to add F_SEAL_WRITE to a memfd; the call
> > returns EBUSY:
> >  fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1
> > 
> > In memfd_wait_for_pins(), the huge page has a page_count of 512 and a
> > page_mapcount of 0, so it fails the check:
> >  page_count(page) - page_mapcount(page) != 1
> > The page is not actually busy at this point; the huge page's order
> > should therefore be taken into account in the calculation.
> 
> What are the real-world runtime effects of this?
>
The problem I encountered is that the "docker-runc run busybox" command
fails and the container cannot be started. The following errors are
reported:
[pid  1412] fcntl(5, F_ADD_SEALS,F_SEAL_SEAL|F_SEAL_SHRINK|F_SEAL_GROW|F_SEAL_WRITE) = -1 EBUSY (Device or resource busy)
[pid  1412] close(5)                    = 0
[pid  1412] write(2, "nsenter: could not ensure we are"..., 74) = 74
...
[pid  1491] write(3, "\33[31mERRO\33[0m[0005] container_li"..., 166) = 166
[pid  1491] write(2, "container_linux.go:299: starting"..., 144container_linux.go:299: starting container process caused
"process_linux.go:245: running exec setns process for init caused \"exit statu" ) = 144

I'm not sure how this will affect other situations.
> Do we think that this fix (or one similar to it) should be backported
> into -stable kernels?
> 
> If "yes" then Mike's 5d752600a8c373 ("mm: restructure memfd code") will
> get in the way because it moved lots of code around.
> 
Yes, 4.14 does not have that commit, but 4.19 does.
In addition, Kirill A. Shutemov's 800d8c63b2e989c2e349632d1648119bf5862f01
("shmem: add huge pages support") is not in 4.4, but it is in 4.14.

> But then, that's four years old and perhaps that's far enough back in
> time.

Thanks.
Mike Kravetz Feb. 17, 2022, 1 a.m. UTC | #3
On 2/14/22 23:37, cgel.zte@gmail.com wrote:
> From: wangyong <wang.yong12@zte.com.cn>
> 
> After enabling transparent hugepage support for tmpfs with:
>  echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
> docker fails when it tries to add F_SEAL_WRITE to a memfd; the call
> returns EBUSY:
>  fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1
> 
> In memfd_wait_for_pins(), the huge page has a page_count of 512 and a
> page_mapcount of 0, so it fails the check:
>  page_count(page) - page_mapcount(page) != 1
> The page is not actually busy at this point; the huge page's order
> should therefore be taken into account in the calculation.
> 
> Reported-by: Zeal Robot <zealci@zte.com.cn>
> Signed-off-by: wangyong <wang.yong12@zte.com.cn>
> ---
>  mm/memfd.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/memfd.c b/mm/memfd.c
> index 9f80f162791a..26d1d390a22a 100644
> --- a/mm/memfd.c
> +++ b/mm/memfd.c
> @@ -31,6 +31,7 @@
>  static void memfd_tag_pins(struct xa_state *xas)
>  {
>  	struct page *page;
> +	int count = 0;
>  	unsigned int tagged = 0;
>  
>  	lru_add_drain();
> @@ -39,8 +40,12 @@ static void memfd_tag_pins(struct xa_state *xas)
>  	xas_for_each(xas, page, ULONG_MAX) {
>  		if (xa_is_value(page))
>  			continue;
> +
>  		page = find_subpage(page, xas->xa_index);
> -		if (page_count(page) - page_mapcount(page) > 1)
> +		count = page_count(page);
> +		if (PageTransCompound(page))

PageTransCompound() is true for hugetlb pages as well as THP.  And, hugetlb
pages will not have a ref per subpage as THP does.  So, I believe this will
break hugetlb seal usage.

I was trying to do some testing via the memfd selftests, but those have some
other issues for hugetlb that need to be fixed. :(
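
For readers following along, a kernel-style fragment sketching the
reference-accounting difference Mike describes (the helper name is
invented for illustration and is not part of the patch; assumes
CONFIG_TRANSPARENT_HUGEPAGE):

/*
 * Sketch only: the number of page-cache references an idle, unmapped
 * page is expected to hold differs between hugetlb and shmem THP, so
 * PageTransCompound() alone cannot decide how much to subtract.
 */
static int memfd_expected_cache_refs(struct page *page)
{
	struct page *head = compound_head(page);

	if (PageHuge(head))		/* hugetlb: one reference for the whole huge page */
		return 1;
	if (PageTransHuge(head))	/* shmem THP: one reference per subpage */
		return HPAGE_PMD_NR;
	return 1;			/* ordinary small page */
}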
Hugh Dickins Feb. 17, 2022, 1:25 a.m. UTC | #4
On Wed, 16 Feb 2022, Mike Kravetz wrote:
> On 2/14/22 23:37, cgel.zte@gmail.com wrote:
> > From: wangyong <wang.yong12@zte.com.cn>
> > 
> > After enabling transparent hugepage support for tmpfs with:
> >  echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
> > docker fails when it tries to add F_SEAL_WRITE to a memfd; the call
> > returns EBUSY:
> >  fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1
> > 
> > In memfd_wait_for_pins(), the huge page has a page_count of 512 and a
> > page_mapcount of 0, so it fails the check:
> >  page_count(page) - page_mapcount(page) != 1
> > The page is not actually busy at this point; the huge page's order
> > should therefore be taken into account in the calculation.
> > 
> > Reported-by: Zeal Robot <zealci@zte.com.cn>
> > Signed-off-by: wangyong <wang.yong12@zte.com.cn>
> > ---
> >  mm/memfd.c | 16 +++++++++++++---
> >  1 file changed, 13 insertions(+), 3 deletions(-)
> > 
> > diff --git a/mm/memfd.c b/mm/memfd.c
> > index 9f80f162791a..26d1d390a22a 100644
> > --- a/mm/memfd.c
> > +++ b/mm/memfd.c
> > @@ -31,6 +31,7 @@
> >  static void memfd_tag_pins(struct xa_state *xas)
> >  {
> >  	struct page *page;
> > +	int count = 0;
> >  	unsigned int tagged = 0;
> >  
> >  	lru_add_drain();
> > @@ -39,8 +40,12 @@ static void memfd_tag_pins(struct xa_state *xas)
> >  	xas_for_each(xas, page, ULONG_MAX) {
> >  		if (xa_is_value(page))
> >  			continue;
> > +
> >  		page = find_subpage(page, xas->xa_index);
> > -		if (page_count(page) - page_mapcount(page) > 1)
> > +		count = page_count(page);
> > +		if (PageTransCompound(page))
> 
> PageTransCompound() is true for hugetlb pages as well as THP.  And, hugetlb
> pages will not have a ref per subpage as THP does.  So, I believe this will
> break hugetlb seal usage.

Yes, I think so too; and that is not the only issue with the patch
(I don't think page_mapcount is enough, I had to use total_mapcount).

It's a good find, and thank you WangYong for the report.
I found the same issue when testing my MFD_HUGEPAGE patch last year,
and devised a patch to fix it (and keep MFD_HUGETLB working) then; but
never sent that in because there wasn't time to re-present MFD_HUGEPAGE.

I'm currently retesting my patch: just found something failing which
I thought should pass; but maybe I'm confused, or maybe the xarray is
working differently now.  I'm rushing to reply now because I don't want
others to waste their own time on it.

Andrew, please expect a replacement patch for this issue, but
I certainly have more testing and checking to do before sending.

Hugh

> 
> I was trying to do some testing via the memfd selftests, but those have some
> other issues for hugetlb that need to be fixed. :(
> -- 
> Mike Kravetz
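
As a hedged illustration of Hugh's point that page_mapcount() is not
enough for PTE-mapped THPs, a kernel-style fragment (the helper name is
invented for illustration; this is not Hugh's forthcoming patch, which
may differ):

/*
 * Sketch only: each PTE mapping of a THP subpage takes a reference on
 * the head page, but page_mapcount() of one subpage does not see the
 * other subpages' mappings.  total_mapcount() sums the compound
 * mapcount and every subpage's PTE mapcount, so it is the quantity to
 * net out against page_count() of the head.
 */
static int memfd_mapping_refs(struct page *page)
{
	struct page *head = compound_head(page);

	if (PageTransHuge(head) && !PageHuge(head))
		return total_mapcount(head);	/* PMD and PTE maps of all subpages */

	return page_mapcount(page);		/* small pages and hugetlb */
}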
Matthew Wilcox Feb. 17, 2022, 1:43 p.m. UTC | #5
On Wed, Feb 16, 2022 at 05:25:17PM -0800, Hugh Dickins wrote:
> On Wed, 16 Feb 2022, Mike Kravetz wrote:
> > On 2/14/22 23:37, cgel.zte@gmail.com wrote:
> > > From: wangyong <wang.yong12@zte.com.cn>
> > > 
> > > After enabling transparent hugepage support for tmpfs with:
> > >  echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
> > > docker fails when it tries to add F_SEAL_WRITE to a memfd; the call
> > > returns EBUSY:
> > >  fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1
> > > 
> > > In memfd_wait_for_pins(), the huge page has a page_count of 512 and a
> > > page_mapcount of 0, so it fails the check:
> > >  page_count(page) - page_mapcount(page) != 1
> > > The page is not actually busy at this point; the huge page's order
> > > should therefore be taken into account in the calculation.
> > > 
> > > Reported-by: Zeal Robot <zealci@zte.com.cn>
> > > Signed-off-by: wangyong <wang.yong12@zte.com.cn>
> > > ---
> > >  mm/memfd.c | 16 +++++++++++++---
> > >  1 file changed, 13 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/mm/memfd.c b/mm/memfd.c
> > > index 9f80f162791a..26d1d390a22a 100644
> > > --- a/mm/memfd.c
> > > +++ b/mm/memfd.c
> > > @@ -31,6 +31,7 @@
> > >  static void memfd_tag_pins(struct xa_state *xas)
> > >  {
> > >  	struct page *page;
> > > +	int count = 0;
> > >  	unsigned int tagged = 0;
> > >  
> > >  	lru_add_drain();
> > > @@ -39,8 +40,12 @@ static void memfd_tag_pins(struct xa_state *xas)
> > >  	xas_for_each(xas, page, ULONG_MAX) {
> > >  		if (xa_is_value(page))
> > >  			continue;
> > > +
> > >  		page = find_subpage(page, xas->xa_index);
> > > -		if (page_count(page) - page_mapcount(page) > 1)
> > > +		count = page_count(page);
> > > +		if (PageTransCompound(page))
> > 
> > PageTransCompound() is true for hugetlb pages as well as THP.  And, hugetlb
> > pages will not have a ref per subpage as THP does.  So, I believe this will
> > break hugetlb seal usage.
> 
> Yes, I think so too; and that is not the only issue with the patch
> (I don't think page_mapcount is enough, I had to use total_mapcount).
> 
> It's a good find, and thank you WangYong for the report.
> I found the same issue when testing my MFD_HUGEPAGE patch last year,
> and devised a patch to fix it (and keep MFD_HUGETLB working) then; but
> never sent that in because there wasn't time to re-present MFD_HUGEPAGE.
> 
> I'm currently retesting my patch: just found something failing which
> I thought should pass; but maybe I'm confused, or maybe the xarray is
> working differently now.  I'm rushing to reply now because I don't want
> others to waste their own time on it.

I did change how the XArray works for THP recently.

Kirill's original patch stored:

512: p
513: p+1
514: p+2
...
1023: p+511

A couple of years ago, I changed it to store:

512: p
513: p
514: p
...
1023: p

And in January, Linus merged the commit which changes it to:

512-575: p
576-639: (sibling of 512)
640-703: (sibling of 512)
...
960-1023: (sibling of 512)

That is, I removed a level of the tree and store sibling entries
rather than duplicate entries.  That wasn't for fun; I needed to do
that in order to make msync() work with large folios.  Commit
6b24ca4a1a8d has more detail and hopefully can inspire whatever
changes you need to make to your patch.
Hugh Dickins Feb. 27, 2022, 3 a.m. UTC | #6
On Thu, 17 Feb 2022, Matthew Wilcox wrote:
> On Wed, Feb 16, 2022 at 05:25:17PM -0800, Hugh Dickins wrote:
> > On Wed, 16 Feb 2022, Mike Kravetz wrote:
> > > On 2/14/22 23:37, cgel.zte@gmail.com wrote:
...
> > > > @@ -39,8 +40,12 @@ static void memfd_tag_pins(struct xa_state *xas)
> > > >  	xas_for_each(xas, page, ULONG_MAX) {
> > > >  		if (xa_is_value(page))
> > > >  			continue;
> > > > +
> > > >  		page = find_subpage(page, xas->xa_index);
> > > > -		if (page_count(page) - page_mapcount(page) > 1)
> > > > +		count = page_count(page);
> > > > +		if (PageTransCompound(page))
> > > 
> > > PageTransCompound() is true for hugetlb pages as well as THP.  And, hugetlb
> > > pages will not have a ref per subpage as THP does.  So, I believe this will
> > > break hugetlb seal usage.
> > 
> > Yes, I think so too; and that is not the only issue with the patch
> > (I don't think page_mapcount is enough, I had to use total_mapcount).

Mike, we had the same instinctive reaction to seeing a PageTransCompound
check in code also exposed to PageHuge pages; but in fact that seems to
have worked correctly - those hugetlbfs pages are hard to predict!
But it was not working on pte maps of THPs.

> > 
> > It's a good find, and thank you WangYong for the report.
> > I found the same issue when testing my MFD_HUGEPAGE patch last year,
> > and devised a patch to fix it (and keep MFD_HUGETLB working) then; but
> > never sent that in because there wasn't time to re-present MFD_HUGEPAGE.
> > 
> > I'm currently retesting my patch: just found something failing which
> > I thought should pass; but maybe I'm confused, or maybe the xarray is
> > working differently now.  I'm rushing to reply now because I don't want
> > others to waste their own time on it.
> 
> I did change how the XArray works for THP recently.
> 
> Kirill's original patch stored:
> 
> 512: p
> 513: p+1
> 514: p+2
> ...
> 1023: p+511
> 
> A couple of years ago, I changed it to store:
> 
> 512: p
> 513: p
> 514: p
> ...
> 1023: p
> 
> And in January, Linus merged the commit which changes it to:
> 
> 512-575: p
> 576-639: (sibling of 512)
> 640-703: (sibling of 512)
> ...
> 960-1023: (sibling of 512)
> 
> That is, I removed a level of the tree and store sibling entries
> rather than duplicate entries.  That wasn't for fun; I needed to do
> that in order to make msync() work with large folios.  Commit
> 6b24ca4a1a8d has more detail and hopefully can inspire whatever
> changes you need to make to your patch.

Matthew, thanks for the very detailed info, you shouldn't have taken
so much trouble over it: I knew you had done something of that kind,
and yes, that's where my suspicion lay at the time of writing.  But
you'll be relieved to know that the patch I wrote before your changes
turned out to be unaffected, and just as valid after your changes.

"just found something failing which I thought should pass" was me
forgetting, again and again, just how limited are the allowed
possibilities for F_SEAL_WRITE when mmaps are outstanding.

One thinks that a PROT_READ, MAP_SHARED mapping would be allowed;
but of course all the memfds are automatically O_RDWR, so mprotect
(no sealing hook) allows it to be changed to PROT_READ|PROT_WRITE,
so F_SEAL_WRITE is forbidden on any MAP_SHARED mapping: only allowed
on MAP_PRIVATEs.

I'll now re-read the commit message I wrote before, update if necessary,
and then send to Andrew, asking him to replace the one in this thread.

Hugh
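
To make Hugh's MAP_SHARED versus MAP_PRIVATE point concrete, a minimal
userspace sketch (illustrative only, not from any patch in this thread):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = memfd_create("seal-demo", MFD_ALLOW_SEALING);
	void *p;

	ftruncate(fd, 4096);

	/* Even a read-only shared mapping blocks F_SEAL_WRITE (EBUSY),
	 * because the memfd is O_RDWR and mprotect() could make it writable. */
	p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE) < 0)
		perror("F_SEAL_WRITE with MAP_SHARED mapping");
	munmap(p, 4096);

	/* A private mapping does not prevent sealing. */
	p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
	if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE) == 0)
		printf("F_SEAL_WRITE succeeded with MAP_PRIVATE mapping\n");
	munmap(p, 4096);

	close(fd);
	return 0;
}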

Patch

diff --git a/mm/memfd.c b/mm/memfd.c
index 9f80f162791a..26d1d390a22a 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -31,6 +31,7 @@ 
 static void memfd_tag_pins(struct xa_state *xas)
 {
 	struct page *page;
+	int count = 0;
 	unsigned int tagged = 0;
 
 	lru_add_drain();
@@ -39,8 +40,12 @@ static void memfd_tag_pins(struct xa_state *xas)
 	xas_for_each(xas, page, ULONG_MAX) {
 		if (xa_is_value(page))
 			continue;
+
 		page = find_subpage(page, xas->xa_index);
-		if (page_count(page) - page_mapcount(page) > 1)
+		count = page_count(page);
+		if (PageTransCompound(page))
+			count -= (1 << compound_order(compound_head(page))) - 1;
+		if (count - page_mapcount(page) > 1)
 			xas_set_mark(xas, MEMFD_TAG_PINNED);
 
 		if (++tagged % XA_CHECK_SCHED)
@@ -67,11 +72,12 @@ static int memfd_wait_for_pins(struct address_space *mapping)
 {
 	XA_STATE(xas, &mapping->i_pages, 0);
 	struct page *page;
-	int error, scan;
+	int error, scan, count;
 
 	memfd_tag_pins(&xas);
 
 	error = 0;
+	count = 0;
 	for (scan = 0; scan <= LAST_SCAN; scan++) {
 		unsigned int tagged = 0;
 
@@ -89,8 +95,12 @@ static int memfd_wait_for_pins(struct address_space *mapping)
 			bool clear = true;
 			if (xa_is_value(page))
 				continue;
+
 			page = find_subpage(page, xas.xa_index);
-			if (page_count(page) - page_mapcount(page) != 1) {
+			count = page_count(page);
+			if (PageTransCompound(page))
+				count -= (1 << compound_order(compound_head(page))) - 1;
+			if (count - page_mapcount(page) != 1) {
 				/*
 				 * On the last scan, we clean up all those tags
 				 * we inserted; but make a note that we still