hugetlbfs: move resv_map to hugetlbfs_inode_info

Message ID 20190412040240.29861-1-yuyufen@huawei.com (mailing list archive)
State New, archived
Series hugetlbfs: move resv_map to hugetlbfs_inode_info

Commit Message

Yufen Yu April 12, 2019, 4:02 a.m. UTC
Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
fixed a memory leak of the resv_map. It allocates a resv_map only when
the inode mode is S_ISREG or S_ISLNK, avoiding setting
i_mapping->private_data on what may become the bdev inode's mapping.

However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
free or modify i_mapping->private_data, which is owned by the bdev
inode; this is not expected!

Fortunately, the bdev filesystem does not actually use
i_mapping->private_data, so this bug has not caused serious impact. But
we cannot be sure that bdev will never use that field in the future, and
hugetlbfs should not depend on the bdev filesystem's implementation.

Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
may be the more reasonable place for it.

Fixes: 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
---
 fs/hugetlbfs/inode.c    | 11 ++++++++---
 include/linux/hugetlb.h |  1 +
 mm/hugetlb.c            |  2 +-
 3 files changed, 10 insertions(+), 4 deletions(-)

Comments

Mike Kravetz April 12, 2019, 11:40 p.m. UTC | #1
On 4/11/19 9:02 PM, Yufen Yu wrote:
> Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
...
> However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
> free or modify i_mapping->private_data, which is owned by the bdev
> inode; this is not expected!
...
> Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
> may be the more reasonable place for it.

Your patches force me to consider these potential issues.  Thank you!

The root of all these problems (including the original leak) is that the
open of a block special inode will result in bd_acquire() overwriting the
value of inode->i_mapping.  Since hugetlbfs inodes normally contain a
resv_map at inode->i_mapping->private_data, a memory leak occurs if we do
not free the initially allocated resv_map.  In addition, when the
inode is evicted/destroyed inode->i_mapping may point to an address space
not associated with the hugetlbfs inode.  If code assumes inode->i_mapping
points to hugetlbfs inode address space at evict time, there may be bad
data references or worse.
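
For reference, this is roughly what bd_acquire() does when a block
special file is opened (a heavily abridged sketch of fs/block_dev.c from
this era; locking details and the fast path are omitted):

	/* Abridged sketch of bd_acquire(); not the complete function. */
	static struct block_device *bd_acquire(struct inode *inode)
	{
		struct block_device *bdev;

		bdev = bdget(inode->i_rdev);	/* look up/create the bdev inode */
		if (bdev) {
			spin_lock(&bdev_lock);
			if (!inode->i_bdev) {
				bdgrab(bdev);
				inode->i_bdev = bdev;
				/* The overwrite described above: the filesystem
				 * inode now points at the bdev address space. */
				inode->i_mapping = bdev->bd_inode->i_mapping;
			}
			spin_unlock(&bdev_lock);
		}
		return bdev;
	}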

This specific part of the patch made me think,

> @@ -497,12 +497,15 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>  static void hugetlbfs_evict_inode(struct inode *inode)
>  {
>  	struct resv_map *resv_map;
> +	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
>  
>  	remove_inode_hugepages(inode, 0, LLONG_MAX);
> -	resv_map = (struct resv_map *)inode->i_mapping->private_data;
> +	resv_map = info->resv_map;
>  	/* root inode doesn't have the resv_map, so we should check it */
> -	if (resv_map)
> +	if (resv_map) {
>  		resv_map_release(&resv_map->refs);
> +		info->resv_map = NULL;
> +	}
>  	clear_inode(inode);
>  }

If inode->i_mapping may not be associated with the hugetlbfs inode, then
remove_inode_hugepages() will also have problems.  It will want to operate
on the address space associated with the inode.  So, there are more issues
than just the resv_map.  When I looked at the first few lines of
remove_inode_hugepages(), I was surprised to see:

	struct address_space *mapping = &inode->i_data;

So remove_inode_hugepages is explicitly using the original address space
that is embedded in the inode.  As a result, it is not impacted by changes
to inode->i_mapping.  Using git history I was unable to determine why
remove_inode_hugepages is the only place in hugetlbfs code doing this.

With this in mind, a simple change like the following will fix the original
leak issue as well as the potential issues mentioned in this patch.

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 53ea3cef526e..9f0719bad46f 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -511,6 +511,11 @@ static void hugetlbfs_evict_inode(struct inode *inode)
 {
 	struct resv_map *resv_map;
 
+	/*
+	 * Make sure we are operating on original hugetlbfs address space.
+	 */
+	inode->i_mapping = &inode->i_data;
+
 	remove_inode_hugepages(inode, 0, LLONG_MAX);
 	resv_map = (struct resv_map *)inode->i_mapping->private_data;
 	/* root inode doesn't have the resv_map, so we should check it */


I don't know why hugetlbfs code would ever want to operate on any address
space but the one embedded within the inode.  However, my understanding of
the vfs layer is somewhat limited.  I'm wondering if the hugetlbfs code
(helper routines mostly) should perhaps use &inode->i_data instead of
inode->i_mapping.  Does it ever make sense for hugetlbfs code to operate
on inode->i_mapping if inode->i_mapping != &inode->i_data ?
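
Concretely, the suggestion amounts to something like the following helper
(a hypothetical sketch to illustrate the idea, not code from the tree):

	/*
	 * Hypothetical: the address space embedded in the inode always
	 * belongs to hugetlbfs, no matter what inode->i_mapping was later
	 * changed to point at.
	 */
	static inline struct address_space *hugetlbfs_mapping(struct inode *inode)
	{
		return &inode->i_data;
	}
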
Yufen Yu April 13, 2019, 11:57 a.m. UTC | #2
On 2019/4/13 7:40, Mike Kravetz wrote:
> This specific part of the patch made me think,
>
>> @@ -497,12 +497,15 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>>   static void hugetlbfs_evict_inode(struct inode *inode)
>>   {
>>   	struct resv_map *resv_map;
>> +	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
>>   
>>   	remove_inode_hugepages(inode, 0, LLONG_MAX);
>> -	resv_map = (struct resv_map *)inode->i_mapping->private_data;
>> +	resv_map = info->resv_map;
>>   	/* root inode doesn't have the resv_map, so we should check it */
>> -	if (resv_map)
>> +	if (resv_map) {
>>   		resv_map_release(&resv_map->refs);
>> +		info->resv_map = NULL;
>> +	}
>>   	clear_inode(inode);
>>   }
> If inode->i_mapping may not be associated with the hugetlbfs inode, then
> remove_inode_hugepages() will also have problems.  It will want to operate
> on the address space associated with the inode.  So, there are more issues
> than just the resv_map.  When I looked at the first few lines of
> remove_inode_hugepages(), I was surprised to see:
>
> 	struct address_space *mapping = &inode->i_data;

Good catch!

> So remove_inode_hugepages is explicitly using the original address space
> that is embedded in the inode.  As a result, it is not impacted by changes
> to inode->i_mapping.  Using git history I was unable to determine why
> remove_inode_hugepages is the only place in hugetlbfs code doing this.
>
> With this in mind, a simple change like the following will fix the original
> leak issue as well as the potential issues mentioned in this patch.
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 53ea3cef526e..9f0719bad46f 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -511,6 +511,11 @@ static void hugetlbfs_evict_inode(struct inode *inode)
>   {
>   	struct resv_map *resv_map;
>   
> +	/*
> +	 * Make sure we are operating on original hugetlbfs address space.
> +	 */
> +	inode->i_mapping = &inode->i_data;
> +
>   	remove_inode_hugepages(inode, 0, LLONG_MAX);
>   	resv_map = (struct resv_map *)inode->i_mapping->private_data;
>   	/* root inode doesn't have the resv_map, so we should check it */
>
>
> I don't know why hugetlbfs code would ever want to operate on any address
> space but the one embedded within the inode.  However, my understanding of
> the vfs layer is somewhat limited.  I'm wondering if the hugetlbfs code
> (helper routines mostly) should perhaps use &inode->i_data instead of
> inode->i_mapping.  Does it ever make sense for hugetlbfs code to operate
> on inode->i_mapping if inode->i_mapping != &inode->i_data ?

I also feel very confused.

Yufen
Thanks.
Naoya Horiguchi April 15, 2019, 6:16 a.m. UTC | #3
On Fri, Apr 12, 2019 at 04:40:01PM -0700, Mike Kravetz wrote:
> On 4/11/19 9:02 PM, Yufen Yu wrote:
> > Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
> ...
> > However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
> > free or modify i_mapping->private_data, which is owned by the bdev
> > inode; this is not expected!
> ...
> > Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
> > may be the more reasonable place for it.
> 
> Your patches force me to consider these potential issues.  Thank you!
> 
> The root of all these problems (including the original leak) is that the
> open of a block special inode will result in bd_acquire() overwriting the
> value of inode->i_mapping.  Since hugetlbfs inodes normally contain a
> resv_map at inode->i_mapping->private_data, a memory leak occurs if we do
> not free the initially allocated resv_map.  In addition, when the
> inode is evicted/destroyed inode->i_mapping may point to an address space
> not associated with the hugetlbfs inode.  If code assumes inode->i_mapping
> points to hugetlbfs inode address space at evict time, there may be bad
> data references or worse.

Let me ask a kind of elementary question: is there any good reason/purpose
to create and use block special files on hugetlbfs?  I never heard about
such usecases.  I guess that the conflict over the usage of ->i_mapping was
discovered only recently because block special files on hugetlbfs were
just not considered, or well defined, until recently.  So I think that we
might do better to begin by defining it first.

I tried the procedure described in commit 58b6e5e8f1ad ("hugetlbfs: fix
memory leak for resv_map") and failed to open the block special file,
getting ENXIO. So I'm not entirely sure what there is to solve.
I tried a similar test on tmpfs (a good reference for hugetlbfs) and that
also failed to open the block file (but with EACCES), so simply fixing it
the same way could be an option if there's no reasonable usecase and we
just want to fix the memory leak.

> 
> This specific part of the patch made me think,
> 
> > @@ -497,12 +497,15 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
> >  static void hugetlbfs_evict_inode(struct inode *inode)
> >  {
> >  	struct resv_map *resv_map;
> > +	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
> >  
> >  	remove_inode_hugepages(inode, 0, LLONG_MAX);
> > -	resv_map = (struct resv_map *)inode->i_mapping->private_data;
> > +	resv_map = info->resv_map;
> >  	/* root inode doesn't have the resv_map, so we should check it */
> > -	if (resv_map)
> > +	if (resv_map) {
> >  		resv_map_release(&resv_map->refs);
> > +		info->resv_map = NULL;
> > +	}
> >  	clear_inode(inode);
> >  }
> 
> If inode->i_mapping may not be associated with the hugetlbfs inode, then
> remove_inode_hugepages() will also have problems.  It will want to operate
> on the address space associated with the inode.  So, there are more issues
> than just the resv_map.  When I looked at the first few lines of
> remove_inode_hugepages(), I was surprised to see:
> 
> 	struct address_space *mapping = &inode->i_data;
> 
> So remove_inode_hugepages is explicitly using the original address space
> that is embedded in the inode.  As a result, it is not impacted by changes
> to inode->i_mapping.  Using git history I was unable to determine why
> remove_inode_hugepages is the only place in hugetlbfs code doing this.
> 
> With this in mind, a simple change like the following will fix the original
> leak issue as well as the potential issues mentioned in this patch.
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 53ea3cef526e..9f0719bad46f 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -511,6 +511,11 @@ static void hugetlbfs_evict_inode(struct inode *inode)
>  {
>  	struct resv_map *resv_map;
>  
> +	/*
> +	 * Make sure we are operating on original hugetlbfs address space.
> +	 */
> +	inode->i_mapping = &inode->i_data;
> +
>  	remove_inode_hugepages(inode, 0, LLONG_MAX);
>  	resv_map = (struct resv_map *)inode->i_mapping->private_data;
>  	/* root inode doesn't have the resv_map, so we should check it */
> 
> 
> I don't know why hugetlbfs code would ever want to operate on any address
> space but the one embedded within the inode.  However, my understanding of
> the vfs layer is somewhat limited.  I'm wondering if the hugetlbfs code
> (helper routines mostly) should perhaps use &inode->i_data instead of
> inode->i_mapping.  Does it ever make sense for hugetlbfs code to operate
> on inode->i_mapping if inode->i_mapping != &inode->i_data ?

I'm not a fs expert, but &inode->i_data seems the safer pointer for
reaching the struct address_space, because it is embedded in the inode
and always exists.

Thanks,
Naoya Horiguchi
Michal Hocko April 15, 2019, 9:15 a.m. UTC | #4
On Mon 15-04-19 06:16:15, Naoya Horiguchi wrote:
> On Fri, Apr 12, 2019 at 04:40:01PM -0700, Mike Kravetz wrote:
> > On 4/11/19 9:02 PM, Yufen Yu wrote:
> > > Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
> > ...
> > > However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
> > > free or modify i_mapping->private_data, which is owned by the bdev
> > > inode; this is not expected!
> > ...
> > > Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
> > > may be the more reasonable place for it.
> > 
> > Your patches force me to consider these potential issues.  Thank you!
> > 
> > The root of all these problems (including the original leak) is that the
> > open of a block special inode will result in bd_acquire() overwriting the
> > value of inode->i_mapping.  Since hugetlbfs inodes normally contain a
> > resv_map at inode->i_mapping->private_data, a memory leak occurs if we do
> > not free the initially allocated resv_map.  In addition, when the
> > inode is evicted/destroyed inode->i_mapping may point to an address space
> > not associated with the hugetlbfs inode.  If code assumes inode->i_mapping
> > points to hugetlbfs inode address space at evict time, there may be bad
> > data references or worse.
> 
> Let me ask a kind of elementary question: is there any good reason/purpose
> to create and use block special files on hugetlbfs?  I never heard about
> such usecases.  I guess that the conflict over the usage of ->i_mapping was
> discovered only recently because block special files on hugetlbfs were
> just not considered, or well defined, until recently.  So I think that we
> might do better to begin by defining it first.

I absolutely agree. Hugetlbfs is overly complicated even without that.
So if this is merely a "we have tried it and it has blown up" kind of thing,
then just refuse to create blockdev files or document it as undefined.
You need root to do so anyway.
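
Refusing creation could be a one-line check; for illustration, a
hypothetical and untested sketch against hugetlbfs_mknod() in
fs/hugetlbfs/inode.c:

	static int hugetlbfs_mknod(struct inode *dir, struct dentry *dentry,
				   umode_t mode, dev_t dev)
	{
		/* Illustrative only; the errno choice would be a policy decision. */
		if (S_ISBLK(mode))
			return -EPERM;

		/* ... existing hugetlbfs_mknod() body unchanged ... */
	}
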
Mike Kravetz April 15, 2019, 5:11 p.m. UTC | #5
On 4/15/19 2:15 AM, Michal Hocko wrote:
> On Mon 15-04-19 06:16:15, Naoya Horiguchi wrote:
>> On Fri, Apr 12, 2019 at 04:40:01PM -0700, Mike Kravetz wrote:
>>> On 4/11/19 9:02 PM, Yufen Yu wrote:
>>>> Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
>>> ...
>>>> However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
>>>> free or modify i_mapping->private_data, which is owned by the bdev
>>>> inode; this is not expected!
>>> ...
>>>> Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
>>>> may be the more reasonable place for it.
>>>
>>> Your patches force me to consider these potential issues.  Thank you!
>>>
>>> The root of all these problems (including the original leak) is that the
>>> open of a block special inode will result in bd_acquire() overwriting the
>>> value of inode->i_mapping.  Since hugetlbfs inodes normally contain a
>>> resv_map at inode->i_mapping->private_data, a memory leak occurs if we do
>>> not free the initially allocated resv_map.  In addition, when the
>>> inode is evicted/destroyed inode->i_mapping may point to an address space
>>> not associated with the hugetlbfs inode.  If code assumes inode->i_mapping
>>> points to hugetlbfs inode address space at evict time, there may be bad
>>> data references or worse.
>>
>> Let me ask a kind of elementary question: is there any good reason/purpose
>> to create and use block special files on hugetlbfs?  I never heard about
>> such usecases.

I am not aware of this as a common use case.  Yufen Yu may be able to provide
more details about how the issue was discovered.  My guess is that it was
discovered via code inspection.

>>                 I guess that the conflict over the usage of ->i_mapping was
>> discovered only recently because block special files on hugetlbfs were
>> just not considered, or well defined, until recently.  So I think that we
>> might do better to begin by defining it first.

Unless I am mistaken, this is just like creating a device special file
in any other filesystem.  Correct?  hugetlbfs is just some place for the
inode/file to reside.  What happens when you open/ioctl/close/etc the file
is really dependent on the vfs layer and underlying driver.

> I absolutely agree. Hugetlbfs is overly complicated even without that.
> So if this is merely a "we have tried it and it has blown up" kind of thing,
> then just refuse to create blockdev files or document it as undefined.
> You need root to do so anyway.

Can we just refuse to create device special files in hugetlbfs?  Do we need
to worry about breaking any potential users?  I honestly do not know if anyone
does this today.  However, if they did I believe things would "just work".
The only known issue is leaking a resv_map structure when the inode is
destroyed.  I doubt anyone would notice that leak today.

Let me do a little more research.  I think this can all be cleaned up by
making hugetlbfs always operate on the address space embedded in the inode.
If nothing else, a change or explanation should be added as to why most code
operates on inode->i_mapping and one place operates on &inode->i_data.
Naoya Horiguchi April 15, 2019, 11:59 p.m. UTC | #6
On Mon, Apr 15, 2019 at 10:11:39AM -0700, Mike Kravetz wrote:
> On 4/15/19 2:15 AM, Michal Hocko wrote:
> > On Mon 15-04-19 06:16:15, Naoya Horiguchi wrote:
> >> On Fri, Apr 12, 2019 at 04:40:01PM -0700, Mike Kravetz wrote:
> >>> On 4/11/19 9:02 PM, Yufen Yu wrote:
> >>>> Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
> >>> ...
> >>>> However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
> >>>> free or modify i_mapping->private_data, which is owned by the bdev
> >>>> inode; this is not expected!
> >>> ...
> >>>> Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
> >>>> may be the more reasonable place for it.
> >>>
> >>> Your patches force me to consider these potential issues.  Thank you!
> >>>
> >>> The root of all these problems (including the original leak) is that the
> >>> open of a block special inode will result in bd_acquire() overwriting the
> >>> value of inode->i_mapping.  Since hugetlbfs inodes normally contain a
> >>> resv_map at inode->i_mapping->private_data, a memory leak occurs if we do
> >>> not free the initially allocated resv_map.  In addition, when the
> >>> inode is evicted/destroyed inode->i_mapping may point to an address space
> >>> not associated with the hugetlbfs inode.  If code assumes inode->i_mapping
> >>> points to hugetlbfs inode address space at evict time, there may be bad
> >>> data references or worse.
> >>
> >> Let me ask a kind of elementary question: is there any good reason/purpose
> >> to create and use block special files on hugetlbfs?  I never heard about
> >> such usecases.
> 
> I am not aware of this as a common use case.  Yufen Yu may be able to provide
> more details about how the issue was discovered.  My guess is that it was
> discovered via code inspection.
> 
> >>                 I guess that the conflict over the usage of ->i_mapping was
> >> discovered only recently because block special files on hugetlbfs were
> >> just not considered, or well defined, until recently.  So I think that we
> >> might do better to begin by defining it first.
> 
> Unless I am mistaken, this is just like creating a device special file
> in any other filesystem.  Correct?  hugetlbfs is just some place for the
> inode/file to reside.  What happens when you open/ioctl/close/etc the file
> is really dependent on the vfs layer and underlying driver.
> 

OK. Generally speaking, "special files just work even on hugetlbfs" sounds
fine to me, provided it actually works properly.

> > I absolutely agree. Hugetlbfs is overly complicated even without that.
> > So if this is merely a "we have tried it and it has blown up" kind of thing,
> > then just refuse to create blockdev files or document it as undefined.
> > You need root to do so anyway.
> 
> Can we just refuse to create device special files in hugetlbfs?  Do we need
> to worry about breaking any potential users?  I honestly do not know if anyone
> does this today.  However, if they did I believe things would "just work".
> The only known issue is leaking a resv_map structure when the inode is
> destroyed.  I doubt anyone would notice that leak today.

Thanks for the explanation, so that's unclear for now.

> 
> Let me do a little more research.  I think this can all be cleaned up by
> making hugetlbfs always operate on the address space embedded in the inode.
> If nothing else, a change or explanation should be added as to why most code
> operates on inode->i_mapping and one place operates on &inode->i_data.

Sounds nice, thank you.

(Just sharing a point, not intending to block the fix ...)
My remaining concern is that this problem might not be hugetlbfs specific,
because what triggers the issue seems to be the usage of inode->i_mapping.
bd_acquire() is callable from any filesystem, so I'm wondering whether we
have something to prevent this kind of issue in general?

Thanks,
Naoya Horiguchi
Mike Kravetz April 16, 2019, 12:37 a.m. UTC | #7
On 4/15/19 4:59 PM, Naoya Horiguchi wrote:
> On Mon, Apr 15, 2019 at 10:11:39AM -0700, Mike Kravetz wrote:
>> Let me do a little more research.  I think this can all be cleaned up by
>> making hugetlbfs always operate on the address space embedded in the inode.
>> If nothing else, a change or explanation should be added as to why most code
>> operates on inode->i_mapping and one place operates on &inode->i_data.
> 
> Sounds nice, thank you.
> 
> (Just sharing a point, not intending to block the fix ...)
> My remaining concern is that this problem might not be hugetlbfs specific,
> because what triggers the issue seems to be the usage of inode->i_mapping.
> bd_acquire() is callable from any filesystem, so I'm wondering whether we
> have something to prevent this kind of issue in general?

I have gone through most of the filesystems and have not found any others
where this may be an issue.  From what I have seen, it is fairly common in
filesystem evict_inode routines to explicitly use &inode->i_data to get at
the address space passed to the truncate routine.  As mentioned, even
hugetlbfs does this in remove_inode_hugepages(), which was previously part
of the evict_inode routine.  In tmpfs, the evict_inode routine cleverly
checks the address space ops to determine if the address space is associated
with tmpfs.  If not, it does not call the truncate routine.
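
For comparison, the tmpfs check looks roughly like this (an abridged
sketch of shmem_evict_inode() in mm/shmem.c; most of the body omitted):

	static void shmem_evict_inode(struct inode *inode)
	{
		struct shmem_inode_info *info = SHMEM_I(inode);

		/* Only truncate if i_mapping still points at tmpfs's own
		 * address space. */
		if (inode->i_mapping->a_ops == &shmem_aops) {
			shmem_unacct_size(info->flags, inode->i_size);
			inode->i_size = 0;
			shmem_truncate_range(inode, 0, (loff_t)-1);
			/* ... */
		}
		/* ... */
		clear_inode(inode);
	}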

One of the things different about hugetlbfs is that the resv_map structure
hangs off the address space within the inode.  Yufen Yu's approach was to
move the resv_map pointer into the hugetlbfs inode extension
hugetlbfs_inode_info.  With that change, we could make the hugetlbfs
evict_inode routine look more like the tmpfs routine.  I do not really like
the idea of increasing the size of hugetlbfs inodes, as we already have a
place to store the pointer.  In addition, the resv_map is used with the
mappings within the address space, so one could argue that it makes sense
for them to be together.
Michal Hocko April 16, 2019, 6:50 a.m. UTC | #8
On Mon 15-04-19 10:11:39, Mike Kravetz wrote:
> On 4/15/19 2:15 AM, Michal Hocko wrote:
> > On Mon 15-04-19 06:16:15, Naoya Horiguchi wrote:
> >> On Fri, Apr 12, 2019 at 04:40:01PM -0700, Mike Kravetz wrote:
> >>> On 4/11/19 9:02 PM, Yufen Yu wrote:
> >>>> Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
> >>> ...
> >>>> However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
> >>>> free or modify i_mapping->private_data, which is owned by the bdev
> >>>> inode; this is not expected!
> >>> ...
> >>>> Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
> >>>> may be the more reasonable place for it.
> >>>
> >>> Your patches force me to consider these potential issues.  Thank you!
> >>>
> >>> The root of all these problems (including the original leak) is that the
> >>> open of a block special inode will result in bd_acquire() overwriting the
> >>> value of inode->i_mapping.  Since hugetlbfs inodes normally contain a
> >>> resv_map at inode->i_mapping->private_data, a memory leak occurs if we do
> >>> not free the initially allocated resv_map.  In addition, when the
> >>> inode is evicted/destroyed inode->i_mapping may point to an address space
> >>> not associated with the hugetlbfs inode.  If code assumes inode->i_mapping
> >>> points to hugetlbfs inode address space at evict time, there may be bad
> >>> data references or worse.
> >>
> >> Let me ask a kind of elementary question: is there any good reason/purpose
> >> to create and use block special files on hugetlbfs?  I never heard about
> >> such usecases.
> 
> I am not aware of this as a common use case.  Yufen Yu may be able to provide
> more details about how the issue was discovered.  My guess is that it was
> discovered via code inspection.
> 
> >>                 I guess that the conflict over the usage of ->i_mapping was
> >> discovered only recently because block special files on hugetlbfs were
> >> just not considered, or well defined, until recently.  So I think that we
> >> might do better to begin by defining it first.
> 
> Unless I am mistaken, this is just like creating a device special file
> in any other filesystem.  Correct?  hugetlbfs is just some place for the
> inode/file to reside.  What happens when you open/ioctl/close/etc the file
> is really dependent on the vfs layer and underlying driver.
> 
> > I absolutely agree. Hugetlbfs is overly complicated even without that.
> > So if this is merely a "we have tried it and it has blown up" kind of thing,
> > then just refuse to create blockdev files or document it as undefined.
> > You need root to do so anyway.
> 
> Can we just refuse to create device special files in hugetlbfs?  Do we need
> to worry about breaking any potential users?  I honestly do not know if anyone
> does this today.  However, if they did I believe things would "just work".

But why would anybody do something like that? Is there any actual
semantic advantage to creating device files on hugetlbfs? I would be
worried that some confused application might expect e.g. hugetlb-backed
pagecache for a block device or something like that. I wouldn't be too
worried about outright disallowing this, and would only allow it for an
explicit and reasonable usecase.

> The only known issue is leaking a resv_map structure when the inode is
> destroyed.  I doubt anyone would notice that leak today.
> 
> Let me do a little more research.  I think this can all be cleaned up by
> making hugetlbfs always operate on the address space embedded in the inode.
> If nothing else, a change or explanation should be added as to why most code
> operates on inode->i_mapping and one place operates on &inode->i_data.

Yes, that makes sense.

Thanks!
Yufen Yu April 16, 2019, 12:57 p.m. UTC | #9
On 2019/4/16 1:11, Mike Kravetz wrote:
> On 4/15/19 2:15 AM, Michal Hocko wrote:
>> On Mon 15-04-19 06:16:15, Naoya Horiguchi wrote:
>>> On Fri, Apr 12, 2019 at 04:40:01PM -0700, Mike Kravetz wrote:
>>>> On 4/11/19 9:02 PM, Yufen Yu wrote:
>>>>> Commit 58b6e5e8f1ad ("hugetlbfs: fix memory leak for resv_map")
>>>> ...
>>>>> However, for an inode whose mode is S_ISBLK, hugetlbfs_evict_inode() may
>>>>> free or modify i_mapping->private_data, which is owned by the bdev
>>>>> inode; this is not expected!
>>>> ...
>>>>> Fix the problem by moving the resv_map to hugetlbfs_inode_info, which
>>>>> may be the more reasonable place for it.
>>>> Your patches force me to consider these potential issues.  Thank you!
>>>>
>>>> The root of all these problems (including the original leak) is that the
>>>> open of a block special inode will result in bd_acquire() overwriting the
>>>> value of inode->i_mapping.  Since hugetlbfs inodes normally contain a
>>>> resv_map at inode->i_mapping->private_data, a memory leak occurs if we do
>>>> not free the initially allocated resv_map.  In addition, when the
>>>> inode is evicted/destroyed inode->i_mapping may point to an address space
>>>> not associated with the hugetlbfs inode.  If code assumes inode->i_mapping
>>>> points to hugetlbfs inode address space at evict time, there may be bad
>>>> data references or worse.
>>> Let me ask a kind of elementary question: is there any good reason/purpose
>>> to create and use block special files on hugetlbfs?  I never heard about
>>> such usecases.
> I am not aware of this as a common use case.  Yufen Yu may be able to provide
> more details about how the issue was discovered.  My guess is that it was
> discovered via code inspection.

In fact, we discovered the issue by running syzkaller. The program looks like:

15:39:59 executing program 0:
r0 = openat(0xffffffffffffff9c, &(0x7f0000000040)='./file0/file0\x00', 0x44000, 0x1)
r1 = syz_open_dev$vcsn(&(0x7f00000000c0)='/dev/vcs#\x00', 0x3f, 0x202000)
renameat2(r0, &(0x7f0000000140)='./file0\x00', r0, &(0x7f0000000180)='./file0/file0/file0\x00', 0x4)
mkdir(&(0x7f0000000300)='./file0\x00', 0x0)
mount(0x0, &(0x7f0000000200)='./file0\x00', &(0x7f0000000240)='hugetlbfs\x00', 0x0, 0x0)
mknod$loop(&(0x7f0000000000)='./file0/file0\x00', 0x6000, 0xffffffffffffffff)
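
For readability, the tail of that program amounts to roughly the following
userspace sequence (a hedged C rendering of the syzkaller calls above;
error checking omitted):

	#include <sys/stat.h>
	#include <sys/mount.h>
	#include <sys/types.h>

	int main(void)
	{
		mkdir("./file0", 0);
		mount(NULL, "./file0", "hugetlbfs", 0, NULL);
		/* 0x6000 == S_IFBLK: create a block special file on hugetlbfs */
		mknod("./file0/file0", S_IFBLK, (dev_t)0xffffffffffffffff);
		return 0;
	}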

Yufen
Thanks

>
>>>                  I guess that the conflict over the usage of ->i_mapping was
>>> discovered only recently because block special files on hugetlbfs were
>>> just not considered, or well defined, until recently.  So I think that we
>>> might do better to begin by defining it first.
> Unless I am mistaken, this is just like creating a device special file
> in any other filesystem.  Correct?  hugetlbfs is just some place for the
> inode/file to reside.  What happens when you open/ioctl/close/etc the file
> is really dependent on the vfs layer and underlying driver.
>
>> I absolutely agree. Hugetlbfs is overly complicated even without that.
>> So if this is merely a "we have tried it and it has blown up" kind of thing,
>> then just refuse to create blockdev files or document it as undefined.
>> You need root to do so anyway.
> Can we just refuse to create device special files in hugetlbfs?  Do we need
> to worry about breaking any potential users?  I honestly do not know if anyone
> does this today.  However, if they did I believe things would "just work".
> The only known issue is leaking a resv_map structure when the inode is
> destroyed.  I doubt anyone would notice that leak today.
>
> Let me do a little more research.  I think this can all be cleaned up by
> making hugetlbfs always operate on the address space embedded in the inode.
> If nothing else, a change or explanation should be added as to why most code
> operates on inode->i_mapping and one place operates on &inode->i_data.

Patch

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9285dd4f4b1c..f1342a3fa716 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -497,12 +497,15 @@  static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 static void hugetlbfs_evict_inode(struct inode *inode)
 {
 	struct resv_map *resv_map;
+	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
 
 	remove_inode_hugepages(inode, 0, LLONG_MAX);
-	resv_map = (struct resv_map *)inode->i_mapping->private_data;
+	resv_map = info->resv_map;
 	/* root inode doesn't have the resv_map, so we should check it */
-	if (resv_map)
+	if (resv_map) {
 		resv_map_release(&resv_map->refs);
+		info->resv_map = NULL;
+	}
 	clear_inode(inode);
 }
 
@@ -777,7 +780,7 @@  static struct inode *hugetlbfs_get_inode(struct super_block *sb,
 				&hugetlbfs_i_mmap_rwsem_key);
 		inode->i_mapping->a_ops = &hugetlbfs_aops;
 		inode->i_atime = inode->i_mtime = inode->i_ctime = current_time(inode);
-		inode->i_mapping->private_data = resv_map;
+		info->resv_map = resv_map;
 		info->seals = F_SEAL_SEAL;
 		switch (mode & S_IFMT) {
 		default:
@@ -1047,6 +1050,7 @@  static struct inode *hugetlbfs_alloc_inode(struct super_block *sb)
 	 * private inode.  This simplifies hugetlbfs_destroy_inode.
 	 */
 	mpol_shared_policy_init(&p->policy, NULL);
+	p->resv_map = NULL;
 
 	return &p->vfs_inode;
 }
@@ -1061,6 +1065,7 @@  static void hugetlbfs_destroy_inode(struct inode *inode)
 {
 	hugetlbfs_inc_free_inodes(HUGETLBFS_SB(inode->i_sb));
 	mpol_free_shared_policy(&HUGETLBFS_I(inode)->policy);
+	HUGETLBFS_I(inode)->resv_map = NULL;
 	call_rcu(&inode->i_rcu, hugetlbfs_i_callback);
 }
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 11943b60f208..584030631045 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -297,6 +297,7 @@  struct hugetlbfs_inode_info {
 	struct shared_policy policy;
 	struct inode vfs_inode;
 	unsigned int seals;
+	struct resv_map *resv_map;
 };
 
 static inline struct hugetlbfs_inode_info *HUGETLBFS_I(struct inode *inode)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fe74f94e5327..a2648edffb02 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -740,7 +740,7 @@  void resv_map_release(struct kref *ref)
 
 static inline struct resv_map *inode_resv_map(struct inode *inode)
 {
-	return inode->i_mapping->private_data;
+	return HUGETLBFS_I(inode)->resv_map;
 }
 
 static struct resv_map *vma_resv_map(struct vm_area_struct *vma)