ceph: fix memory leak when reallocating pages array for writepages

Message ID: 20200726122804.16008-1-jlayton@kernel.org (mailing list archive)
State: New, archived
Series: ceph: fix memory leak when reallocating pages array for writepages

Commit Message

Jeff Layton July 26, 2020, 12:28 p.m. UTC
Once we've replaced it, we don't want to keep the old one around
anymore.

Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 fs/ceph/addr.c | 1 +
 1 file changed, 1 insertion(+)
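
For context, the one-line change under discussion sits in the stretch of
ceph_writepages_start() that builds a second OSD request when the locked
pages do not all fit into the first one. A simplified sketch of that
stretch follows; the variable names are taken from the hunk quoted in
the thread below, but the surrounding logic is paraphrased rather than
verbatim fs/ceph/addr.c:

        /* Sketch (paraphrased): pages that did not fit into the
         * request being finalized get a fresh array for the next
         * request. */
        data_pages = pages;                   /* remember the old array */
        pages = kmalloc_array(locked_pages, sizeof(*pages), GFP_NOFS);
        /* (the real code falls back to a mempool if this allocation
         * fails; see the sketch later in the thread) */
        memcpy(pages, data_pages + i,
               locked_pages * sizeof(*pages));
        memset(data_pages + i, 0,
               locked_pages * sizeof(*pages));
        /* the patch adds kfree(data_pages) here */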

Comments

Xiubo Li July 27, 2020, 12:16 p.m. UTC | #1
On 2020/7/26 20:28, Jeff Layton wrote:
> Once we've replaced it, we don't want to keep the old one around
> anymore.
>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
>   fs/ceph/addr.c | 1 +
>   1 file changed, 1 insertion(+)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 01ad09733ac7..01e167efa104 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>   			       locked_pages * sizeof(*pages));
>   			memset(data_pages + i, 0,
>   			       locked_pages * sizeof(*pages));

BTW, do we still need to memset() the data_pages?

> +			kfree(data_pages);
>   		} else {
>   			BUG_ON(num_ops != req->r_num_ops);
>   			index = pages[i - 1]->index + 1;
Jeff Layton July 27, 2020, 1:18 p.m. UTC | #2
On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
> On 2020/7/26 20:28, Jeff Layton wrote:
> > Once we've replaced it, we don't want to keep the old one around
> > anymore.
> > 
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> >   fs/ceph/addr.c | 1 +
> >   1 file changed, 1 insertion(+)
> > 
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index 01ad09733ac7..01e167efa104 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
> >   			       locked_pages * sizeof(*pages));
> >   			memset(data_pages + i, 0,
> >   			       locked_pages * sizeof(*pages));
> 
> BTW, do we still need to memset() the data_pages?
> 

Self-NAK on this patch...

Zheng pointed out that this array is actually freed by the request
handler after the submission. This loop is creating a new pages array
for a second request.
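
Roughly, the handoff looks like this (a paraphrased sketch with
approximate names, not the verbatim fs/ceph/addr.c code):

        /* The pages array is attached to the OSD request, and
         * writepages_finish() releases it once the request completes,
         * so freeing it at submission would set up a double free. */
        osd_req_op_extent_osd_data_pages(req, 0, data_pages, len, 0,
                                         from_pool, false);
        req->r_callback = writepages_finish;
        rc = ceph_osdc_start_request(&fsc->client->osdc, req, true);
        BUG_ON(rc);
        /* do NOT kfree(data_pages) here */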

As for whether we need to memset the end of the original array... I
don't think we do. It looks like the pointers at the end of the array
are ignored once you go past the length of the request. That said, it's
fairly cheap to do so, and I'm not inclined to change it, just in case
there is code that does look at those pointers.

> 
> > +			kfree(data_pages);
> >   		} else {
> >   			BUG_ON(num_ops != req->r_num_ops);
> >   			index = pages[i - 1]->index + 1;
> 
>
Xiubo Li July 27, 2020, 1:27 p.m. UTC | #3
On 2020/7/27 21:18, Jeff Layton wrote:
> On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
>> On 2020/7/26 20:28, Jeff Layton wrote:
>>> Once we've replaced it, we don't want to keep the old one around
>>> anymore.
>>>
>>> Signed-off-by: Jeff Layton <jlayton@kernel.org>
>>> ---
>>>    fs/ceph/addr.c | 1 +
>>>    1 file changed, 1 insertion(+)
>>>
>>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>>> index 01ad09733ac7..01e167efa104 100644
>>> --- a/fs/ceph/addr.c
>>> +++ b/fs/ceph/addr.c
>>> @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>>>    			       locked_pages * sizeof(*pages));
>>>    			memset(data_pages + i, 0,
>>>    			       locked_pages * sizeof(*pages));
>> BTW, do we still need to memset() the data_pages?
>>
> Self-NAK on this patch...
>
> Zheng pointed out that this array is actually freed by the request
> handler after the submission. This loop is creating a new pages array
> for a second request.

Do you mean ceph_osd_data_release() ?

The request is only freeing the pages in that array, not the array
itself; did I miss something?


> As for whether we need to memset the end of the original array... I
> don't think we do. It looks like the pointers at the end of the array
> are ignored once you go past the length of the request. That said, it's
> fairly cheap to do so, and I'm not inclined to change it, just in case
> there is code that does look at those pointers.
>
>>> +			kfree(data_pages);
>>>    		} else {
>>>    			BUG_ON(num_ops != req->r_num_ops);
>>>    			index = pages[i - 1]->index + 1;
>>
Jeff Layton July 27, 2020, 1:35 p.m. UTC | #4
On Mon, 2020-07-27 at 21:27 +0800, Xiubo Li wrote:
> On 2020/7/27 21:18, Jeff Layton wrote:
> > On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
> > > On 2020/7/26 20:28, Jeff Layton wrote:
> > > > Once we've replaced it, we don't want to keep the old one around
> > > > anymore.
> > > > 
> > > > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > > > ---
> > > >    fs/ceph/addr.c | 1 +
> > > >    1 file changed, 1 insertion(+)
> > > > 
> > > > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > > > index 01ad09733ac7..01e167efa104 100644
> > > > --- a/fs/ceph/addr.c
> > > > +++ b/fs/ceph/addr.c
> > > > @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
> > > >    			       locked_pages * sizeof(*pages));
> > > >    			memset(data_pages + i, 0,
> > > >    			       locked_pages * sizeof(*pages));
> > > BTW, do we still need to memset() the data_pages?
> > > 
> > Self-NAK on this patch...
> > 
> > Zheng pointed out that this array is actually freed by the request
> > handler after the submission. This loop is creating a new pages array
> > for a second request.
> 
> Do you mean ceph_osd_data_release() ?
> 
> The request is only freeing the pages in that array, not the array
> itself; did I miss something?
> 
> 

No, I meant in writepages_finish(). It has this:

        if (osd_data->pages_from_pool)
                mempool_free(osd_data->pages,
                             ceph_sb_to_client(inode->i_sb)->wb_pagevec_pool);
        else
                kfree(osd_data->pages);

The pages themselves are freed in the loop above that point, however.
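
The pages_from_pool flag checked there pairs with the allocation side
in ceph_writepages_start(), which looks roughly like this (again a
sketch with approximate names, not verbatim code):

        /* Allocation side (sketch): fall back to the mempool when
         * kmalloc fails, and record which allocator was used so that
         * the completion path can free the array the same way. */
        pages = kmalloc_array(locked_pages, sizeof(*pages), GFP_NOFS);
        if (!pages) {
                from_pool = true;  /* ends up in osd_data->pages_from_pool */
                pages = mempool_alloc(fsc->wb_pagevec_pool, GFP_NOFS);
                BUG_ON(!pages);
        }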

> > As for whether we need to memset the end of the original array... I
> > don't think we do. It looks like the pointers at the end of the array
> > are ignored once you go past the length of the request. That said, it's
> > fairly cheap to do so, and I'm not inclined to change it, just in case
> > there is code that does look at those pointers.
> > 
> > > > +			kfree(data_pages);
> > > >    		} else {
> > > >    			BUG_ON(num_ops != req->r_num_ops);
> > > >    			index = pages[i - 1]->index + 1;
Xiubo Li July 27, 2020, 1:44 p.m. UTC | #5
On 2020/7/27 21:35, Jeff Layton wrote:
> On Mon, 2020-07-27 at 21:27 +0800, Xiubo Li wrote:
>> On 2020/7/27 21:18, Jeff Layton wrote:
>>> On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
>>>> On 2020/7/26 20:28, Jeff Layton wrote:
>>>>> Once we've replaced it, we don't want to keep the old one around
>>>>> anymore.
>>>>>
>>>>> Signed-off-by: Jeff Layton <jlayton@kernel.org>
>>>>> ---
>>>>>     fs/ceph/addr.c | 1 +
>>>>>     1 file changed, 1 insertion(+)
>>>>>
>>>>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>>>>> index 01ad09733ac7..01e167efa104 100644
>>>>> --- a/fs/ceph/addr.c
>>>>> +++ b/fs/ceph/addr.c
>>>>> @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>>>>>     			       locked_pages * sizeof(*pages));
>>>>>     			memset(data_pages + i, 0,
>>>>>     			       locked_pages * sizeof(*pages));
>>>> BTW, do we still need to memset() the data_pages?
>>>>
>>> Self-NAK on this patch...
>>>
>>> Zheng pointed out that this array is actually freed by the request
>>> handler after the submission. This loop is creating a new pages array
>>> for a second request.
>> Do you mean ceph_osd_data_release() ?
>>
>> The request is only freeing the pages in that array, not the array
>> itself; did I miss something?
>>
>>
> No, I meant in writepages_finish(). It has this:
>
>          if (osd_data->pages_from_pool)
>                  mempool_free(osd_data->pages,
>                               ceph_sb_to_client(inode->i_sb)->wb_pagevec_pool);
>          else
>                  kfree(osd_data->pages);
>
> The pages themselves are freed in the loop above that point, however.

Okay, that's right.

Thanks.


>>> As for whether we need to memset the end of the original array... I
>>> don't think we do. It looks like the pointers at the end of the array
>>> are ignored once you go past the length of the request. That said, it's
>>> fairly cheap to do so, and I'm not inclined to change it, just in case
>>> there is code that does look at those pointers.
>>>
>>>>> +			kfree(data_pages);
>>>>>     		} else {
>>>>>     			BUG_ON(num_ops != req->r_num_ops);
>>>>>     			index = pages[i - 1]->index + 1;

Patch

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 01ad09733ac7..01e167efa104 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 			       locked_pages * sizeof(*pages));
 			memset(data_pages + i, 0,
 			       locked_pages * sizeof(*pages));
+			kfree(data_pages);
 		} else {
 			BUG_ON(num_ops != req->r_num_ops);
 			index = pages[i - 1]->index + 1;