
[2/3] mm/filemap: initiate readahead even if IOCB_NOWAIT is set for the I/O

Message ID 20190130124420.1834-3-vbabka@suse.cz (mailing list archive)
State New, archived
Series mm/mincore: allow for making sys_mincore() privileged

Commit Message

Vlastimil Babka Jan. 30, 2019, 12:44 p.m. UTC
From: Jiri Kosina <jkosina@suse.cz>

preadv2(RWF_NOWAIT) can be used to open a side-channel to pagecache contents, as
it reveals metadata about residency of pages in pagecache.

If preadv2(RWF_NOWAIT) returns immediately, it provides clear "page not
resident" information, and vice versa.

Close that sidechannel by always initiating readahead on the cache if we
encounter a cache miss for preadv2(RWF_NOWAIT); with that in place, probing
the pagecache residency itself will actually populate the cache, making the
sidechannel useless.
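
For illustration, a hypothetical userspace probe of the kind this sidechannel
enables might look like the sketch below (not part of this patch): before this
change, -EAGAIN reveals "not resident" while leaving no trace; with this change
the same call also kicks off readahead, so probing populates the cache.

#define _GNU_SOURCE
#include <sys/uio.h>
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[4096];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	off_t off = 0;			/* probe the file's first page */
	int fd;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		return 1;

	if (preadv2(fd, &iov, 1, off, RWF_NOWAIT) >= 0)
		printf("offset %lld: resident\n", (long long)off);
	else if (errno == EAGAIN)
		printf("offset %lld: not resident\n", (long long)off);

	close(fd);
	return 0;
}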

Originally-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Dominique Martinet <asmadeus@codewreck.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Kevin Easton <kevin@guarana.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Cyril Hrubis <chrubis@suse.cz>
Cc: Tejun Heo <tj@kernel.org>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Daniel Gruss <daniel@gruss.cc>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/filemap.c | 2 --
 1 file changed, 2 deletions(-)

Comments

Florian Weimer Jan. 30, 2019, 3:04 p.m. UTC | #1
* Vlastimil Babka:

> preadv2(RWF_NOWAIT) can be used to open a side-channel to pagecache
> contents, as it reveals metadata about residency of pages in
> pagecache.
>
> If preadv2(RWF_NOWAIT) returns immediately, it provides a clear "page
> not resident" information, and vice versa.
>
> Close that sidechannel by always initiating readahead on the cache if
> we encounter a cache miss for preadv2(RWF_NOWAIT); with that in place,
> probing the pagecache residency itself will actually populate the
> cache, making the sidechannel useless.

I think this needs to use a different flag because the semantics are so
much different.  If I understand this change correctly, previously,
RWF_NOWAIT essentially avoided any I/O, and now it does not.

Thanks,
Florian
Jiri Kosina Jan. 30, 2019, 3:15 p.m. UTC | #2
On Wed, 30 Jan 2019, Florian Weimer wrote:

> > preadv2(RWF_NOWAIT) can be used to open a side-channel to pagecache
> > contents, as it reveals metadata about residency of pages in
> > pagecache.
> >
> > If preadv2(RWF_NOWAIT) returns immediately, it provides a clear "page
> > not resident" information, and vice versa.
> >
> > Close that sidechannel by always initiating readahead on the cache if
> > we encounter a cache miss for preadv2(RWF_NOWAIT); with that in place,
> > probing the pagecache residency itself will actually populate the
> > cache, making the sidechannel useless.
> 
> I think this needs to use a different flag because the semantics are so
> much different.  If I understand this change correctly, previously,
> RWF_NOWAIT essentially avoided any I/O, and now it does not.

It still avoids synchronous I/O, due to this code still being in place:

                if (!PageUptodate(page)) {
                        if (iocb->ki_flags & IOCB_NOWAIT) {
                                put_page(page);
                                goto would_block;
                        }

but it takes the would_block path only after initiating asynchronous 
readahead.
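
For clarity, a condensed and paraphrased sketch of what the buffered read
path looks like with this patch applied (not the verbatim mm/filemap.c code):

	page = find_get_page(mapping, index);
	if (!page) {
		/* cache miss: readahead is now started even for IOCB_NOWAIT;
		 * page_cache_sync_readahead() only queues the I/O */
		page_cache_sync_readahead(mapping, ra, filp,
				index, last_index - index);
		page = find_get_page(mapping, index);
	}
	...
	if (!PageUptodate(page)) {
		if (iocb->ki_flags & IOCB_NOWAIT) {
			/* never wait for the data itself; the asynchronous
			 * readahead has already been kicked off above */
			put_page(page);
			goto would_block;
		}
		...
	}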
Michal Hocko Jan. 31, 2019, 9:56 a.m. UTC | #3
[Cc fs-devel]

On Wed 30-01-19 13:44:19, Vlastimil Babka wrote:
> From: Jiri Kosina <jkosina@suse.cz>
> 
> preadv2(RWF_NOWAIT) can be used to open a side-channel to pagecache contents, as
> it reveals metadata about residency of pages in pagecache.
> 
> If preadv2(RWF_NOWAIT) returns immediately, it provides a clear "page not
> resident" information, and vice versa.
> 
> Close that sidechannel by always initiating readahead on the cache if we
> encounter a cache miss for preadv2(RWF_NOWAIT); with that in place, probing
> the pagecache residency itself will actually populate the cache, making the
> sidechannel useless.

I guess the current wording doesn't disallow background IO to be
triggered for the EAGAIN case. I am not sure whether that breaks clever
applications which try to perform larger IO for those cases though.

> Originally-by: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Dominique Martinet <asmadeus@codewreck.org>
> Cc: Andy Lutomirski <luto@amacapital.net>
> Cc: Dave Chinner <david@fromorbit.com>
> Cc: Kevin Easton <kevin@guarana.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Cyril Hrubis <chrubis@suse.cz>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Daniel Gruss <daniel@gruss.cc>
> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>  mm/filemap.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 9f5e323e883e..7bcdd36e629d 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2075,8 +2075,6 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  
>  		page = find_get_page(mapping, index);
>  		if (!page) {
> -			if (iocb->ki_flags & IOCB_NOWAIT)
> -				goto would_block;
>  			page_cache_sync_readahead(mapping,
>  					ra, filp,
>  					index, last_index - index);

Maybe a stupid question but I am not really familiar with this path but
what exactly does prevent a sync read down page_cache_sync_readahead
path?
Jiri Kosina Jan. 31, 2019, 10:15 a.m. UTC | #4
On Thu, 31 Jan 2019, Michal Hocko wrote:

> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 9f5e323e883e..7bcdd36e629d 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -2075,8 +2075,6 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
> >  
> >  		page = find_get_page(mapping, index);
> >  		if (!page) {
> > -			if (iocb->ki_flags & IOCB_NOWAIT)
> > -				goto would_block;
> >  			page_cache_sync_readahead(mapping,
> >  					ra, filp,
> >  					index, last_index - index);
> 
> Maybe a stupid question but I am not really familiar with this path but
> what exactly does prevent a sync read down page_cache_sync_readahead
> path?

page_cache_sync_readahead() only submits the read ahead request(s), it 
doesn't wait for it to finish.
Michal Hocko Jan. 31, 2019, 10:23 a.m. UTC | #5
On Thu 31-01-19 11:15:28, Jiri Kosina wrote:
> On Thu, 31 Jan 2019, Michal Hocko wrote:
> 
> > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > index 9f5e323e883e..7bcdd36e629d 100644
> > > --- a/mm/filemap.c
> > > +++ b/mm/filemap.c
> > > @@ -2075,8 +2075,6 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
> > >  
> > >  		page = find_get_page(mapping, index);
> > >  		if (!page) {
> > > -			if (iocb->ki_flags & IOCB_NOWAIT)
> > > -				goto would_block;
> > >  			page_cache_sync_readahead(mapping,
> > >  					ra, filp,
> > >  					index, last_index - index);
> > 
> > Maybe a stupid question but I am not really familiar with this path but
> > what exactly does prevent a sync read down page_cache_sync_readahead
> > path?
> 
> page_cache_sync_readahead() only submits the read ahead request(s), it 
> doesn't wait for it to finish.

OK, I guess my question was not precise. What does prevent taking fs
locks down the path?
Jiri Kosina Jan. 31, 2019, 10:30 a.m. UTC | #6
On Thu, 31 Jan 2019, Michal Hocko wrote:

> > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > index 9f5e323e883e..7bcdd36e629d 100644
> > > > --- a/mm/filemap.c
> > > > +++ b/mm/filemap.c
> > > > @@ -2075,8 +2075,6 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
> > > >  
> > > >  		page = find_get_page(mapping, index);
> > > >  		if (!page) {
> > > > -			if (iocb->ki_flags & IOCB_NOWAIT)
> > > > -				goto would_block;
> > > >  			page_cache_sync_readahead(mapping,
> > > >  					ra, filp,
> > > >  					index, last_index - index);
> > > 
> > > Maybe a stupid question but I am not really familiar with this path but
> > > what exactly does prevent a sync read down page_cache_sync_readahead
> > > path?
> > 
> > page_cache_sync_readahead() only submits the read ahead request(s), it 
> > doesn't wait for it to finish.
> 
> OK, I guess my question was not precise. What does prevent taking fs
> locks down the path?

Well, RWF_NOWAIT doesn't mean the kernel can't reschedule while executing 
preadv2(), right? It just means it will not wait for the arrival of the 
whole data blob into pagecache in case it's not there.
Florian Weimer Jan. 31, 2019, 10:47 a.m. UTC | #7
* Jiri Kosina:

> On Wed, 30 Jan 2019, Florian Weimer wrote:
>
>> > preadv2(RWF_NOWAIT) can be used to open a side-channel to pagecache
>> > contents, as it reveals metadata about residency of pages in
>> > pagecache.
>> >
>> > If preadv2(RWF_NOWAIT) returns immediately, it provides a clear "page
>> > not resident" information, and vice versa.
>> >
>> > Close that sidechannel by always initiating readahead on the cache if
>> > we encounter a cache miss for preadv2(RWF_NOWAIT); with that in place,
>> > probing the pagecache residency itself will actually populate the
>> > cache, making the sidechannel useless.
>> 
>> I think this needs to use a different flag because the semantics are so
>> much different.  If I understand this change correctly, previously,
>> RWF_NOWAIT essentially avoided any I/O, and now it does not.
>
> It still avoid synchronous I/O, due to this code still being in place:
>
>                 if (!PageUptodate(page)) {
>                         if (iocb->ki_flags & IOCB_NOWAIT) {
>                                 put_page(page);
>                                 goto would_block;
>                         }
>
> but goes the would_block path only after initiating asynchronous 
> readahead.

But it wouldn't schedule asynchronous readahead before?

I'm worried that something, say PostgreSQL doing a sequential scan,
would implement a two-pass approach, first using RWF_NOWAIT to process
what's in the kernel page cache, and then read the rest without it.  If
RWF_NOWAIT is treated as a prefetch hint, there could be much more read
activity, and a lot of it would be pointless because the data might have
to be evicted before userspace can use it.
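
For illustration, the two-pass pattern described above might look roughly like
the sketch below (hypothetical application code, not taken from PostgreSQL;
process() stands in for whatever the application does with the data, and short
reads in the first pass are glossed over for brevity):

#define _GNU_SOURCE
#include <sys/uio.h>
#include <errno.h>
#include <stdlib.h>

#define CHUNK 65536

extern void process(off_t off, const char *buf, ssize_t len);

void two_pass_scan(int fd, off_t file_size)
{
	long i, nchunks = (file_size + CHUNK - 1) / CHUNK;
	char *missed = calloc(nchunks, 1);
	char buf[CHUNK];
	struct iovec iov = { .iov_base = buf, .iov_len = CHUNK };
	ssize_t ret;

	if (!missed)
		return;

	/* Pass 1: consume only what the page cache already holds. */
	for (i = 0; i < nchunks; i++) {
		ret = preadv2(fd, &iov, 1, (off_t)i * CHUNK, RWF_NOWAIT);
		if (ret < 0 && errno == EAGAIN)
			missed[i] = 1;	/* not cached, defer to pass 2 */
		else if (ret > 0)
			process((off_t)i * CHUNK, buf, ret);
	}

	/* Pass 2: ordinary blocking reads for the chunks deferred above. */
	for (i = 0; i < nchunks; i++) {
		if (missed[i]) {
			ret = preadv2(fd, &iov, 1, (off_t)i * CHUNK, 0);
			if (ret > 0)
				process((off_t)i * CHUNK, buf, ret);
		}
	}
	free(missed);
}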

Thanks,
Florian
Michal Hocko Jan. 31, 2019, 11:32 a.m. UTC | #8
On Thu 31-01-19 11:30:24, Jiri Kosina wrote:
> On Thu, 31 Jan 2019, Michal Hocko wrote:
> 
> > > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > > index 9f5e323e883e..7bcdd36e629d 100644
> > > > > --- a/mm/filemap.c
> > > > > +++ b/mm/filemap.c
> > > > > @@ -2075,8 +2075,6 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
> > > > >  
> > > > >  		page = find_get_page(mapping, index);
> > > > >  		if (!page) {
> > > > > -			if (iocb->ki_flags & IOCB_NOWAIT)
> > > > > -				goto would_block;
> > > > >  			page_cache_sync_readahead(mapping,
> > > > >  					ra, filp,
> > > > >  					index, last_index - index);
> > > > 
> > > > Maybe a stupid question but I am not really familiar with this path but
> > > > what exactly does prevent a sync read down page_cache_sync_readahead
> > > > path?
> > > 
> > > page_cache_sync_readahead() only submits the read ahead request(s), it 
> > > doesn't wait for it to finish.
> > 
> > OK, I guess my question was not precise. What does prevent taking fs
> > locks down the path?
> 
> Well, RWF_NOWAIT doesn't mean the kernel can't reschedule while executing 
> preadv2(), right? It just means it will not wait for the arrival of the 
> whole data blob into pagecache in case it's not there.

No, it can reschedule for sure but the man page says: 
: If this flag is specified, the preadv2() system call will return
: instantly if it would have to read data from the backing storage or wait
: for a lock.

I assume that the lock is meant to be a filesystem lock here.
Jiri Kosina Jan. 31, 2019, 11:34 a.m. UTC | #9
On Thu, 31 Jan 2019, Florian Weimer wrote:

> >> I think this needs to use a different flag because the semantics are so
> >> much different.  If I understand this change correctly, previously,
> >> RWF_NOWAIT essentially avoided any I/O, and now it does not.
> >
> > It still avoid synchronous I/O, due to this code still being in place:
> >
> >                 if (!PageUptodate(page)) {
> >                         if (iocb->ki_flags & IOCB_NOWAIT) {
> >                                 put_page(page);
> >                                 goto would_block;
> >                         }
> >
> > but goes the would_block path only after initiating asynchronous 
> > readahead.
> 
> But it wouldn't schedule asynchronous readahead before?

It would, that's kind of the whole point.

> I'm worried that something, say PostgreSQL doing a sequential scan, 
> would implement a two-pass approach, first using RWF_NOWAIT to process 
> what's in the kernel page cache, and then read the rest without it.  If 
> RWF_NOWAIT is treated as a prefetch hint, there could be much more read 
> activity, and a lot of it would be pointless because the data might have 
> to be evicted before userspace can use it.

So are you aware of anything already existing that'd implement these 
semantics? I've quickly grepped https://github.com/postgres/postgres for 
RWF_NOWAIT, and they don't seem to use it at all. RWF_NOWAIT is rather 
new.

The use case I am aware of is to make sure that the thread doing 
io_submit() doesn't get blocked for too long, because it has other things 
to do quickly in order to avoid starving other sub-threads (and delegate 
the I/O submission to an asynchronous context).
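
For concreteness, a minimal sketch of that use case, using the raw AIO
syscalls rather than libaio (field names follow <linux/aio_abi.h>; io_setup(),
error handling and completion reaping are omitted):

#define _GNU_SOURCE
#include <linux/aio_abi.h>
#include <sys/syscall.h>
#include <sys/uio.h>		/* RWF_NOWAIT */
#include <string.h>
#include <unistd.h>

/* Queue one read without risking that the submitting thread blocks on
 * filesystem locks or a pagecache miss; ctx comes from io_setup(2).
 * An EAGAIN result tells the caller to hand the read over to a slow-path
 * worker thread instead. */
static int submit_nowait_read(aio_context_t ctx, int fd,
			      void *buf, size_t len, off_t off)
{
	struct iocb cb;
	struct iocb *cbs[1] = { &cb };

	memset(&cb, 0, sizeof(cb));
	cb.aio_lio_opcode = IOCB_CMD_PREAD;
	cb.aio_fildes = fd;
	cb.aio_buf = (__u64)(unsigned long)buf;
	cb.aio_nbytes = len;
	cb.aio_offset = off;
	cb.aio_rw_flags = RWF_NOWAIT;	/* don't block during submission */

	return syscall(SYS_io_submit, ctx, 1, cbs);
}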
Daniel Gruss Jan. 31, 2019, 12:04 p.m. UTC | #10
On 1/30/19 1:44 PM, Vlastimil Babka wrote:
> Close that sidechannel by always initiating readahead on the cache if we
> encounter a cache miss for preadv2(RWF_NOWAIT); with that in place, probing
> the pagecache residency itself will actually populate the cache, making the
> sidechannel useless.

I fear this does not really close the side channel. You can time the
preadv2 function and infer which path it took, so you just bring it down
to the same as using mmap and timing accesses.
If I understood it correctly, this patch just removes the advantages of
preadv2 over mmap+access for the attacker.
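
For illustration, the timing variant described here could be as simple as the
hypothetical sketch below; the threshold is made up and would need per-machine
calibration:

#define _GNU_SOURCE
#include <sys/uio.h>
#include <time.h>

/* The call's latency still differs between the hit path (copy from the
 * pagecache) and the miss path (kick off readahead, then bail out), so
 * timing it gives much the same signal as timing an access to an
 * mmap()ed page; the probe now populates the cache as a side effect. */
static int page_seems_resident(int fd, off_t off)
{
	char buf[4096];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct timespec t0, t1;
	long long ns;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	preadv2(fd, &iov, 1, off, RWF_NOWAIT);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL +
	     (t1.tv_nsec - t0.tv_nsec);
	return ns < 2000;		/* hypothetical threshold in ns */
}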


Cheers,
Daniel
Vlastimil Babka Jan. 31, 2019, 12:06 p.m. UTC | #11
On 1/31/19 1:04 PM, Daniel Gruss wrote:
> On 1/30/19 1:44 PM, Vlastimil Babka wrote:
>> Close that sidechannel by always initiating readahead on the cache if we
>> encounter a cache miss for preadv2(RWF_NOWAIT); with that in place, probing
>> the pagecache residency itself will actually populate the cache, making the
>> sidechannel useless.
> 
> I fear this does not really close the side channel. You can time the
> preadv2 function and infer which path it took, so you just bring it down
> to the same as using mmap and timing accesses.
> If I understood it correctly, this patch just removes the advantages of
> preadv2 over mmmap+access for the attacker.

But isn't that the same with mincore()? We can't simply remove the
possibility of mmap+access, but we are closing the simpler methods?

Vlastimil


> Cheers,
> Daniel
>
Jiri Kosina Jan. 31, 2019, 12:08 p.m. UTC | #12
On Thu, 31 Jan 2019, Daniel Gruss wrote:

> If I understood it correctly, this patch just removes the advantages of 
> preadv2 over mmmap+access for the attacker.

Which is the desired effect. We are not trying to solve the timing aspect, 
as I don't think there is a reasonable way to do it, is there?
Daniel Gruss Jan. 31, 2019, 12:57 p.m. UTC | #13
On 1/31/19 1:08 PM, Jiri Kosina wrote:
> On Thu, 31 Jan 2019, Daniel Gruss wrote:
> 
>> If I understood it correctly, this patch just removes the advantages of 
>> preadv2 over mmmap+access for the attacker.
> 
> Which is the desired effect. We are not trying to solve the timing aspect, 
> as I don't think there is a reasonable way to do it, is there?

There are two building blocks to cache attacks: bringing the cache into
a state, and observing a state change. You can mitigate an attack by
breaking either of these building blocks.

For most attacks the attacker would be interested in observing *when* a
specific victim page is loaded into the page cache rather than observing
whether it is in the page cache right now (it could be there for ages if
the system was not under memory pressure).
So, one could try to prevent interference in the page cache between
attacker and victim -> working set algorithms do that to some extent.
A simpler idea (with more side effects) would be to limit the maximum
share of the page cache per user (or per process, depending on the
threat model)...


Cheers,
Daniel
Linus Torvalds Jan. 31, 2019, 5:54 p.m. UTC | #14
On Thu, Jan 31, 2019 at 2:23 AM Michal Hocko <mhocko@kernel.org> wrote:
>
> OK, I guess my question was not precise. What does prevent taking fs
> locks down the path?

IOCB_NOWAIT has never meant that, and will never mean it.

We will never give user space those kinds of guarantees. We do locking
for various reasons. For example, we'll do the mm lock just when
fetching/storing data from/to user space if there's a page fault. Or -
more obviously - we'll also check for - and sleep on - mandatory locks
in rw_verify_area().

There is nothing like "atomic IO" to user space. We simply do not give
those kinds of guarantees. That's even more true when this is a
information leak that we shouldn't expose to user space in the first
place.

                  Linus
Dave Chinner Feb. 1, 2019, 1:44 a.m. UTC | #15
On Thu, Jan 31, 2019 at 10:56:44AM +0100, Michal Hocko wrote:
> [Cc fs-devel]
> 
> On Wed 30-01-19 13:44:19, Vlastimil Babka wrote:
> > From: Jiri Kosina <jkosina@suse.cz>
> > 
> > preadv2(RWF_NOWAIT) can be used to open a side-channel to pagecache contents, as
> > it reveals metadata about residency of pages in pagecache.
> > 
> > If preadv2(RWF_NOWAIT) returns immediately, it provides a clear "page not
> > resident" information, and vice versa.
> > 
> > Close that sidechannel by always initiating readahead on the cache if we
> > encounter a cache miss for preadv2(RWF_NOWAIT); with that in place, probing
> > the pagecache residency itself will actually populate the cache, making the
> > sidechannel useless.
> 
> I guess the current wording doesn't disallow background IO to be
> triggered for EAGAIN case. I am not sure whether that breaks clever
> applications which try to perform larger IO for those cases though.

Actually, it does:

RWF_NOWAIT (since Linux 4.14)

    Do  not  wait for data which is not immediately available.  If
    this flag is specified, the preadv2() system call will return
    instantly if it would have to read data from the backing storage
    or wait for a lock.

page_cache_sync_readahead() can block on page allocation, and it calls
->readpages(), which means there are page locks and filesystem locks
in play (e.g.  for block mapping), there's potential for blocking on
metadata IO (both submission and completion) to read block maps, the
data readahead can be submitted for IO so it can get stuck anywhere
in the IO path, etc...

Basically, it completely subverts the documented behaviour of
RWF_NOWAIT.

There are applications (like Samba (*)) that are planning to use
this to avoid blocking their main processing threads on buffered
IO. This change makes RWF_NOWAIT pretty much useless to them - it
/was/ the only solution we had for reliably issuing non-blocking IO;
with this patch it isn't a viable solution at all.

(*) https://github.com/samba-team/samba/commit/6381044c0270a647c20935d22fd23f235d19b328

IOWs, if this change goes through, it needs to be documented as an
intentional behavioural bug in the preadv2 manpage so that userspace
developers are aware of the new limitations of RWF_NOWAIT and should
avoid it like the plague.

But worse than that is that nobody has bothered to (or asked someone
familiar with the code to) do an audit of RWF_NOWAIT usage after I
pointed out the behavioural issues. The one person who was engaged
and /had done an audit/ got shouted down with so much bullshit they
just walked away....

So, I'll invite the incoherent, incandescent O_DIRECT rage flames of
Linus to be unleashed again and point out the /other reference/ to
IOCB_NOWAIT in mm/filemap.c. That is, in generic_file_read_iter(),
in the *generic O_DIRECT read path*:

	if (iocb->ki_flags & IOCB_DIRECT) {
.....
		if (iocb->ki_flags & IOCB_NOWAIT) {
			if (filemap_range_has_page(mapping, iocb->ki_pos,
						   iocb->ki_pos + count - 1))
				return -EAGAIN;
		} else {
.....

This page cache probe is about 100 lines of code down from the code
that this patch modifies, in it's direct caller. It's not hard to
find, I shouldn't have to point it out, nor have to explain how it
makes this patch completely irrelevant.

> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 9f5e323e883e..7bcdd36e629d 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -2075,8 +2075,6 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
> >  
> >  		page = find_get_page(mapping, index);
> >  		if (!page) {
> > -			if (iocb->ki_flags & IOCB_NOWAIT)
> > -				goto would_block;
> >  			page_cache_sync_readahead(mapping,
> >  					ra, filp,
> >  					index, last_index - index);
> 
> Maybe a stupid question but I am not really familiar with this path but
> what exactly does prevent a sync read down page_cache_sync_readahead
> path?

It's effectively useless as a workaround because you can avoid the
readahead IO being issued relatively easily:

void page_cache_sync_readahead(struct address_space *mapping,
                               struct file_ra_state *ra, struct file *filp,
                               pgoff_t offset, unsigned long req_size)
{
        /* no read-ahead */
        if (!ra->ra_pages)
                return;

        if (blk_cgroup_congested())
                return;
....

IOWs, we just have to issue enough IO to congest the block device (or,
even easier, a rate-limited cgroup), and we can still use RWF_NOWAIT
to probe the page cache. Or if we can convince ra->ra_pages to be
zero (e.g. it's on a bdi device with no readahead configured because
it's real fast) then it doesn't work there, either.

So this a) isn't a robust workaround, b) it breaks documented API
semantics and c) isn't the only path to page cache probing via
RWF_NOWAIT. It's just a new game of whack-a-mole.

Cheers,

Dave.
Dave Chinner Feb. 1, 2019, 5:13 a.m. UTC | #16
On Thu, Jan 31, 2019 at 09:54:16AM -0800, Linus Torvalds wrote:
> On Thu, Jan 31, 2019 at 2:23 AM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > OK, I guess my question was not precise. What does prevent taking fs
> > locks down the path?
> 
> IOCB_NOWAIT has never meant that, and will never mean it.

I think you're wrong, Linus. IOCB_NOWAIT was specifically designed
to prevent blocking on filesystem locks during AIO submission. The
initial commits spell that out pretty clearly:

commit b745fafaf70c0a98a2e1e7ac8cb14542889ceb0e
Author: Goldwyn Rodrigues <rgoldwyn@suse.com>
Date:   Tue Jun 20 07:05:43 2017 -0500

    fs: Introduce RWF_NOWAIT and FMODE_AIO_NOWAIT
    
    RWF_NOWAIT informs kernel to bail out if an AIO request will block
    for reasons such as file allocations, or a writeback triggered,
    or would block while allocating requests while performing
    direct I/O.
    
    RWF_NOWAIT is translated to IOCB_NOWAIT for iocb->ki_flags.
    
    FMODE_AIO_NOWAIT is a flag which identifies the file opened is capable
    of returning -EAGAIN if the AIO call will block. This must be set by
    supporting filesystems in the ->open() call.
    
    Filesystems xfs, btrfs and ext4 would be supported in the following patches.
    
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

commit 29a5d29ec181ebdc98a26cedbd76ce9870248892
Author: Goldwyn Rodrigues <rgoldwyn@suse.com>
Date:   Tue Jun 20 07:05:48 2017 -0500

    xfs: nowait aio support
    
    If IOCB_NOWAIT is set, bail if the i_rwsem is not lockable
    immediately.
    
    IF IOMAP_NOWAIT is set, return EAGAIN in xfs_file_iomap_begin
    if it needs allocation either due to file extension, writing to a hole,
    or COW or waiting for other DIOs to finish.
    
    Return -EAGAIN if we don't have extent list in memory.
    
    Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

commit 728fbc0e10b7f3ce2ee043b32e3453fd5201c055
Author: Goldwyn Rodrigues <rgoldwyn@suse.com>
Date:   Tue Jun 20 07:05:47 2017 -0500

    ext4: nowait aio support
    
    Return EAGAIN if any of the following checks fail for direct I/O:
      + i_rwsem is lockable
      + Writing beyond end of file (will trigger allocation)
      + Blocks are not allocated at the write location
    
    Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
    Reviewed-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

> We will never give user space those kinds of guarantees. We do locking
> for various reasons.  For example, we'll do the mm lock just when
> fetching/storing data from/to user space if there's a page fault.

You are conflating "best effort non-blocking operation" with
"atomic guarantee".  RWF_NOWAIT/IOCB_NOWAIT is the
former, not the latter.

i.e. RWF_NOWAIT addresses the "every second IO submission blocks"
problems that AIO submission suffered from due to filesystem lock
contention, not the rare and unusual things like  "page fault during
get_user_pages in direct IO submission".  Maybe one day, but right
now those rare cases are not pain points for applications that
require nonblock AIO submission via RWF_NOWAIT.

> Or -
> more obviously - we'll also check for - and sleep on - mandatory locks
> in rw_verify_area().

Well, only if you don't use fcntl(O_NONBLOCK) on the file to tell
mandatory locking to fail with -EAGAIN instead of sleeping.

-Dave.
Linus Torvalds Feb. 1, 2019, 7:05 a.m. UTC | #17
On Thu, Jan 31, 2019 at 9:16 PM Dave Chinner <david@fromorbit.com> wrote:
>
> You are conflating "best effort non-blocking operation" with
> "atomic guarantee".  RWF_NOWAIT/IOCB_NOWAIT is the
> former, not the latter.

Right.

That's my *point*, Dave.

It's not an "atomic guarantee", and never will be. We are in 100%
agreement. That's what I _said_.

And part of "best effort" is very much "not a security information leak".

I really don't see why you are so argumentative.

As I mentioned earlier in the thread, it's actually quite possible
that users will actually find that starting read-ahead is a *good*
thing, Dave.

Even - in fact *particularly* - the user you brought up: samba using
RWF_NOWAIT to try to do things synchronously quickly.

So Dave, why are you being so negative?

             Linus
Linus Torvalds Feb. 1, 2019, 7:21 a.m. UTC | #18
On Thu, Jan 31, 2019 at 11:05 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> And part of "best effort" is very much "not a security information leak".

Side note: it's entirely possible that the preadv2(RWF_NOWAIT)
interface is actually already too slow to be effectively used as much
of an attack vector.

One of the advantages of mincore() for the attack was that you could
just get a lot of page status information in one go. With RWF_NOWAIT,
you only really get "up to the first non-cached page", so it's already
a weaker signal than mincore() gave.
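
To make the difference in granularity concrete (a hedged sketch, not an
exploit): one mincore() call returns a residency byte per page for a whole
mapping, while one preadv2(RWF_NOWAIT) call only reports how far the cached
prefix of the requested range extends.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/uio.h>

/* One syscall: a residency byte for every page of the mapping. */
static int residency_via_mincore(void *map, size_t len, unsigned char *vec)
{
	return mincore(map, len, vec);
}

/* One syscall per probe: a short return value only says how many bytes at
 * the start of the range were already cached (the read stops at the first
 * non-cached page), so covering a whole file takes many calls. */
static ssize_t cached_prefix(int fd, off_t off, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	return preadv2(fd, &iov, 1, off, RWF_NOWAIT);
}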

System calls aren't horrendously slow (at least not with fixed
non-meltdown CPUs), but it might still be a somewhat noticeable
inconvenience in an attack that is already probably not all that easy
to do on an arbitrary target.

So it might not be a huge deal. But I think we should at least try to
make things less useful for these kinds of attack vectors.

And no, that doesn't mean "stop all theoretical attacks". It means
"let's try to make things less convenient as a data leak".

That's why things like "oh, you can still see the signal if you can
keep the backing device congested" is not something I'd worry about.
It's just another (big) inconvenience, and not all that simple to do.
At some point, it's simply not worth it as an attack vector any more.

               Linus
Jiri Kosina Feb. 12, 2019, 3:48 p.m. UTC | #19
On Fri, 1 Feb 2019, Dave Chinner wrote:

> So, I'll invite the incoherent, incandescent O_DIRECT rage flames of
> Linus to be unleashed again and point out the /other reference/ to
> IOCB_NOWAIT in mm/filemap.c. That is, in generic_file_read_iter(),
> in the *generic O_DIRECT read path*:
> 
> 	if (iocb->ki_flags & IOCB_DIRECT) {
> .....
> 		if (iocb->ki_flags & IOCB_NOWAIT) {
> 			if (filemap_range_has_page(mapping, iocb->ki_pos,
> 						   iocb->ki_pos + count - 1))
> 				return -EAGAIN;
> 		} else {
> .....

OK, thanks Dave, this is a good point I've missed in this mail before 
(probably as I focused only on the aspect of disagreement about what NONBLOCK 
actually means :) ). I will look into fixing it for the next iteration.

> It's effectively useless as a workaround because you can avoid the
> readahead IO being issued relatively easily:
> 
> void page_cache_sync_readahead(struct address_space *mapping,
>                                struct file_ra_state *ra, struct file *filp,
>                                pgoff_t offset, unsigned long req_size)
> {
>         /* no read-ahead */
>         if (!ra->ra_pages)
>                 return;
> 
>         if (blk_cgroup_congested())
>                 return;
> ....
> 
> IOWs, we just have to issue enough IO to congest the block device (or,
> even easier, a rate-limited cgroup), and we can still use RWF_NOWAIT
> to probe the page cache. Or if we can convince ra->ra_pages to be
> zero (e.g. it's on bdi device with no readahead configured because
> it's real fast) then it doesn't work there, either.

It's questionable, though, whether the noise level here wouldn't already be 
too high for any sidechannel to work reliably. So I'd suggest operating 
under the assumption that it would be too noisy, unless anyone is able to 
prove otherwise.

Thanks,

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index 9f5e323e883e..7bcdd36e629d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2075,8 +2075,6 @@  static ssize_t generic_file_buffered_read(struct kiocb *iocb,
 
 		page = find_get_page(mapping, index);
 		if (!page) {
-			if (iocb->ki_flags & IOCB_NOWAIT)
-				goto would_block;
 			page_cache_sync_readahead(mapping,
 					ra, filp,
 					index, last_index - index);