mm, swap: disallow swapon() on zoned block devices

Message ID 20191015043827.160444-1-naohiro.aota@wdc.com (mailing list archive)
State New, archived
Series mm, swap: disallow swapon() on zoned block devices

Commit Message

Naohiro Aota Oct. 15, 2019, 4:38 a.m. UTC
A zoned block device consists of a number of zones. Zones are
either conventional and accepting random writes or sequential and
requiring that writes be issued in LBA order from each zone write
pointer position. For the write restriction, zoned block devices are
not suitable for a swap device. Disallow swapon on them.

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
 mm/swapfile.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Christoph Hellwig Oct. 15, 2019, 7:57 a.m. UTC | #1
On Tue, Oct 15, 2019 at 01:38:27PM +0900, Naohiro Aota wrote:
> +		if (blk_queue_is_zoned(p->bdev->bd_queue))
> +			return -EINVAL;

Please add a comment here (based on your changelog).
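
For illustration, a commented version of the check, based on the changelog
wording (a sketch only, not the respun patch), might read:

		if (blk_queue_is_zoned(p->bdev->bd_queue)) {
			/*
			 * Zoned block devices require writes to be issued
			 * sequentially from each zone's write pointer, which
			 * the swap code cannot guarantee. Refuse to use such
			 * a device for swap.
			 */
			return -EINVAL;
		}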
Matthew Wilcox Oct. 15, 2019, 11:35 a.m. UTC | #2
On Tue, Oct 15, 2019 at 01:38:27PM +0900, Naohiro Aota wrote:
> A zoned block device consists of a number of zones. Zones are
> either conventional and accepting random writes or sequential and
> requiring that writes be issued in LBA order from each zone write
> pointer position. For the write restriction, zoned block devices are
> not suitable for a swap device. Disallow swapon on them.

That's unfortunate.  I wonder what it would take to make the swap code be
suitable for zoned devices.  It might even perform better on conventional
drives since swapout would be a large linear write.  Swapin would be a
fragmented, seeky set of reads, but this would seem like an excellent
university project.
Theodore Ts'o Oct. 15, 2019, 1:27 p.m. UTC | #3
On Tue, Oct 15, 2019 at 04:35:48AM -0700, Matthew Wilcox wrote:
> On Tue, Oct 15, 2019 at 01:38:27PM +0900, Naohiro Aota wrote:
> > A zoned block device consists of a number of zones. Zones are
> > either conventional and accepting random writes or sequential and
> > requiring that writes be issued in LBA order from each zone write
> > pointer position. For the write restriction, zoned block devices are
> > not suitable for a swap device. Disallow swapon on them.
> 
> That's unfortunate.  I wonder what it would take to make the swap code be
> suitable for zoned devices.  It might even perform better on conventional
> drives since swapout would be a large linear write.  Swapin would be a
> fragmented, seeky set of reads, but this would seem like an excellent
> university project.

Also maybe a great Outreachy or GSOC project?
Hannes Reinecke Oct. 15, 2019, 1:48 p.m. UTC | #4
On 10/15/19 1:35 PM, Matthew Wilcox wrote:
> On Tue, Oct 15, 2019 at 01:38:27PM +0900, Naohiro Aota wrote:
>> A zoned block device consists of a number of zones. Zones are
>> either conventional and accepting random writes or sequential and
>> requiring that writes be issued in LBA order from each zone write
>> pointer position. For the write restriction, zoned block devices are
>> not suitable for a swap device. Disallow swapon on them.
> 
> That's unfortunate.  I wonder what it would take to make the swap code be
> suitable for zoned devices.  It might even perform better on conventional
> drives since swapout would be a large linear write.  Swapin would be a
> fragmented, seeky set of reads, but this would seem like an excellent
> university project.
> 
The main problem I'm seeing is the eviction of pages from swap.
While swapin is easy (as you can do random access on reads), evicting pages
from cache becomes extremely tricky as you can only delete entire zones.
So how do we mark pages within zones as being stale?
Or can we modify the swapin code to always swap in an entire zone and
discard it immediately?

Cheers,

Hannes
Christoph Lameter (Ampere) Oct. 15, 2019, 2:50 p.m. UTC | #5
On Tue, 15 Oct 2019, Hannes Reinecke wrote:

> On 10/15/19 1:35 PM, Matthew Wilcox wrote:
> > On Tue, Oct 15, 2019 at 01:38:27PM +0900, Naohiro Aota wrote:
> >> A zoned block device consists of a number of zones. Zones are
> >> either conventional and accepting random writes or sequential and
> >> requiring that writes be issued in LBA order from each zone write
> >> pointer position. For the write restriction, zoned block devices are
> >> not suitable for a swap device. Disallow swapon on them.
> >
> > That's unfortunate.  I wonder what it would take to make the swap code be
> > suitable for zoned devices.  It might even perform better on conventional
> > drives since swapout would be a large linear write.  Swapin would be a
> > fragmented, seeky set of reads, but this would seem like an excellent
> > university project.
> >
> The main problem I'm seeing is the eviction of pages from swap.
> While swapin is easy (as you can do random access on reads), evicting pages
> from cache becomes extremely tricky as you can only delete entire zones.
> So how do we mark pages within zones as being stale?
> Or can we modify the swapin code to always swap in an entire zone and
> discard it immediately?

On swapout you would change the block number on the swap device to the
latest and increment it?

Mark the prior block number as unused and then at some convenient time scan
the map and see if you can somehow free up a zone?
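
As a rough sketch of that bookkeeping (hypothetical structure and helpers,
not existing swap code), each zone could count how many of its slots have
gone stale, and a later scan reclaims zones that are full and entirely stale:

#include <linux/types.h>

/* Hypothetical per-zone accounting for an append-only swap area. */
struct swap_zone {
	unsigned int nr_slots;	/* swap slots backed by this zone */
	unsigned int nr_stale;	/* slots whose old copies were superseded */
	bool full;		/* write pointer has reached the zone end */
};

/* Called when a slot's old copy is replaced by an appended write. */
static void swap_zone_mark_stale(struct swap_zone *zone)
{
	zone->nr_stale++;
}

/* A periodic scan can reset any zone that is full and entirely stale. */
static bool swap_zone_reclaimable(const struct swap_zone *zone)
{
	return zone->full && zone->nr_stale == zone->nr_slots;
}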
Matthew Wilcox Oct. 15, 2019, 3:09 p.m. UTC | #6
On Tue, Oct 15, 2019 at 03:48:47PM +0200, Hannes Reinecke wrote:
> On 10/15/19 1:35 PM, Matthew Wilcox wrote:
> > On Tue, Oct 15, 2019 at 01:38:27PM +0900, Naohiro Aota wrote:
> >> A zoned block device consists of a number of zones. Zones are
> >> either conventional and accepting random writes or sequential and
> >> requiring that writes be issued in LBA order from each zone write
> >> pointer position. For the write restriction, zoned block devices are
> >> not suitable for a swap device. Disallow swapon on them.
> > 
> > That's unfortunate.  I wonder what it would take to make the swap code be
> > suitable for zoned devices.  It might even perform better on conventional
> > drives since swapout would be a large linear write.  Swapin would be a
> > fragmented, seeky set of reads, but this would seem like an excellent
> > university project.
> 
> The main problem I'm seeing is the eviction of pages from swap.
> While swapin is easy (as you can do random access on reads), evicting pages
> from cache becomes extremely tricky as you can only delete entire zones.
> So how do we mark pages within zones as being stale?
> Or can we modify the swapin code to always swap in an entire zone and
> discard it immediately?

I thought zones were too big to swap in all at once?  What's a typical
zone size these days?  (the answer looks very different if a zone is 1MB
or if it's 1GB)

Fundamentally an allocated anonymous page has 5 states:

A: In memory, not written to swap (allocated)
B: In memory, dirty, not written to swap (app modifies page)
C: In memory, clean, written to swap (kernel decides to write it)
D: Not in memory, written to swap (kernel decides to reuse the memory)
E: In memory, clean, written to swap (app faults it back in for read)

We currently have a sixth state which is a page that has previously been
written to swap but has been redirtied by the app.  It will be written
back to the allocated location the next time it's targeted for writeout.

That would have to change; since we can't do random writes, pages would
transition from states D or E back to B.  Swapping out a page that has
previously been swapped will now mean appending to the tail of the swap,
not writing in place.

So the swap code will now need to keep track of which pages are still
in use in storage and will need to be relocated once we decide to reuse
the zone.  Not an insurmountable task, but not entirely trivial.

There'd be some other gunk to deal with around handling badblocks.
Those are currently stored in page 1, so adding new ones would be
a rewrite of that block.
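
Written out as a small state machine (hypothetical labels, for illustration
only), the zoned-device constraint is that the D/E-to-B transition invalidates
the old slot, and the next swapout appends a fresh copy at the tail instead of
rewriting in place:

/* Hypothetical labels for the anonymous-page states described above. */
enum anon_page_state {
	ANON_ALLOCATED,		/* A: in memory, never written to swap */
	ANON_DIRTY,		/* B: in memory, dirty, no valid swap copy */
	ANON_CLEAN_SWAPPED,	/* C: in memory, clean, swap copy valid */
	ANON_SWAPPED_OUT,	/* D: not in memory, swap copy valid */
	ANON_FAULTED_CLEAN,	/* E: faulted back in clean, swap copy valid */
};

/*
 * On a zoned swap device, a write fault in state D or E moves the page
 * back to B: the old swap slot goes stale, and a later swapout appends
 * a new copy at the zone's write pointer rather than overwriting the
 * previously allocated location.
 */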
Hannes Reinecke Oct. 15, 2019, 3:22 p.m. UTC | #7
On 10/15/19 5:09 PM, Matthew Wilcox wrote:
> On Tue, Oct 15, 2019 at 03:48:47PM +0200, Hannes Reinecke wrote:
>> On 10/15/19 1:35 PM, Matthew Wilcox wrote:
>>> On Tue, Oct 15, 2019 at 01:38:27PM +0900, Naohiro Aota wrote:
>>>> A zoned block device consists of a number of zones. Zones are
>>>> either conventional and accepting random writes or sequential and
>>>> requiring that writes be issued in LBA order from each zone write
>>>> pointer position. For the write restriction, zoned block devices are
>>>> not suitable for a swap device. Disallow swapon on them.
>>>
>>> That's unfortunate.  I wonder what it would take to make the swap code be
>>> suitable for zoned devices.  It might even perform better on conventional
>>> drives since swapout would be a large linear write.  Swapin would be a
>>> fragmented, seeky set of reads, but this would seem like an excellent
>>> university project.
>>
>> The main problem I'm seeing is the eviction of pages from swap.
>> While swapin is easy (as you can do random access on reads), evicting pages
>> from cache becomes extremely tricky as you can only delete entire zones.
>> So how do we mark pages within zones as being stale?
>> Or can we modify the swapin code to always swap in an entire zone and
>> discard it immediately?
> 
> I thought zones were too big to swap in all at once?  What's a typical
> zone size these days?  (the answer looks very different if a zone is 1MB
> or if it's 1GB)
> 
Currently things have settled at 256MB, though that might be increased for ZNS.
But GB would be the upper limit I'd assume.

> Fundamentally an allocated anonymous page has 5 states:
> 
> A: In memory, not written to swap (allocated)
> B: In memory, dirty, not written to swap (app modifies page)
> C: In memory, clean, written to swap (kernel decides to write it)
> D: Not in memory, written to swap (kernel decides to reuse the memory)
> E: In memory, clean, written to swap (app faults it back in for read)
> 
> We currently have a sixth state which is a page that has previously been
> written to swap but has been redirtied by the app.  It will be written
> back to the allocated location the next time it's targeted for writeout.
> 
> That would have to change; since we can't do random writes, pages would
> transition from states D or E back to B.  Swapping out a page that has
> previously been swapped will now mean appending to the tail of the swap,
> not writing in place.
> 
> So the swap code will now need to keep track of which pages are still
> in use in storage and will need to be relocated once we decide to reuse
> the zone.  Not an insurmountable task, but not entirely trivial.
> 
Precisely my worries.
However, clearing stuff is _really_ fast (you just have to reset the write
pointer, which is kept in NVRAM on the device), which might help a bit.

> There'd be some other gunk to deal with around handling badblocks.
> Those are currently stored in page 1, so adding new ones would be
> a rewrite of that block.
> 
Bah. Can't we make that optional?
We really only need badblocks when writing to crappy media (or NV-DIMM
:-). Zoned devices _will_ have proper error recovery in place, so the
only time where badblocks might be used is when the device is
essentially dead ;-)

Cheers,

Hannes
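
For reference, reclaiming a fully stale zone would be a single zone-management
command. Assuming the blkdev_reset_zones() helper as it existed in kernels of
that era, the reset itself might look like this sketch:

#include <linux/blkdev.h>

/*
 * Sketch only: reset one swap zone's write pointer so the zone can be
 * reused for new swapout writes. zone_sector and zone_len would come
 * from the swap code's own per-zone bookkeeping.
 */
static int swap_reset_zone(struct block_device *bdev, sector_t zone_sector,
			   sector_t zone_len)
{
	return blkdev_reset_zones(bdev, zone_sector, zone_len, GFP_KERNEL);
}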

Patch

diff --git a/mm/swapfile.c b/mm/swapfile.c
index dab43523afdd..a9da20739017 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2887,6 +2887,8 @@  static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
 		error = set_blocksize(p->bdev, PAGE_SIZE);
 		if (error < 0)
 			return error;
+		if (blk_queue_is_zoned(p->bdev->bd_queue))
+			return -EINVAL;
 		p->flags |= SWP_BLKDEV;
 	} else if (S_ISREG(inode->i_mode)) {
 		p->bdev = inode->i_sb->s_bdev;