
[0/4] xfs: enable per-type quota timers and warn limits

Message ID: 333ea747-8b45-52ae-006e-a1804e14de32@redhat.com

Message

Eric Sandeen Feb. 8, 2020, 9:09 p.m. UTC
Quota timers are currently a mess.  Right now, at mount time,
we pick up the first enabled type and use that for the single
timer in mp->m_quotainfo.

Interestingly, if we set a timer on a different type, /that/
gets set into mp->m_quotainfo where it stays in effect until
the next mount, when we pick the first enabled type again.

We actually write the timer values to each type of quota inode,
but only one is ever in force, according to the interesting behavior
described above.

This series allows quota timers & warn limits to be independently
set and enforced for each quota type.

All the action is in the last patch, the first 3 are cleanups to
help.

-Eric

Comments

Eric Sandeen Feb. 8, 2020, 9:12 p.m. UTC | #1
<it would be fair to ask me for an xfstest for this> ;)

I'll try to get to that next week, but figured I'd float the series for review.

Thanks,
-Eric
Darrick J. Wong Feb. 11, 2020, 3:43 p.m. UTC | #2
On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
> Quota timers are currently a mess.  Right now, at mount time,
> we pick up the first enabled type and use that for the single
> timer in mp->m_quotainfo.
> 
> Interestingly, if we set a timer on a different type, /that/
> gets set into mp->m_quotainfo where it stays in effect until
> the next mount, when we pick the first enabled type again.
> 
> We actually write the timer values to each type of quota inode,
> but only one is ever in force, according to the interesting behavior
> described above.
> 
> This series allows quota timers & warn limits to be independently
> set and enforced for each quota type.

Is there a test case demonstrating this behavior?

Also, what do the other filesystems (well ok ext4) do?

--D

> All the action is in the last patch, the first 3 are cleanups to
> help.
> 
> -Eric
>
Eric Sandeen Feb. 11, 2020, 3:52 p.m. UTC | #3
On 2/11/20 9:43 AM, Darrick J. Wong wrote:
> On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
>> Quota timers are currently a mess.  Right now, at mount time,
>> we pick up the first enabled type and use that for the single
>> timer in mp->m_quotainfo.
>>
>> Interestingly, if we set a timer on a different type, /that/
>> gets set into mp->m_quotainfo where it stays in effect until
>> the next mount, when we pick the first enabled type again.
>>
>> We actually write the timer values to each type of quota inode,
>> but only one is ever in force, according to the interesting behavior
>> described above.
>>
>> This series allows quota timers & warn limits to be independently
>> set and enforced for each quota type.
> 
> Is there a test case demonstrating this behavior?

I do still owe this a testcase.
Planned to do it yesterday and then life happened, as it does.

> Also, what do the other filesystems (well ok ext4) do?

I'll let you know after I write the testcase ;)

-Eric
Eric Sandeen Feb. 11, 2020, 9:40 p.m. UTC | #4
On 2/11/20 9:52 AM, Eric Sandeen wrote:
> 
> 
> On 2/11/20 9:43 AM, Darrick J. Wong wrote:
>> On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
>>> Quota timers are currently a mess.  Right now, at mount time,
>>> we pick up the first enabled type and use that for the single
>>> timer in mp->m_quotainfo.
>>>
>>> Interestingly, if we set a timer on a different type, /that/
>>> gets set into mp->m_quotainfo where it stays in effect until
>>> the next mount, when we pick the first enabled type again.
>>>
>>> We actually write the timer values to each type of quota inode,
>>> but only one is ever in force, according to the interesting behavior
>>> described above.
>>>
>>> This series allows quota timers & warn limits to be independently
>>> set and enforced for each quota type.
>>
>> Is there a test case demonstrating this behavior?
> 
> I do still owe this a testcase.
> Planned to do it yesterday and then life happened, as it does.
> 
>> Also, what do the other filesystems (well ok ext4) do?
> 
> I'll let you know after I write the testcase ;)

Spoiler: it works as expected on ext4.

set user  block & inode grace to 2 & 4 minutes;
set group block & inode grace to 6 & 8 minutes:

# setquota -t -u 120 240  mnt-ext4/
# setquota -t -g 360 480  mnt-ext4/
# setquota -t -u 120 240  mnt-xfs/
# setquota -t -g 360 480  mnt-xfs/

report user & group grace limits:

# repquota -ug mnt-ext4/ | grep "Report\|^Block"
*** Report for user quotas on device /dev/loop1
Block grace time: 00:02; Inode grace time: 00:04
*** Report for group quotas on device /dev/loop1
Block grace time: 00:06; Inode grace time: 00:08

ext4 shows all four different grace periods

# repquota -ug mnt-xfs/ | grep "Report\|^Block"
*** Report for user quotas on device /dev/loop2
Block grace time: 00:02; Inode grace time: 00:04
*** Report for group quotas on device /dev/loop2
Block grace time: 00:02; Inode grace time: 00:04

xfs shows the same grace periods for user & group, despite setting different
values for each.

-Eric
Zorro Lang Feb. 18, 2020, 4:49 a.m. UTC | #5
On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
> Quota timers are currently a mess.  Right now, at mount time,
> we pick up the first enabled type and use that for the single
> timer in mp->m_quotainfo.
> 
> Interestingly, if we set a timer on a different type, /that/
> gets set into mp->m_quotainfo where it stays in effect until
> the next mount, when we pick the first enabled type again.
> 
> We actually write the timer values to each type of quota inode,
> but only one is ever in force, according to the interesting behavior
> described above.
> 
> This series allows quota timers & warn limits to be independently
> set and enforced for each quota type.
> 
> All the action is in the last patch, the first 3 are cleanups to
> help.

This patchset looks good, but the test results for xfs quota timers do
not look right. Please check the emails (test cases) I sent to fstests@:
  [PATCH 1/2] generic: per-type quota timers set/get test
  [PATCH 2/2] generic: test per-type quota softlimit enforcement timeout

Why does xfs produce such different test results? Please feel free to tell
me if the test case is wrong.

Thanks,
Zorro

> 
> -Eric
>
Eric Sandeen Feb. 18, 2020, 9:07 p.m. UTC | #6
On 2/17/20 10:49 PM, Zorro Lang wrote:
> On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
>> Quota timers are currently a mess.  Right now, at mount time,
>> we pick up the first enabled type and use that for the single
>> timer in mp->m_quotainfo.
>>
>> Interestingly, if we set a timer on a different type, /that/
>> gets set into mp->m_quotainfo where it stays in effect until
>> the next mount, when we pick the first enabled type again.
>>
>> We actually write the timer values to each type of quota inode,
>> but only one is ever in force, according to the interesting behavior
>> described above.
>>
>> This series allows quota timers & warn limits to be independently
>> set and enforced for each quota type.
>>
>> All the action is in the last patch, the first 3 are cleanups to
>> help.
> 
> This patchset looks good, but the test results for xfs quota timers do
> not look right. Please check the emails (test cases) I sent to fstests@:
>   [PATCH 1/2] generic: per-type quota timers set/get test
>   [PATCH 2/2] generic: test per-type quota softlimit enforcement timeout
> 
> Why does xfs produce such different test results? Please feel free to tell
> me if the test case is wrong.

Thanks for this, Zorro, I'll take a look.

-Eric

> Thanks,
> Zorro
> 
>>
>> -Eric
>>
>