Message ID | 333ea747-8b45-52ae-006e-a1804e14de32@redhat.com (mailing list archive) |
---|---|
Series | xfs: enable per-type quota timers and warn limits |
<it would be fair to ask me for an xfstest for this> ;)

I'll try to get to that next week, but figured I'd float the series
for review.

Thanks,
-Eric
On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
> Quota timers are currently a mess. Right now, at mount time,
> we pick up the first enabled type and use that for the single
> timer in mp->m_quotainfo.
>
> Interestingly, if we set a timer on a different type, /that/
> gets set into mp->m_quotainfo where it stays in effect until
> the next mount, when we pick the first enabled type again.
>
> We actually write the timer values to each type of quota inode,
> but only one is ever in force, according to the interesting behavior
> described above.
>
> This series allows quota timers & warn limits to be independently
> set and enforced for each quota type.

Is there a test case demonstrating this behavior?

Also, what do the other filesystems (well ok ext4) do?

--D

> All the action is in the last patch, the first 3 are cleanups to
> help.
>
> -Eric
>
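The mount-time pick-one behavior described in the quoted cover letter can be sketched with a toy shell model; the variable names (standing in for mp->m_quotainfo and the on-disk per-type timers) and the grace values are invented for illustration, not actual XFS code.

```shell
# Toy model of the behavior quoted above -- NOT actual XFS code.
# Per-type grace timers exist on disk, but at mount time only the
# first enabled type's value ends up in the single in-core timer.
user_btime=120      # hypothetical on-disk user block grace (seconds)
group_btime=360     # hypothetical on-disk group block grace (seconds)
enabled_types="user group"

# Old behavior: the first enabled type's timer is used for everything.
for t in $enabled_types; do
    eval "m_quotainfo_btime=\$${t}_btime"
    break
done
echo "single in-core timer (used for all types): ${m_quotainfo_btime}s"

# Per-type behavior (what the series implements): one timer per type.
for t in $enabled_types; do
    eval "val=\$${t}_btime"
    echo "per-type timer ($t): ${val}s"
done
```

In the shared-timer model, group quota would be enforced with the user grace period (120s) even though 360s was written to the group quota inode.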
On 2/11/20 9:43 AM, Darrick J. Wong wrote:
> On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
>> Quota timers are currently a mess. Right now, at mount time,
>> we pick up the first enabled type and use that for the single
>> timer in mp->m_quotainfo.
>>
>> Interestingly, if we set a timer on a different type, /that/
>> gets set into mp->m_quotainfo where it stays in effect until
>> the next mount, when we pick the first enabled type again.
>>
>> We actually write the timer values to each type of quota inode,
>> but only one is ever in force, according to the interesting behavior
>> described above.
>>
>> This series allows quota timers & warn limits to be independently
>> set and enforced for each quota type.
>
> Is there a test case demonstrating this behavior?

I do still owe this a testcase.
Planned to do it yesterday and then life happened, as it does.

> Also, what do the other filesystems (well ok ext4) do?

I'll let you know after I write the testcase ;)

-Eric
On 2/11/20 9:52 AM, Eric Sandeen wrote:
> On 2/11/20 9:43 AM, Darrick J. Wong wrote:
>> On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
>>> Quota timers are currently a mess. Right now, at mount time,
>>> we pick up the first enabled type and use that for the single
>>> timer in mp->m_quotainfo.
>>>
>>> Interestingly, if we set a timer on a different type, /that/
>>> gets set into mp->m_quotainfo where it stays in effect until
>>> the next mount, when we pick the first enabled type again.
>>>
>>> We actually write the timer values to each type of quota inode,
>>> but only one is ever in force, according to the interesting behavior
>>> described above.
>>>
>>> This series allows quota timers & warn limits to be independently
>>> set and enforced for each quota type.
>>
>> Is there a test case demonstrating this behavior?
>
> I do still owe this a testcase.
> Planned to do it yesterday and then life happened, as it does.
>
>> Also, what do the other filesystems (well ok ext4) do?
>
> I'll let you know after I write the testcase ;)

Spoiler: it works as expected on ext4.

Set user block & inode grace to 2 & 4 minutes; set group block & inode
grace to 6 & 8 minutes:

# setquota -t -u 120 240 mnt-ext4/
# setquota -t -g 360 480 mnt-ext4/
# setquota -t -u 120 240 mnt-xfs/
# setquota -t -g 360 480 mnt-xfs/

Report user & group grace limits:

# repquota -ug mnt-ext4/ | grep "Report\|^Block"
*** Report for user quotas on device /dev/loop1
Block grace time: 00:02; Inode grace time: 00:04
*** Report for group quotas on device /dev/loop1
Block grace time: 00:06; Inode grace time: 00:08

ext4 shows all four different grace periods.

# repquota -ug mnt-xfs/ | grep "Report\|^Block"
*** Report for user quotas on device /dev/loop2
Block grace time: 00:02; Inode grace time: 00:04
*** Report for group quotas on device /dev/loop2
Block grace time: 00:02; Inode grace time: 00:04

xfs shows the same grace periods for user & group, despite setting
different values for each.

-Eric
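A comparison like the one above can be checked without eyeballing it by counting distinct grace values in the repquota report. This is only a sketch: the here-doc reproduces the ext4 sample quoted in this mail (device names and values come from that example), and on a live system one would pipe `repquota -ug <mnt>` in instead.

```shell
# Count distinct grace periods in a repquota-style report.  The here-doc
# is the ext4 sample from the mail above; 4 distinct values means the
# per-type timers are honored, 2 means one type clobbered the other's.
report=$(cat <<'EOF'
*** Report for user quotas on device /dev/loop1
Block grace time: 00:02; Inode grace time: 00:04
*** Report for group quotas on device /dev/loop1
Block grace time: 00:06; Inode grace time: 00:08
EOF
)
distinct=$(printf '%s\n' "$report" \
    | grep -o '[0-9][0-9]:[0-9][0-9]' \
    | sort -u | wc -l)
echo "distinct grace periods: $distinct"
```

Running the same check over the xfs sample above would yield 2, which is exactly the failure mode the series fixes.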
On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
> Quota timers are currently a mess. Right now, at mount time,
> we pick up the first enabled type and use that for the single
> timer in mp->m_quotainfo.
>
> Interestingly, if we set a timer on a different type, /that/
> gets set into mp->m_quotainfo where it stays in effect until
> the next mount, when we pick the first enabled type again.
>
> We actually write the timer values to each type of quota inode,
> but only one is ever in force, according to the interesting behavior
> described above.
>
> This series allows quota timers & warn limits to be independently
> set and enforced for each quota type.
>
> All the action is in the last patch, the first 3 are cleanups to
> help.

This patchset looks good, but the xfs quota timer testing does not look
so good. Please check the emails (test cases) I sent to fstests@:

[PATCH 1/2] generic: per-type quota timers set/get test
[PATCH 2/2] generic: test per-type quota softlimit enforcement timeout

Why does xfs have such different test results? Please feel free to tell
me if the cases are wrong.

Thanks,
Zorro

>
> -Eric
>
On 2/17/20 10:49 PM, Zorro Lang wrote:
> On Sat, Feb 08, 2020 at 03:09:19PM -0600, Eric Sandeen wrote:
>> Quota timers are currently a mess. Right now, at mount time,
>> we pick up the first enabled type and use that for the single
>> timer in mp->m_quotainfo.
>>
>> Interestingly, if we set a timer on a different type, /that/
>> gets set into mp->m_quotainfo where it stays in effect until
>> the next mount, when we pick the first enabled type again.
>>
>> We actually write the timer values to each type of quota inode,
>> but only one is ever in force, according to the interesting behavior
>> described above.
>>
>> This series allows quota timers & warn limits to be independently
>> set and enforced for each quota type.
>>
>> All the action is in the last patch, the first 3 are cleanups to
>> help.
>
> This patchset looks good, but the xfs quota timer testing does not look
> so good. Please check the emails (test cases) I sent to fstests@:
> [PATCH 1/2] generic: per-type quota timers set/get test
> [PATCH 2/2] generic: test per-type quota softlimit enforcement timeout
>
> Why does xfs have such different test results? Please feel free to tell
> me if the cases are wrong.

Thanks for this, Zorro, I'll take a look.

-Eric

> Thanks,
> Zorro
>
>>
>> -Eric
>>