Message ID: 20200528135444.11508-1-schatzberg.dan@gmail.com (mailing list archive)
Series: Charge loop device i/o to issuing cgroup
On Thu, May 28, 2020 at 6:55 AM Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
>
> Much of the discussion about this has died down. There's been a
> concern raised that we could generalize infrastructure across loop,
> md, etc. This may be possible, in the future, but it isn't clear to me
> how this would look like. I'm inclined to fix the existing issue with
> loop devices now (this is a problem we hit at FB) and address
> consolidation with other cases if and when those need to be addressed.

What's the status of this series?
On Thu, Aug 20, 2020 at 10:06:44AM -0700, Shakeel Butt wrote:
> On Thu, May 28, 2020 at 6:55 AM Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
> >
> > Much of the discussion about this has died down. There's been a
> > concern raised that we could generalize infrastructure across loop,
> > md, etc. This may be possible, in the future, but it isn't clear to me
> > how this would look like. I'm inclined to fix the existing issue with
> > loop devices now (this is a problem we hit at FB) and address
> > consolidation with other cases if and when those need to be addressed.
>
> What's the status of this series?

Thanks for reminding me about this. I haven't got any further
feedback. I'll bug Jens to take a look and see if he has any concerns
and if not send a rebased version.
On 8/21/20 9:04 AM, Dan Schatzberg wrote:
> On Thu, Aug 20, 2020 at 10:06:44AM -0700, Shakeel Butt wrote:
>> On Thu, May 28, 2020 at 6:55 AM Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
>>> [...]
>>
>> What's the status of this series?
>
> Thanks for reminding me about this. I haven't got any further
> feedback. I'll bug Jens to take a look and see if he has any concerns
> and if not send a rebased version.

No immediate concerns, I think rebasing and sending one against the
current tree is probably a good idea. Then we can hopefully get it
queued up for 5.10.
On Fri, Aug 21, 2020 at 11:04:05AM -0400, Dan Schatzberg wrote:
> On Thu, Aug 20, 2020 at 10:06:44AM -0700, Shakeel Butt wrote:
> > On Thu, May 28, 2020 at 6:55 AM Dan Schatzberg <schatzberg.dan@gmail.com> wrote:
> > > [...]
> >
> > What's the status of this series?
>
> Thanks for reminding me about this. I haven't got any further
> feedback. I'll bug Jens to take a look and see if he has any concerns
> and if not send a rebased version.

Just as a note, I stole a patch from this series called
"mm: support nesting memalloc_use_memcg()" to use for the bpf memory
accounting. I rewrote the commit log and rebased to the tot with some
trivial changes.

I just sent it upstream:
https://lore.kernel.org/bpf/20200821150134.2581465-1-guro@fb.com/T/#md7edb6b5b940cee1c4d15e3cef17aa8b07328c2e

It looks like we need it for two independent sub-systems, so I wonder
if we want to route it first through the mm tree as a standalone patch?

Thanks!
On Fri, Aug 21, 2020 at 9:02 AM Roman Gushchin <guro@fb.com> wrote:
>
> [...]
>
> Just as a note, I stole a patch from this series called
> "mm: support nesting memalloc_use_memcg()" to use for the bpf memory accounting.
> I rewrote the commit log and rebased to the tot with some trivial changes.
>
> I just sent it upstream:
> https://lore.kernel.org/bpf/20200821150134.2581465-1-guro@fb.com/T/#md7edb6b5b940cee1c4d15e3cef17aa8b07328c2e
>
> It looks like we need it for two independent sub-systems, so I wonder
> if we want to route it first through the mm tree as a standalone patch?

Another way is to push that patch to 5.9-rc2 linus tree, so both block
and mm branches for 5.10 will have it. (Not sure if that's ok.)
On Fri, Aug 21, 2020 at 09:27:56AM -0700, Shakeel Butt wrote:
> On Fri, Aug 21, 2020 at 9:02 AM Roman Gushchin <guro@fb.com> wrote:
> >
> > [...]
> >
> > It looks like we need it for two independent sub-systems, so I wonder
> > if we want to route it first through the mm tree as a standalone patch?
>
> Another way is to push that patch to 5.9-rc2 linus tree, so both block
> and mm branches for 5.10 will have it. (Not sure if that's ok.)

Ok, it looks like the patch provides a generally useful API enhancement.
And we do have at least two potential use cases for it.
Let me send it as a standalone patch to linux-mm@.

Btw, Shakeel, what do you think of s/memalloc_use_memcg()/set_active_memcg() ?

And thank you for reviews!

Roman
On Fri, Aug 21, 2020 at 1:05 PM Roman Gushchin <guro@fb.com> wrote:
>
> [...]
>
> Ok, it looks like the patch provides a generally useful API enhancement.
> And we do have at least two potential use cases for it.
> Let me send it as a standalone patch to linux-mm@.
>
> Btw, Shakeel, what do you think of s/memalloc_use_memcg()/set_active_memcg() ?

I am fine with it.