Message ID | 20241113055345.494856-1-neilb@suse.de (mailing list archive)
---|---
Series | nfsd: allocate/free session-based DRC slots on demand
Neil, I'm curious if this work relates to:

https://bugzilla.linux-nfs.org/show_bug.cgi?id=375
https://lore.kernel.org/all/CAPt2mGMZh9=Vwcqjh0J4XoTu3stOnKwswdzApL4wCA_usOFV_g@mail.gmail.com

As my thread described, we currently use NFSv3 for our high-latency NFS
re-export cases simply because it performs far better for parallel client
operations.

When you use re-exporting to serve many clients, you are in effect taking
all of those client operations and funnelling them through a single client
(the re-export server), which then becomes a bottleneck. Add any kind of
latency on top (>10ms) and NFSD_CACHE_SIZE_SLOTS_PER_SESSION (32) becomes
a major NFSv4 bottleneck for that single client (the re-export server).

We also used your "VFS: support parallel updates in the one directory"
patches for similar reasons, up until I could no longer port them to newer
kernels (my kernel code munging skills are not sufficient!).

Sorry to spam the thread if I am misinterpreting what this patch set is
all about.

Daire

On Wed, 13 Nov 2024 at 05:54, NeilBrown <neilb@suse.de> wrote:
>
> This patch set aims to allocate session-based DRC slots on demand, and
> free them when not in use, or when memory is tight.
>
> I've tested with NFSD_MAX_UNUSED_SLOTS set to 1 so that freeing is
> overly aggressive, and with lots of printks, and it seems to do the right
> thing, though memory pressure has never freed anything - I think you
> need several clients with a non-trivial number of slots allocated before
> the thresholds in the shrinker code will trigger any freeing.
>
> I haven't made use of the CB_RECALL_SLOT callback. I'm not sure how
> useful that is. There are certainly cases where simply setting the
> target in a SEQUENCE reply might not be enough, but I doubt they are
> very common. You would need a session to be completely idle, with the
> last request received on it indicating that lots of slots were still in
> use.
>
> Currently we allocate slots one at a time when the last available slot
> was used by the client, and only if a NOWAIT allocation can succeed. It
> is possible that this isn't quite aggressive enough. When performing a
> lot of writeback it can be useful to have lots of slots, but memory
> pressure is also likely to build up on the server, so GFP_NOWAIT is likely
> to fail. Maybe occasionally using a firmer request (outside the
> spinlock) would be justified.
>
> We free slots when the number of unused slots passes some threshold -
> currently 6 (because ... why not). Possibly a hysteresis should be
> added so we don't free unused slots for at least N seconds.
>
> When the shrinker wants to apply pressure we remove slots equally from
> all sessions. Maybe there should be some proportionality, but that would
> be more complex and I'm not sure it would gain much. Slot 0 can never
> be freed, of course.
>
> I'm very interested to see what people think of the overall approach,
> and of the specifics of the code.
>
> Thanks,
> NeilBrown
>
>
> [PATCH 1/4] nfsd: remove artificial limits on the session-based DRC
> [PATCH 2/4] nfsd: allocate new session-based DRC slots on demand.
> [PATCH 3/4] nfsd: free unused session-DRC slots
> [PATCH 4/4] nfsd: add shrinker to reduce number of slots allocated
>
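[Editor's note: for readers unfamiliar with the mechanism under discussion, below is a minimal sketch of the "grow one slot at a time, only if a GFP_NOWAIT allocation succeeds" idea from the cover letter. The demo_* names, the fixed-size slot table, and the cap DEMO_MAX_SLOTS are illustrative assumptions, not the data structures or constants the series actually touches.]

```c
#include <linux/slab.h>
#include <linux/spinlock.h>

#define DEMO_MAX_SLOTS 1024		/* arbitrary cap for the sketch only */

struct demo_slot {
	void *cached_reply;		/* cached response for this DRC slot */
	size_t reply_len;
};

struct demo_session {
	spinlock_t lock;
	unsigned int nr_slots;		/* slots currently allocated */
	unsigned int slots_in_use;	/* slots holding an in-flight request */
	struct demo_slot *slots[DEMO_MAX_SLOTS];	/* slot 0 always exists */
};

/*
 * Called when a SEQUENCE request has just consumed the last allocated
 * slot.  Growth is opportunistic: if the GFP_NOWAIT allocation fails,
 * the table simply stays at its current size.
 */
static void demo_maybe_grow(struct demo_session *ses)
{
	struct demo_slot *slot;

	if (ses->slots_in_use < ses->nr_slots ||
	    ses->nr_slots >= DEMO_MAX_SLOTS)
		return;			/* free slots remain, or at the cap */

	slot = kzalloc(sizeof(*slot), GFP_NOWAIT);
	if (!slot)
		return;			/* memory is tight: do not grow */

	spin_lock(&ses->lock);
	ses->slots[ses->nr_slots++] = slot;
	spin_unlock(&ses->lock);
}
```

The opportunistic GFP_NOWAIT allocation means growth simply stops under memory pressure, which is the trade-off Neil raises (and Chuck questions below) about whether an occasional firmer GFP_KERNEL request, made outside the spinlock, would be justified.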
> On Nov 13, 2024, at 12:38 AM, NeilBrown <neilb@suse.de> wrote:
>
> This patch set aims to allocate session-based DRC slots on demand, and
> free them when not in use, or when memory is tight.
>
> I've tested with NFSD_MAX_UNUSED_SLOTS set to 1 so that freeing is
> overly aggressive, and with lots of printks, and it seems to do the right
> thing, though memory pressure has never freed anything - I think you
> need several clients with a non-trivial number of slots allocated before
> the thresholds in the shrinker code will trigger any freeing.

Can you describe your test set-up? Generally a system with less than 4GB
of memory can trigger shrinkers pretty easily. If we never see the
mechanism being triggered due to memory exhaustion, then I wonder whether
the additional complexity is adding substantial value.

> I haven't made use of the CB_RECALL_SLOT callback. I'm not sure how
> useful that is. There are certainly cases where simply setting the
> target in a SEQUENCE reply might not be enough, but I doubt they are
> very common. You would need a session to be completely idle, with the
> last request received on it indicating that lots of slots were still in
> use.
>
> Currently we allocate slots one at a time when the last available slot
> was used by the client, and only if a NOWAIT allocation can succeed. It
> is possible that this isn't quite aggressive enough. When performing a
> lot of writeback it can be useful to have lots of slots, but memory
> pressure is also likely to build up on the server, so GFP_NOWAIT is likely
> to fail. Maybe occasionally using a firmer request (outside the
> spinlock) would be justified.

I'm wondering why GFP_NOWAIT is used here, and I admit I'm not strongly
familiar with the code or mechanism. Why not always use GFP_KERNEL?

> We free slots when the number of unused slots passes some threshold -
> currently 6 (because ... why not). Possibly a hysteresis should be
> added so we don't free unused slots for at least N seconds.

Generally, freeing unused resources is un-Linux-like. :-) Can you provide
a rationale for why this is needed?

> When the shrinker wants to apply pressure we remove slots equally from
> all sessions. Maybe there should be some proportionality, but that would
> be more complex and I'm not sure it would gain much. Slot 0 can never
> be freed, of course.
>
> I'm very interested to see what people think of the overall approach,
> and of the specifics of the code.
>
> Thanks,
> NeilBrown
>
>
> [PATCH 1/4] nfsd: remove artificial limits on the session-based DRC
> [PATCH 2/4] nfsd: allocate new session-based DRC slots on demand.
> [PATCH 3/4] nfsd: free unused session-DRC slots
> [PATCH 4/4] nfsd: add shrinker to reduce number of slots allocated
>

--
Chuck Lever
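[Editor's note: the shrinker behaviour described in patch 4/4 ("remove slots equally from all sessions", never freeing slot 0) can be pictured with a scan callback along the lines below. This is a hedged, standalone sketch: the demo_* structures, the global session list, and the 64-slot array are invented for illustration, and only the generic struct shrinker scan prototype is taken from the kernel API; it is not the actual nfsd code in the series.]

```c
#include <linux/shrinker.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Minimal stand-ins for a per-session slot table; not the nfsd types. */
struct demo_session {
	struct list_head se_list;	/* linkage on the global session list */
	spinlock_t lock;
	unsigned int nr_slots;		/* slots currently allocated */
	void *slots[64];		/* slot 0 is never freed */
};

static LIST_HEAD(demo_sessions);
static DEFINE_SPINLOCK(demo_sessions_lock);

/*
 * Scan callback in the usual struct shrinker style: walk every session
 * and free at most one slot from each, so memory pressure is spread
 * evenly across sessions rather than emptying one session first.
 */
static unsigned long demo_slots_scan(struct shrinker *shrink,
				     struct shrink_control *sc)
{
	struct demo_session *ses;
	unsigned long freed = 0;

	spin_lock(&demo_sessions_lock);
	list_for_each_entry(ses, &demo_sessions, se_list) {
		spin_lock(&ses->lock);
		if (ses->nr_slots > 1) {	/* always keep slot 0 */
			kfree(ses->slots[--ses->nr_slots]);
			ses->slots[ses->nr_slots] = NULL;
			freed++;
		}
		spin_unlock(&ses->lock);
		if (freed >= sc->nr_to_scan)
			break;
	}
	spin_unlock(&demo_sessions_lock);

	return freed ? freed : SHRINK_STOP;
}
```

Taking at most one slot from each session per pass is what spreads the pressure evenly across sessions, and the nr_slots > 1 check preserves slot 0, which the cover letter notes can never be freed.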