Message ID | 20221030220203.31210-1-axboe@kernel.dk (mailing list archive)
---|---
Series | Add support for epoll min_wait
On Sun, Oct 30, 2022 at 6:02 PM Jens Axboe <axboe@kernel.dk> wrote: > > Hi, > > tldr - we saw a 6-7% CPU reduction with this patch. See patch 6 for > full numbers. > > This adds support for EPOLL_CTL_MIN_WAIT, which allows setting a minimum > time that epoll_wait() should wait for events on a given epoll context. > Some justification and numbers are in patch 6, patches 1-5 are really > just prep patches or cleanups. > > Sending this out to get some input on the API, basically. This is > obviously a per-context type of operation in this patchset, which isn't > necessarily ideal for any use case. Questions to be debated: > > 1) Would we want this to be available through epoll_wait() directly? > That would allow this to be done on a per-epoll_wait() basis, rather > than be tied to the specific context. > > 2) If the answer to #1 is yes, would we still want EPOLL_CTL_MIN_WAIT? > > I think there are pros and cons to both, and perhaps the answer to both is > "yes". There are some benefits to doing this at epoll setup time, for > example - it nicely isolates it to that part rather than needing to be > done dynamically everytime epoll_wait() is called. This also helps the > application code, as it can turn off any busy'ness tracking based on if > the setup accepted EPOLL_CTL_MIN_WAIT or not. > > Anyway, tossing this out there as it yielded quite good results in some > initial testing, we're running more of it. Sending out a v3 now since > someone reported that nonblock issue which is annoying. Hoping to get some > more discussion this time around, or at least some... My main question is whether the cycle gains justify the code complexity and runtime cost in all other epoll paths. Syscall overhead is quite dependent on architecture and things like KPTI. Indeed, I was also wondering whether an extra timeout arg to epoll_wait would give the same feature with less side effects. Then no need for that new ctrl API.
On 11/2/22 11:46 AM, Willem de Bruijn wrote: > On Sun, Oct 30, 2022 at 6:02 PM Jens Axboe <axboe@kernel.dk> wrote: >> >> Hi, >> >> tldr - we saw a 6-7% CPU reduction with this patch. See patch 6 for >> full numbers. >> >> This adds support for EPOLL_CTL_MIN_WAIT, which allows setting a minimum >> time that epoll_wait() should wait for events on a given epoll context. >> Some justification and numbers are in patch 6, patches 1-5 are really >> just prep patches or cleanups. >> >> Sending this out to get some input on the API, basically. This is >> obviously a per-context type of operation in this patchset, which isn't >> necessarily ideal for any use case. Questions to be debated: >> >> 1) Would we want this to be available through epoll_wait() directly? >> That would allow this to be done on a per-epoll_wait() basis, rather >> than be tied to the specific context. >> >> 2) If the answer to #1 is yes, would we still want EPOLL_CTL_MIN_WAIT? >> >> I think there are pros and cons to both, and perhaps the answer to both is >> "yes". There are some benefits to doing this at epoll setup time, for >> example - it nicely isolates it to that part rather than needing to be >> done dynamically everytime epoll_wait() is called. This also helps the >> application code, as it can turn off any busy'ness tracking based on if >> the setup accepted EPOLL_CTL_MIN_WAIT or not. >> >> Anyway, tossing this out there as it yielded quite good results in some >> initial testing, we're running more of it. Sending out a v3 now since >> someone reported that nonblock issue which is annoying. Hoping to get some >> more discussion this time around, or at least some... > > My main question is whether the cycle gains justify the code > complexity and runtime cost in all other epoll paths. > > Syscall overhead is quite dependent on architecture and things like KPTI. Definitely interested in experiences from other folks, but what other runtime costs do you see compared to the baseline? > Indeed, I was also wondering whether an extra timeout arg to > epoll_wait would give the same feature with less side effects. Then no > need for that new ctrl API. That was my main question in this posting - what's the best api? The current one, epoll_wait() addition, or both? The nice thing about the current one is that it's easy to integrate into existing use cases, as the decision to do batching on the userspace side or by utilizing this feature can be kept in the setup path. If you do epoll_wait() and get -1/EINVAL or false success on older kernels, then that's either a loss because of thinking it worked, or a fast path need to check for this specifically every time you call epoll_wait() rather than just at init/setup time. But this is very much the question I already posed and wanted to discuss...
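For illustration, a minimal sketch of the setup-path flow described above: set the minimum wait once when the epoll context is created, and use the result to decide whether userspace still needs its own batching. The EPOLL_CTL_MIN_WAIT opcode value, the microseconds-in-event-data encoding, and the ignored fd argument are assumptions based on this thread, not a merged ABI.

```c
/* Sketch only: the EPOLL_CTL_MIN_WAIT opcode value, the usec-in-data
 * encoding and the ignored fd argument are assumptions from this thread,
 * not a documented ABI. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>

#ifndef EPOLL_CTL_MIN_WAIT
#define EPOLL_CTL_MIN_WAIT 4	/* hypothetical opcode from the patchset */
#endif

/* Ask the kernel for a minimum batching window on this epoll context.
 * Returns true if accepted; false means we are on an older kernel and the
 * application should keep its userspace-side batching (if any). */
static bool epoll_set_min_wait(int epfd, uint64_t usec)
{
	struct epoll_event ev;

	memset(&ev, 0, sizeof(ev));
	ev.data.u64 = usec;	/* assumed encoding of the delay */

	if (epoll_ctl(epfd, EPOLL_CTL_MIN_WAIT, -1, &ev) == 0)
		return true;

	/* Any failure (EINVAL/EBADF on older kernels) means "unsupported". */
	return false;
}

int main(void)
{
	int epfd = epoll_create1(0);

	if (epfd < 0) {
		perror("epoll_create1");
		return 1;
	}

	if (epoll_set_min_wait(epfd, 200))
		printf("kernel-side min_wait enabled\n");
	else
		printf("min_wait unavailable, using userspace batching\n");
	return 0;
}
```

On current kernels this always takes the fallback path; the point is only that the probe happens once at setup rather than on every epoll_wait() call.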
On Wed, Nov 2, 2022 at 1:54 PM Jens Axboe <axboe@kernel.dk> wrote: > > On 11/2/22 11:46 AM, Willem de Bruijn wrote: > > On Sun, Oct 30, 2022 at 6:02 PM Jens Axboe <axboe@kernel.dk> wrote: > >> > >> Hi, > >> > >> tldr - we saw a 6-7% CPU reduction with this patch. See patch 6 for > >> full numbers. > >> > >> This adds support for EPOLL_CTL_MIN_WAIT, which allows setting a minimum > >> time that epoll_wait() should wait for events on a given epoll context. > >> Some justification and numbers are in patch 6, patches 1-5 are really > >> just prep patches or cleanups. > >> > >> Sending this out to get some input on the API, basically. This is > >> obviously a per-context type of operation in this patchset, which isn't > >> necessarily ideal for any use case. Questions to be debated: > >> > >> 1) Would we want this to be available through epoll_wait() directly? > >> That would allow this to be done on a per-epoll_wait() basis, rather > >> than be tied to the specific context. > >> > >> 2) If the answer to #1 is yes, would we still want EPOLL_CTL_MIN_WAIT? > >> > >> I think there are pros and cons to both, and perhaps the answer to both is > >> "yes". There are some benefits to doing this at epoll setup time, for > >> example - it nicely isolates it to that part rather than needing to be > >> done dynamically everytime epoll_wait() is called. This also helps the > >> application code, as it can turn off any busy'ness tracking based on if > >> the setup accepted EPOLL_CTL_MIN_WAIT or not. > >> > >> Anyway, tossing this out there as it yielded quite good results in some > >> initial testing, we're running more of it. Sending out a v3 now since > >> someone reported that nonblock issue which is annoying. Hoping to get some > >> more discussion this time around, or at least some... > > > > My main question is whether the cycle gains justify the code > > complexity and runtime cost in all other epoll paths. > > > > Syscall overhead is quite dependent on architecture and things like KPTI. > > Definitely interested in experiences from other folks, but what other > runtime costs do you see compared to the baseline? Nothing specific. Possible cost from added branches and moving local variables into structs with possibly cold cachelines. > > Indeed, I was also wondering whether an extra timeout arg to > > epoll_wait would give the same feature with less side effects. Then no > > need for that new ctrl API. > > That was my main question in this posting - what's the best api? The > current one, epoll_wait() addition, or both? The nice thing about the > current one is that it's easy to integrate into existing use cases, as > the decision to do batching on the userspace side or by utilizing this > feature can be kept in the setup path. If you do epoll_wait() and get > -1/EINVAL or false success on older kernels, then that's either a loss > because of thinking it worked, or a fast path need to check for this > specifically every time you call epoll_wait() rather than just at > init/setup time. > > But this is very much the question I already posed and wanted to > discuss... I see the value in being able to detect whether the feature is present. But a pure epoll_wait implementation seems a lot simpler to me, and more elegant: timeout is an argument to epoll_wait already. A new epoll_wait variant would have to be a new system call, so it would be easy to infer support for the feature. > > -- > Jens Axboe
On 11/2/22 5:09 PM, Willem de Bruijn wrote: > On Wed, Nov 2, 2022 at 1:54 PM Jens Axboe <axboe@kernel.dk> wrote: >> >> On 11/2/22 11:46 AM, Willem de Bruijn wrote: >>> On Sun, Oct 30, 2022 at 6:02 PM Jens Axboe <axboe@kernel.dk> wrote: >>>> >>>> Hi, >>>> >>>> tldr - we saw a 6-7% CPU reduction with this patch. See patch 6 for >>>> full numbers. >>>> >>>> This adds support for EPOLL_CTL_MIN_WAIT, which allows setting a minimum >>>> time that epoll_wait() should wait for events on a given epoll context. >>>> Some justification and numbers are in patch 6, patches 1-5 are really >>>> just prep patches or cleanups. >>>> >>>> Sending this out to get some input on the API, basically. This is >>>> obviously a per-context type of operation in this patchset, which isn't >>>> necessarily ideal for any use case. Questions to be debated: >>>> >>>> 1) Would we want this to be available through epoll_wait() directly? >>>> That would allow this to be done on a per-epoll_wait() basis, rather >>>> than be tied to the specific context. >>>> >>>> 2) If the answer to #1 is yes, would we still want EPOLL_CTL_MIN_WAIT? >>>> >>>> I think there are pros and cons to both, and perhaps the answer to both is >>>> "yes". There are some benefits to doing this at epoll setup time, for >>>> example - it nicely isolates it to that part rather than needing to be >>>> done dynamically everytime epoll_wait() is called. This also helps the >>>> application code, as it can turn off any busy'ness tracking based on if >>>> the setup accepted EPOLL_CTL_MIN_WAIT or not. >>>> >>>> Anyway, tossing this out there as it yielded quite good results in some >>>> initial testing, we're running more of it. Sending out a v3 now since >>>> someone reported that nonblock issue which is annoying. Hoping to get some >>>> more discussion this time around, or at least some... >>> >>> My main question is whether the cycle gains justify the code >>> complexity and runtime cost in all other epoll paths. >>> >>> Syscall overhead is quite dependent on architecture and things like KPTI. >> >> Definitely interested in experiences from other folks, but what other >> runtime costs do you see compared to the baseline? > > Nothing specific. Possible cost from added branches and moving local > variables into structs with possibly cold cachelines. > >>> Indeed, I was also wondering whether an extra timeout arg to >>> epoll_wait would give the same feature with less side effects. Then no >>> need for that new ctrl API. >> >> That was my main question in this posting - what's the best api? The >> current one, epoll_wait() addition, or both? The nice thing about the >> current one is that it's easy to integrate into existing use cases, as >> the decision to do batching on the userspace side or by utilizing this >> feature can be kept in the setup path. If you do epoll_wait() and get >> -1/EINVAL or false success on older kernels, then that's either a loss >> because of thinking it worked, or a fast path need to check for this >> specifically every time you call epoll_wait() rather than just at >> init/setup time. >> >> But this is very much the question I already posed and wanted to >> discuss... > > I see the value in being able to detect whether the feature is present. > > But a pure epoll_wait implementation seems a lot simpler to me, and > more elegant: timeout is an argument to epoll_wait already. > > A new epoll_wait variant would have to be a new system call, so it > would be easy to infer support for the feature. 
Right, but it'd still mean that you'd need to check this in the fast path in the app vs being able to do it at init time. Might there be merit to doing both? From the conversion that we tried, the CTL variant definitely made things easier to port. The new syscall would enable per-call delays, however. There might be some merit to that, though I do think that max_events + min_time is how you'd control batching, and that's suitably set in the context itself for most use cases.
On Wed, Nov 2, 2022 at 7:42 PM Jens Axboe <axboe@kernel.dk> wrote: > > On 11/2/22 5:09 PM, Willem de Bruijn wrote: > > On Wed, Nov 2, 2022 at 1:54 PM Jens Axboe <axboe@kernel.dk> wrote: > >> > >> On 11/2/22 11:46 AM, Willem de Bruijn wrote: > >>> On Sun, Oct 30, 2022 at 6:02 PM Jens Axboe <axboe@kernel.dk> wrote: > >>>> > >>>> Hi, > >>>> > >>>> tldr - we saw a 6-7% CPU reduction with this patch. See patch 6 for > >>>> full numbers. > >>>> > >>>> This adds support for EPOLL_CTL_MIN_WAIT, which allows setting a minimum > >>>> time that epoll_wait() should wait for events on a given epoll context. > >>>> Some justification and numbers are in patch 6, patches 1-5 are really > >>>> just prep patches or cleanups. > >>>> > >>>> Sending this out to get some input on the API, basically. This is > >>>> obviously a per-context type of operation in this patchset, which isn't > >>>> necessarily ideal for any use case. Questions to be debated: > >>>> > >>>> 1) Would we want this to be available through epoll_wait() directly? > >>>> That would allow this to be done on a per-epoll_wait() basis, rather > >>>> than be tied to the specific context. > >>>> > >>>> 2) If the answer to #1 is yes, would we still want EPOLL_CTL_MIN_WAIT? > >>>> > >>>> I think there are pros and cons to both, and perhaps the answer to both is > >>>> "yes". There are some benefits to doing this at epoll setup time, for > >>>> example - it nicely isolates it to that part rather than needing to be > >>>> done dynamically everytime epoll_wait() is called. This also helps the > >>>> application code, as it can turn off any busy'ness tracking based on if > >>>> the setup accepted EPOLL_CTL_MIN_WAIT or not. > >>>> > >>>> Anyway, tossing this out there as it yielded quite good results in some > >>>> initial testing, we're running more of it. Sending out a v3 now since > >>>> someone reported that nonblock issue which is annoying. Hoping to get some > >>>> more discussion this time around, or at least some... > >>> > >>> My main question is whether the cycle gains justify the code > >>> complexity and runtime cost in all other epoll paths. > >>> > >>> Syscall overhead is quite dependent on architecture and things like KPTI. > >> > >> Definitely interested in experiences from other folks, but what other > >> runtime costs do you see compared to the baseline? > > > > Nothing specific. Possible cost from added branches and moving local > > variables into structs with possibly cold cachelines. > > > >>> Indeed, I was also wondering whether an extra timeout arg to > >>> epoll_wait would give the same feature with less side effects. Then no > >>> need for that new ctrl API. > >> > >> That was my main question in this posting - what's the best api? The > >> current one, epoll_wait() addition, or both? The nice thing about the > >> current one is that it's easy to integrate into existing use cases, as > >> the decision to do batching on the userspace side or by utilizing this > >> feature can be kept in the setup path. If you do epoll_wait() and get > >> -1/EINVAL or false success on older kernels, then that's either a loss > >> because of thinking it worked, or a fast path need to check for this > >> specifically every time you call epoll_wait() rather than just at > >> init/setup time. > >> > >> But this is very much the question I already posed and wanted to > >> discuss... > > > > I see the value in being able to detect whether the feature is present. 
> > > > But a pure epoll_wait implementation seems a lot simpler to me, and > > more elegant: timeout is an argument to epoll_wait already. > > > > A new epoll_wait variant would have to be a new system call, so it > > would be easy to infer support for the feature. > > Right, but it'd still mean that you'd need to check this in the fast > path in the app vs being able to do it at init time. A process could call the new syscall with timeout 0 at init time to learn whether the feature is supported. > Might there be > merit to doing both? From the conversion that we tried, the CTL variant > definitely made things easier to port. The new syscall would make enable > per-call delays however. There might be some merit to that, though I do > think that max_events + min_time is how you'd control batching anything > and that's suitably set in the context itself for most use cases. I'm surprised a CTL variant is easier to port. An epoll_pwait3 with an extra argument only needs to pass that argument to do_epoll_wait. FWIW, when adding nsec resolution I initially opted for an init-based approach, passing a new flag to epoll_create1. Feedback then was that it was odd to have one syscall affect the behavior of another. The final version just added a new epoll_pwait2 with timespec.
On 11/2/22 5:51 PM, Willem de Bruijn wrote: > On Wed, Nov 2, 2022 at 7:42 PM Jens Axboe <axboe@kernel.dk> wrote: >> >> On 11/2/22 5:09 PM, Willem de Bruijn wrote: >>> On Wed, Nov 2, 2022 at 1:54 PM Jens Axboe <axboe@kernel.dk> wrote: >>>> >>>> On 11/2/22 11:46 AM, Willem de Bruijn wrote: >>>>> On Sun, Oct 30, 2022 at 6:02 PM Jens Axboe <axboe@kernel.dk> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> tldr - we saw a 6-7% CPU reduction with this patch. See patch 6 for >>>>>> full numbers. >>>>>> >>>>>> This adds support for EPOLL_CTL_MIN_WAIT, which allows setting a minimum >>>>>> time that epoll_wait() should wait for events on a given epoll context. >>>>>> Some justification and numbers are in patch 6, patches 1-5 are really >>>>>> just prep patches or cleanups. >>>>>> >>>>>> Sending this out to get some input on the API, basically. This is >>>>>> obviously a per-context type of operation in this patchset, which isn't >>>>>> necessarily ideal for any use case. Questions to be debated: >>>>>> >>>>>> 1) Would we want this to be available through epoll_wait() directly? >>>>>> That would allow this to be done on a per-epoll_wait() basis, rather >>>>>> than be tied to the specific context. >>>>>> >>>>>> 2) If the answer to #1 is yes, would we still want EPOLL_CTL_MIN_WAIT? >>>>>> >>>>>> I think there are pros and cons to both, and perhaps the answer to both is >>>>>> "yes". There are some benefits to doing this at epoll setup time, for >>>>>> example - it nicely isolates it to that part rather than needing to be >>>>>> done dynamically everytime epoll_wait() is called. This also helps the >>>>>> application code, as it can turn off any busy'ness tracking based on if >>>>>> the setup accepted EPOLL_CTL_MIN_WAIT or not. >>>>>> >>>>>> Anyway, tossing this out there as it yielded quite good results in some >>>>>> initial testing, we're running more of it. Sending out a v3 now since >>>>>> someone reported that nonblock issue which is annoying. Hoping to get some >>>>>> more discussion this time around, or at least some... >>>>> >>>>> My main question is whether the cycle gains justify the code >>>>> complexity and runtime cost in all other epoll paths. >>>>> >>>>> Syscall overhead is quite dependent on architecture and things like KPTI. >>>> >>>> Definitely interested in experiences from other folks, but what other >>>> runtime costs do you see compared to the baseline? >>> >>> Nothing specific. Possible cost from added branches and moving local >>> variables into structs with possibly cold cachelines. >>> >>>>> Indeed, I was also wondering whether an extra timeout arg to >>>>> epoll_wait would give the same feature with less side effects. Then no >>>>> need for that new ctrl API. >>>> >>>> That was my main question in this posting - what's the best api? The >>>> current one, epoll_wait() addition, or both? The nice thing about the >>>> current one is that it's easy to integrate into existing use cases, as >>>> the decision to do batching on the userspace side or by utilizing this >>>> feature can be kept in the setup path. If you do epoll_wait() and get >>>> -1/EINVAL or false success on older kernels, then that's either a loss >>>> because of thinking it worked, or a fast path need to check for this >>>> specifically every time you call epoll_wait() rather than just at >>>> init/setup time. >>>> >>>> But this is very much the question I already posed and wanted to >>>> discuss... >>> >>> I see the value in being able to detect whether the feature is present. 
>>> >>> But a pure epoll_wait implementation seems a lot simpler to me, and >>> more elegant: timeout is an argument to epoll_wait already. >>> >>> A new epoll_wait variant would have to be a new system call, so it >>> would be easy to infer support for the feature. >> >> Right, but it'd still mean that you'd need to check this in the fast >> path in the app vs being able to do it at init time. > > A process could call the new syscall with timeout 0 at init time to > learn whether the feature is supported. That is pretty clunky, though... It'd work, but not a very elegant API. >> Might there be >> merit to doing both? From the conversion that we tried, the CTL variant >> definitely made things easier to port. The new syscall would make enable >> per-call delays however. There might be some merit to that, though I do >> think that max_events + min_time is how you'd control batching anything >> and that's suitably set in the context itself for most use cases. > > I'm surprised a CTL variant is easier to port. An epoll_pwait3 with an > extra argument only needs to pass that argument to do_epoll_wait. It's literally adding two lines of code, that's it. A new syscall is way worse both in terms of the userspace and kernel side for archs, and for changing call sites in the app. > FWIW, when adding nsec resolution I initially opted for an init-based > approach, passing a new flag to epoll_create1. Feedback then was that > it was odd to have one syscall affect the behavior of another. The > final version just added a new epoll_pwait2 with timespec. I'm fine with just doing a pure syscall variant too, it was my original plan. Only changed it to allow for easier experimentation and adoption, and based on the fact that most use cases would likely use a fixed value per context anyway. I think it'd be a shame to drop the ctl, unless there's strong arguments against it. I'm quite happy to add a syscall variant too, that's not a big deal and would be a minor addition. Patch 6 should probably cut out the ctl addition and leave that for a patch 7, and then a patch 8 for adding a syscall.
>> FWIW, when adding nsec resolution I initially opted for an init-based >> approach, passing a new flag to epoll_create1. Feedback then was that >> it was odd to have one syscall affect the behavior of another. The >> final version just added a new epoll_pwait2 with timespec. > > I'm fine with just doing a pure syscall variant too, it was my original > plan. Only changed it to allow for easier experimentation and adoption, > and based on the fact that most use cases would likely use a fixed value > per context anyway. > > I think it'd be a shame to drop the ctl, unless there's strong arguments > against it. I'm quite happy to add a syscall variant too, that's not a > big deal and would be a minor addition. Patch 6 should probably cut out > the ctl addition and leave that for a patch 7, and then a patch 8 for > adding a syscall. I split the ctl patch out from the core change, and then took a look at doing a syscall variant too. But there are a few complications there... It would seem to make the most sense to build this on top of the newest epoll wait syscall, epoll_pwait2(). But we're already at the max number of arguments there... Arguably pwait2 should've been converted to use some kind of versioned struct instead. I'm going to take a stab at pwait3 with that kind of interface.
On Sat, Nov 5, 2022 at 1:39 PM Jens Axboe <axboe@kernel.dk> wrote: > > >> FWIW, when adding nsec resolution I initially opted for an init-based > >> approach, passing a new flag to epoll_create1. Feedback then was that > >> it was odd to have one syscall affect the behavior of another. The > >> final version just added a new epoll_pwait2 with timespec. > > > > I'm fine with just doing a pure syscall variant too, it was my original > > plan. Only changed it to allow for easier experimentation and adoption, > > and based on the fact that most use cases would likely use a fixed value > > per context anyway. > > > > I think it'd be a shame to drop the ctl, unless there's strong arguments > > against it. I'm quite happy to add a syscall variant too, that's not a > > big deal and would be a minor addition. Patch 6 should probably cut out > > the ctl addition and leave that for a patch 7, and then a patch 8 for > > adding a syscall. > I split the ctl patch out from the core change, and then took a look at > doing a syscall variant too. But there are a few complications there... > It would seem to make the most sense to build this on top of the newest > epoll wait syscall, epoll_pwait2(). But we're already at the max number > of arguments there... > > Arguably pwait2 should've been converted to use some kind of versioned > struct instead. I'm going to take a stab at pwait3 with that kind of > interface. Don't convert to a syscall approach based solely on my feedback. It would be good to hear from others. At a high level, I'm somewhat uncomfortable merging two syscalls for behavior that already works, just to save half the syscall overhead. There is no shortage of calls that may make some sense for a workload to merge. Is the quoted 6-7% cpu cycle reduction due to saving one SYSENTER/SYSEXIT (as the high resolution timer wake-up will be the same), or am I missing something more fundamental?
On 11/5/22 12:05 PM, Willem de Bruijn wrote: > On Sat, Nov 5, 2022 at 1:39 PM Jens Axboe <axboe@kernel.dk> wrote: >> >>>> FWIW, when adding nsec resolution I initially opted for an init-based >>>> approach, passing a new flag to epoll_create1. Feedback then was that >>>> it was odd to have one syscall affect the behavior of another. The >>>> final version just added a new epoll_pwait2 with timespec. >>> >>> I'm fine with just doing a pure syscall variant too, it was my original >>> plan. Only changed it to allow for easier experimentation and adoption, >>> and based on the fact that most use cases would likely use a fixed value >>> per context anyway. >>> >>> I think it'd be a shame to drop the ctl, unless there's strong arguments >>> against it. I'm quite happy to add a syscall variant too, that's not a >>> big deal and would be a minor addition. Patch 6 should probably cut out >>> the ctl addition and leave that for a patch 7, and then a patch 8 for >>> adding a syscall. >> I split the ctl patch out from the core change, and then took a look at >> doing a syscall variant too. But there are a few complications there... >> It would seem to make the most sense to build this on top of the newest >> epoll wait syscall, epoll_pwait2(). But we're already at the max number >> of arguments there... >> >> Arguably pwait2 should've been converted to use some kind of versioned >> struct instead. I'm going to take a stab at pwait3 with that kind of >> interface. > > Don't convert to a syscall approach based solely on my feedback. It > would be good to hear from others. It's not just based on your feedback, if you read the original cover letter, then that is the question that is posed in terms of API - ctl to modify it, new syscall, or both? So figured I should at least try and see what the syscall would look like. > At a high level, I'm somewhat uncomfortable merging two syscalls for > behavior that already works, just to save half the syscall overhead. > There is no shortage of calls that may make some sense for a workload > to merge. Is the quoted 6-7% cpu cycle reduction due to saving one > SYSENTER/SYSEXIT (as the high resolution timer wake-up will be the > same), or am I missing something more fundamental? No, it's not really related to saving a single syscall, and you'd potentially save more than just one as well. If we look at the two extremes of applications, one will be low load and you're handling probably just 1 event per loop. Not really interesting. At the other end, you're fully loaded, and by the time you check for events, you have 'maxevents' (or close to) available. That obviously reduces system calls, but more importantly, it also allows the application to get some batching effects from processing these events. In the medium range, there's enough processing to react pretty quickly to events coming in, and you then end up doing just 1 event (or close to that). To overcome that, we have some applications that detect this medium range and do an artificial sleep before calling epoll_wait(). That was a nice effiency win for them. But we can do this a lot more efficiently in the kernel. That was the idea behind this, and the initial results from TAO (which does that sleep hack) proved it to be more than worthwhile. Syscall reduction is one thing, improved batching another, and just as importanly is sleep+wakeup reductions.
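The "artificial sleep" workaround mentioned above might look roughly like the sketch below: under medium load, the application pauses briefly before calling epoll_wait() so the next call returns a batch instead of a single event. The threshold heuristic and the 200 usec delay are illustrative values, not taken from TAO or from the thread.

```c
/* Userspace batching hack, roughly as described above. All thresholds are
 * illustrative; real applications tune these based on observed load. */
#include <stdio.h>
#include <sys/epoll.h>
#include <time.h>

#define MAXEVENTS	128
#define BATCH_DELAY_US	200L

static void short_sleep(long usec)
{
	struct timespec ts = { .tv_sec = 0, .tv_nsec = usec * 1000L };

	nanosleep(&ts, NULL);
}

void event_loop(int epfd)
{
	struct epoll_event events[MAXEVENTS];
	int last_n = 0;

	for (;;) {
		/* Heuristic for the "medium load" regime: the previous
		 * iteration returned only a handful of events, so sleep a
		 * little and let more of them accumulate in the kernel. */
		if (last_n > 0 && last_n < MAXEVENTS / 4)
			short_sleep(BATCH_DELAY_US);

		last_n = epoll_wait(epfd, events, MAXEVENTS, -1);
		if (last_n < 0) {
			perror("epoll_wait");
			return;
		}

		for (int i = 0; i < last_n; i++) {
			/* handle events[i] ... */
		}
	}
}
```

The min_wait approach moves this delay into the kernel, where it can be cut short as soon as maxevents are available instead of always paying the full sleep.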
From: Jens Axboe
> Sent: 05 November 2022 17:39
>
>>> FWIW, when adding nsec resolution I initially opted for an init-based approach, passing a new flag to epoll_create1. Feedback then was that it was odd to have one syscall affect the behavior of another. The final version just added a new epoll_pwait2 with timespec.
>>
>> I'm fine with just doing a pure syscall variant too, it was my original plan. Only changed it to allow for easier experimentation and adoption, and based on the fact that most use cases would likely use a fixed value per context anyway.
>>
>> I think it'd be a shame to drop the ctl, unless there's strong arguments against it. I'm quite happy to add a syscall variant too, that's not a big deal and would be a minor addition. Patch 6 should probably cut out the ctl addition and leave that for a patch 7, and then a patch 8 for adding a syscall.
>
> I split the ctl patch out from the core change, and then took a look at doing a syscall variant too. But there are a few complications there... It would seem to make the most sense to build this on top of the newest epoll wait syscall, epoll_pwait2(). But we're already at the max number of arguments there...
>
> Arguably pwait2 should've been converted to use some kind of versioned struct instead. I'm going to take a stab at pwait3 with that kind of interface.

Adding an extra copy_from_user() adds a measurable overhead to a system call - so you really don't want to do it unless absolutely necessary.

I was wondering if you actually need two timeout parameters? Could you just use a single bit (I presume one is available) to request that the timeout be restarted when the first message arrives, and the syscall then returns when either the timer expires or the full number of events has been returned.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
On Sat, Nov 5, 2022 at 2:46 PM Jens Axboe <axboe@kernel.dk> wrote: > > On 11/5/22 12:05 PM, Willem de Bruijn wrote: > > On Sat, Nov 5, 2022 at 1:39 PM Jens Axboe <axboe@kernel.dk> wrote: > >> > >>>> FWIW, when adding nsec resolution I initially opted for an init-based > >>>> approach, passing a new flag to epoll_create1. Feedback then was that > >>>> it was odd to have one syscall affect the behavior of another. The > >>>> final version just added a new epoll_pwait2 with timespec. > >>> > >>> I'm fine with just doing a pure syscall variant too, it was my original > >>> plan. Only changed it to allow for easier experimentation and adoption, > >>> and based on the fact that most use cases would likely use a fixed value > >>> per context anyway. > >>> > >>> I think it'd be a shame to drop the ctl, unless there's strong arguments > >>> against it. I'm quite happy to add a syscall variant too, that's not a > >>> big deal and would be a minor addition. Patch 6 should probably cut out > >>> the ctl addition and leave that for a patch 7, and then a patch 8 for > >>> adding a syscall. > >> I split the ctl patch out from the core change, and then took a look at > >> doing a syscall variant too. But there are a few complications there... > >> It would seem to make the most sense to build this on top of the newest > >> epoll wait syscall, epoll_pwait2(). But we're already at the max number > >> of arguments there... > >> > >> Arguably pwait2 should've been converted to use some kind of versioned > >> struct instead. I'm going to take a stab at pwait3 with that kind of > >> interface. > > > > Don't convert to a syscall approach based solely on my feedback. It > > would be good to hear from others. > > It's not just based on your feedback, if you read the original cover > letter, then that is the question that is posed in terms of API - ctl to > modify it, new syscall, or both? So figured I should at least try and > see what the syscall would look like. > > > At a high level, I'm somewhat uncomfortable merging two syscalls for > > behavior that already works, just to save half the syscall overhead. > > There is no shortage of calls that may make some sense for a workload > > to merge. Is the quoted 6-7% cpu cycle reduction due to saving one > > SYSENTER/SYSEXIT (as the high resolution timer wake-up will be the > > same), or am I missing something more fundamental? > > No, it's not really related to saving a single syscall, and you'd > potentially save more than just one as well. If we look at the two > extremes of applications, one will be low load and you're handling > probably just 1 event per loop. Not really interesting. At the other > end, you're fully loaded, and by the time you check for events, you have > 'maxevents' (or close to) available. That obviously reduces system > calls, but more importantly, it also allows the application to get some > batching effects from processing these events. > > In the medium range, there's enough processing to react pretty quickly > to events coming in, and you then end up doing just 1 event (or close to > that). To overcome that, we have some applications that detect this > medium range and do an artificial sleep before calling epoll_wait(). > That was a nice effiency win for them. But we can do this a lot more > efficiently in the kernel. That was the idea behind this, and the > initial results from TAO (which does that sleep hack) proved it to be > more than worthwhile. 
> Syscall reduction is one thing, improved batching another, and just as importantly sleep+wakeup reductions.

Thanks for the context.

So this is akin to interrupt moderation in network interfaces. Would it make sense to wait for timeout or nr of events, whichever comes first, similar to rx_usecs/rx_frames, instead of an unconditional sleep at the start?
On 11/7/22 6:25 AM, Willem de Bruijn wrote: > On Sat, Nov 5, 2022 at 2:46 PM Jens Axboe <axboe@kernel.dk> wrote: >> >> On 11/5/22 12:05 PM, Willem de Bruijn wrote: >>> On Sat, Nov 5, 2022 at 1:39 PM Jens Axboe <axboe@kernel.dk> wrote: >>>> >>>>>> FWIW, when adding nsec resolution I initially opted for an init-based >>>>>> approach, passing a new flag to epoll_create1. Feedback then was that >>>>>> it was odd to have one syscall affect the behavior of another. The >>>>>> final version just added a new epoll_pwait2 with timespec. >>>>> >>>>> I'm fine with just doing a pure syscall variant too, it was my original >>>>> plan. Only changed it to allow for easier experimentation and adoption, >>>>> and based on the fact that most use cases would likely use a fixed value >>>>> per context anyway. >>>>> >>>>> I think it'd be a shame to drop the ctl, unless there's strong arguments >>>>> against it. I'm quite happy to add a syscall variant too, that's not a >>>>> big deal and would be a minor addition. Patch 6 should probably cut out >>>>> the ctl addition and leave that for a patch 7, and then a patch 8 for >>>>> adding a syscall. >>>> I split the ctl patch out from the core change, and then took a look at >>>> doing a syscall variant too. But there are a few complications there... >>>> It would seem to make the most sense to build this on top of the newest >>>> epoll wait syscall, epoll_pwait2(). But we're already at the max number >>>> of arguments there... >>>> >>>> Arguably pwait2 should've been converted to use some kind of versioned >>>> struct instead. I'm going to take a stab at pwait3 with that kind of >>>> interface. >>> >>> Don't convert to a syscall approach based solely on my feedback. It >>> would be good to hear from others. >> >> It's not just based on your feedback, if you read the original cover >> letter, then that is the question that is posed in terms of API - ctl to >> modify it, new syscall, or both? So figured I should at least try and >> see what the syscall would look like. >> >>> At a high level, I'm somewhat uncomfortable merging two syscalls for >>> behavior that already works, just to save half the syscall overhead. >>> There is no shortage of calls that may make some sense for a workload >>> to merge. Is the quoted 6-7% cpu cycle reduction due to saving one >>> SYSENTER/SYSEXIT (as the high resolution timer wake-up will be the >>> same), or am I missing something more fundamental? >> >> No, it's not really related to saving a single syscall, and you'd >> potentially save more than just one as well. If we look at the two >> extremes of applications, one will be low load and you're handling >> probably just 1 event per loop. Not really interesting. At the other >> end, you're fully loaded, and by the time you check for events, you have >> 'maxevents' (or close to) available. That obviously reduces system >> calls, but more importantly, it also allows the application to get some >> batching effects from processing these events. >> >> In the medium range, there's enough processing to react pretty quickly >> to events coming in, and you then end up doing just 1 event (or close to >> that). To overcome that, we have some applications that detect this >> medium range and do an artificial sleep before calling epoll_wait(). >> That was a nice effiency win for them. But we can do this a lot more >> efficiently in the kernel. That was the idea behind this, and the >> initial results from TAO (which does that sleep hack) proved it to be >> more than worthwhile. 
Syscall reduction is one thing, improved batching >> another, and just as importanly is sleep+wakeup reductions. > > Thanks for the context. > > So this is akin to interrupt moderation in network interfaces. Would > it make sense to wait for timeout or nr of events, whichever comes > first, similar to rx_usecs/rx_frames. Instead of an unconditional > sleep at the start. There's no unconditional sleep at the start with my patches, not sure where you are getting that from. You already have 'nr of events', that's the maxevents being passed in. If nr_available >= maxevents, then no sleep will take place. We did debate doing a minevents kind of thing as well, but the time based metric is more usable.
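To make the semantics spelled out above concrete, here is a rough userspace emulation of them (not the kernel implementation): the window is measured from entry, a full batch returns immediately with no sleep, and only if nothing arrives at all does the call fall back to the normal timeout. It uses epoll_pwait2() for the sub-millisecond waits, so it assumes Linux 5.11+ and a glibc that wraps it (2.35+).

```c
/* Userspace emulation of the described min_wait semantics, for illustration
 * only. If maxevents are already ready, the first call returns them at once
 * and no waiting happens. */
#define _GNU_SOURCE
#include <sys/epoll.h>
#include <time.h>

static long elapsed_us(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1000000L +
	       (now.tv_nsec - start->tv_nsec) / 1000L;
}

int epoll_wait_min(int epfd, struct epoll_event *events, int maxevents,
		   int timeout_ms, long min_wait_us)
{
	struct timespec start, slice;
	int n = 0, ret;

	clock_gettime(CLOCK_MONOTONIC, &start);

	/* Collect events until min_wait_us has elapsed since entry or the
	 * batch is full; there is no unconditional sleep up front. */
	while (n < maxevents) {
		long left = min_wait_us - elapsed_us(&start);

		if (left <= 0)
			break;
		slice.tv_sec = left / 1000000L;
		slice.tv_nsec = (left % 1000000L) * 1000L;
		ret = epoll_pwait2(epfd, events + n, maxevents - n, &slice, NULL);
		if (ret <= 0)
			break;
		n += ret;
	}

	/* Nothing at all arrived during the minimum window: fall back to the
	 * caller's normal timeout, like a plain epoll_wait() would. */
	if (n == 0)
		return epoll_wait(epfd, events, maxevents, timeout_ms);

	return n;
}
```

This is only an approximation - the real patch integrates the minimum wait with the existing timeout handling rather than stacking a second wait on top - but it captures the behaviour discussed here.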
Hi Jens,

NICs and storage controllers have interrupt mitigation/coalescing mechanisms that are similar.

NVMe has an Aggregation Time (timeout) and an Aggregation Threshold (counter) value. When a completion occurs, the device waits until the timeout or until the completion counter value is reached.

If I've read the code correctly, min_wait is computed at the beginning of epoll_wait(2). NVMe's Aggregation Time is computed from the first completion.

It makes me wonder which approach is more useful for applications. With the Aggregation Time approach applications can control how much extra latency is added. What do you think about that approach?

Stefan
On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: > Hi Jens, > NICs and storage controllers have interrupt mitigation/coalescing > mechanisms that are similar. Yep > NVMe has an Aggregation Time (timeout) and an Aggregation Threshold > (counter) value. When a completion occurs, the device waits until the > timeout or until the completion counter value is reached. > > If I've read the code correctly, min_wait is computed at the beginning > of epoll_wait(2). NVMe's Aggregation Time is computed from the first > completion. > > It makes me wonder which approach is more useful for applications. With > the Aggregation Time approach applications can control how much extra > latency is added. What do you think about that approach? We only tested the current approach, which is time noted from entry, not from when the first event arrives. I suspect the nvme approach is better suited to the hw side, the epoll timeout helps ensure that we batch within xx usec rather than xx usec + whatever the delay until the first one arrives. Which is why it's handled that way currently. That gives you a fixed batch latency.
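The difference between the two timing models compared above fits in a couple of lines; this is just an illustration of the arithmetic, with all names invented:

```c
/* Entry-based window (this patchset, as described above): the batching
 * deadline is fixed relative to when epoll_wait() was entered, giving a
 * fixed batch latency regardless of when the first event shows up. */
long entry_based_deadline(long t_enter_us, long min_wait_us)
{
	return t_enter_us + min_wait_us;
}

/* First-completion-based window (NVMe-style aggregation): the clock only
 * starts at the first event, so the extra latency added to that event is
 * bounded by the aggregation time. */
long first_event_deadline(long t_first_event_us, long aggregation_time_us)
{
	return t_first_event_us + aggregation_time_us;
}
```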
On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: > On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: > > Hi Jens, > > NICs and storage controllers have interrupt mitigation/coalescing > > mechanisms that are similar. > > Yep > > > NVMe has an Aggregation Time (timeout) and an Aggregation Threshold > > (counter) value. When a completion occurs, the device waits until the > > timeout or until the completion counter value is reached. > > > > If I've read the code correctly, min_wait is computed at the beginning > > of epoll_wait(2). NVMe's Aggregation Time is computed from the first > > completion. > > > > It makes me wonder which approach is more useful for applications. With > > the Aggregation Time approach applications can control how much extra > > latency is added. What do you think about that approach? > > We only tested the current approach, which is time noted from entry, not > from when the first event arrives. I suspect the nvme approach is better > suited to the hw side, the epoll timeout helps ensure that we batch > within xx usec rather than xx usec + whatever the delay until the first > one arrives. Which is why it's handled that way currently. That gives > you a fixed batch latency. min_wait is fine when the goal is just maximizing throughput without any latency targets. The min_wait approach makes it hard to set a useful upper bound on latency because unlucky requests that complete early experience much more latency than requests that complete later. Stefan
On 11/8/22 7:00 AM, Stefan Hajnoczi wrote: > On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: >> On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: >>> Hi Jens, >>> NICs and storage controllers have interrupt mitigation/coalescing >>> mechanisms that are similar. >> >> Yep >> >>> NVMe has an Aggregation Time (timeout) and an Aggregation Threshold >>> (counter) value. When a completion occurs, the device waits until the >>> timeout or until the completion counter value is reached. >>> >>> If I've read the code correctly, min_wait is computed at the beginning >>> of epoll_wait(2). NVMe's Aggregation Time is computed from the first >>> completion. >>> >>> It makes me wonder which approach is more useful for applications. With >>> the Aggregation Time approach applications can control how much extra >>> latency is added. What do you think about that approach? >> >> We only tested the current approach, which is time noted from entry, not >> from when the first event arrives. I suspect the nvme approach is better >> suited to the hw side, the epoll timeout helps ensure that we batch >> within xx usec rather than xx usec + whatever the delay until the first >> one arrives. Which is why it's handled that way currently. That gives >> you a fixed batch latency. > > min_wait is fine when the goal is just maximizing throughput without any > latency targets. That's not true at all, I think you're in different time scales than this would be used for. > The min_wait approach makes it hard to set a useful upper bound on > latency because unlucky requests that complete early experience much > more latency than requests that complete later. As mentioned in the cover letter or the main patch, this is most useful for the medium load kind of scenarios. For high load, the min_wait time ends up not mattering because you will hit maxevents first anyway. For the testing that we did, the target was 2-300 usec, and 200 usec was used for the actual test. Depending on what the kind of traffic the server is serving, that's usually not much of a concern. From your reply, I'm guessing you're thinking of much higher min_wait numbers. I don't think those would make sense. If your rate of arrival is low enough that min_wait needs to be high to make a difference, then the load is low enough anyway that it doesn't matter. Hence I'd argue that it is indeed NOT hard to set a useful upper bound on latency, because that is very much what min_wait is. I'm happy to argue merits of one approach over another, but keep in mind that this particular approach was not pulled out of thin air AND it has actually been tested and verified successfully on a production workload. This isn't a hypothetical benchmark kind of setup.
On Tue, Nov 08, 2022 at 07:09:30AM -0700, Jens Axboe wrote: > On 11/8/22 7:00 AM, Stefan Hajnoczi wrote: > > On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: > >> On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: > >>> Hi Jens, > >>> NICs and storage controllers have interrupt mitigation/coalescing > >>> mechanisms that are similar. > >> > >> Yep > >> > >>> NVMe has an Aggregation Time (timeout) and an Aggregation Threshold > >>> (counter) value. When a completion occurs, the device waits until the > >>> timeout or until the completion counter value is reached. > >>> > >>> If I've read the code correctly, min_wait is computed at the beginning > >>> of epoll_wait(2). NVMe's Aggregation Time is computed from the first > >>> completion. > >>> > >>> It makes me wonder which approach is more useful for applications. With > >>> the Aggregation Time approach applications can control how much extra > >>> latency is added. What do you think about that approach? > >> > >> We only tested the current approach, which is time noted from entry, not > >> from when the first event arrives. I suspect the nvme approach is better > >> suited to the hw side, the epoll timeout helps ensure that we batch > >> within xx usec rather than xx usec + whatever the delay until the first > >> one arrives. Which is why it's handled that way currently. That gives > >> you a fixed batch latency. > > > > min_wait is fine when the goal is just maximizing throughput without any > > latency targets. > > That's not true at all, I think you're in different time scales than > this would be used for. > > > The min_wait approach makes it hard to set a useful upper bound on > > latency because unlucky requests that complete early experience much > > more latency than requests that complete later. > > As mentioned in the cover letter or the main patch, this is most useful > for the medium load kind of scenarios. For high load, the min_wait time > ends up not mattering because you will hit maxevents first anyway. For > the testing that we did, the target was 2-300 usec, and 200 usec was > used for the actual test. Depending on what the kind of traffic the > server is serving, that's usually not much of a concern. From your > reply, I'm guessing you're thinking of much higher min_wait numbers. I > don't think those would make sense. If your rate of arrival is low > enough that min_wait needs to be high to make a difference, then the > load is low enough anyway that it doesn't matter. Hence I'd argue that > it is indeed NOT hard to set a useful upper bound on latency, because > that is very much what min_wait is. > > I'm happy to argue merits of one approach over another, but keep in mind > that this particular approach was not pulled out of thin air AND it has > actually been tested and verified successfully on a production workload. > This isn't a hypothetical benchmark kind of setup. Fair enough. I just wanted to make sure the syscall interface that gets merged is as useful as possible. Thanks, Stefan
On 11/8/22 9:10 AM, Stefan Hajnoczi wrote: > On Tue, Nov 08, 2022 at 07:09:30AM -0700, Jens Axboe wrote: >> On 11/8/22 7:00 AM, Stefan Hajnoczi wrote: >>> On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: >>>> On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: >>>>> Hi Jens, >>>>> NICs and storage controllers have interrupt mitigation/coalescing >>>>> mechanisms that are similar. >>>> >>>> Yep >>>> >>>>> NVMe has an Aggregation Time (timeout) and an Aggregation Threshold >>>>> (counter) value. When a completion occurs, the device waits until the >>>>> timeout or until the completion counter value is reached. >>>>> >>>>> If I've read the code correctly, min_wait is computed at the beginning >>>>> of epoll_wait(2). NVMe's Aggregation Time is computed from the first >>>>> completion. >>>>> >>>>> It makes me wonder which approach is more useful for applications. With >>>>> the Aggregation Time approach applications can control how much extra >>>>> latency is added. What do you think about that approach? >>>> >>>> We only tested the current approach, which is time noted from entry, not >>>> from when the first event arrives. I suspect the nvme approach is better >>>> suited to the hw side, the epoll timeout helps ensure that we batch >>>> within xx usec rather than xx usec + whatever the delay until the first >>>> one arrives. Which is why it's handled that way currently. That gives >>>> you a fixed batch latency. >>> >>> min_wait is fine when the goal is just maximizing throughput without any >>> latency targets. >> >> That's not true at all, I think you're in different time scales than >> this would be used for. >> >>> The min_wait approach makes it hard to set a useful upper bound on >>> latency because unlucky requests that complete early experience much >>> more latency than requests that complete later. >> >> As mentioned in the cover letter or the main patch, this is most useful >> for the medium load kind of scenarios. For high load, the min_wait time >> ends up not mattering because you will hit maxevents first anyway. For >> the testing that we did, the target was 2-300 usec, and 200 usec was >> used for the actual test. Depending on what the kind of traffic the >> server is serving, that's usually not much of a concern. From your >> reply, I'm guessing you're thinking of much higher min_wait numbers. I >> don't think those would make sense. If your rate of arrival is low >> enough that min_wait needs to be high to make a difference, then the >> load is low enough anyway that it doesn't matter. Hence I'd argue that >> it is indeed NOT hard to set a useful upper bound on latency, because >> that is very much what min_wait is. >> >> I'm happy to argue merits of one approach over another, but keep in mind >> that this particular approach was not pulled out of thin air AND it has >> actually been tested and verified successfully on a production workload. >> This isn't a hypothetical benchmark kind of setup. > > Fair enough. I just wanted to make sure the syscall interface that gets > merged is as useful as possible. That is indeed the main discussion as far as I'm concerned - syscall, ctl, or both? At this point I'm inclined to just push forward with the ctl addition. A new syscall can always be added, and if we do, then it'd be nice to make one that will work going forward so we don't have to keep adding epoll_wait variants...
On Tue, Nov 08, 2022 at 09:15:23AM -0700, Jens Axboe wrote: > On 11/8/22 9:10 AM, Stefan Hajnoczi wrote: > > On Tue, Nov 08, 2022 at 07:09:30AM -0700, Jens Axboe wrote: > >> On 11/8/22 7:00 AM, Stefan Hajnoczi wrote: > >>> On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: > >>>> On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: > >>>>> Hi Jens, > >>>>> NICs and storage controllers have interrupt mitigation/coalescing > >>>>> mechanisms that are similar. > >>>> > >>>> Yep > >>>> > >>>>> NVMe has an Aggregation Time (timeout) and an Aggregation Threshold > >>>>> (counter) value. When a completion occurs, the device waits until the > >>>>> timeout or until the completion counter value is reached. > >>>>> > >>>>> If I've read the code correctly, min_wait is computed at the beginning > >>>>> of epoll_wait(2). NVMe's Aggregation Time is computed from the first > >>>>> completion. > >>>>> > >>>>> It makes me wonder which approach is more useful for applications. With > >>>>> the Aggregation Time approach applications can control how much extra > >>>>> latency is added. What do you think about that approach? > >>>> > >>>> We only tested the current approach, which is time noted from entry, not > >>>> from when the first event arrives. I suspect the nvme approach is better > >>>> suited to the hw side, the epoll timeout helps ensure that we batch > >>>> within xx usec rather than xx usec + whatever the delay until the first > >>>> one arrives. Which is why it's handled that way currently. That gives > >>>> you a fixed batch latency. > >>> > >>> min_wait is fine when the goal is just maximizing throughput without any > >>> latency targets. > >> > >> That's not true at all, I think you're in different time scales than > >> this would be used for. > >> > >>> The min_wait approach makes it hard to set a useful upper bound on > >>> latency because unlucky requests that complete early experience much > >>> more latency than requests that complete later. > >> > >> As mentioned in the cover letter or the main patch, this is most useful > >> for the medium load kind of scenarios. For high load, the min_wait time > >> ends up not mattering because you will hit maxevents first anyway. For > >> the testing that we did, the target was 2-300 usec, and 200 usec was > >> used for the actual test. Depending on what the kind of traffic the > >> server is serving, that's usually not much of a concern. From your > >> reply, I'm guessing you're thinking of much higher min_wait numbers. I > >> don't think those would make sense. If your rate of arrival is low > >> enough that min_wait needs to be high to make a difference, then the > >> load is low enough anyway that it doesn't matter. Hence I'd argue that > >> it is indeed NOT hard to set a useful upper bound on latency, because > >> that is very much what min_wait is. > >> > >> I'm happy to argue merits of one approach over another, but keep in mind > >> that this particular approach was not pulled out of thin air AND it has > >> actually been tested and verified successfully on a production workload. > >> This isn't a hypothetical benchmark kind of setup. > > > > Fair enough. I just wanted to make sure the syscall interface that gets > > merged is as useful as possible. > > That is indeed the main discussion as far as I'm concerned - syscall, > ctl, or both? At this point I'm inclined to just push forward with the > ctl addition. 
> A new syscall can always be added, and if we do, then it'd be nice to make one that will work going forward so we don't have to keep adding epoll_wait variants...

epoll_wait3() would be consistent with how maxevents and timeout work. It does not suffer from extra ctl syscall overhead when applications need to change min_wait.

The way the current patches add min_wait into epoll_ctl() seems hacky to me. struct epoll_event was meant for file descriptor event entries. It won't necessarily be large enough for future extensions (luckily min_wait only needs a uint64_t value). It's turning epoll_ctl() into an ioctl()/setsockopt()-style interface, which is bad for anything that needs to understand syscalls, like seccomp. A properly typed epoll_wait3() seems cleaner to me.

Stefan
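A hypothetical prototype for the shape argued for here, with min_wait as an explicitly typed argument alongside the existing timeout; the name and signature are invented for illustration and this is not an existing or proposed syscall:

```c
#include <signal.h>
#include <stddef.h>
#include <sys/epoll.h>
#include <time.h>

/* Hypothetical: epoll_pwait2() plus an explicit minimum wait. Note that
 * this would be a seven-argument call, which is why the thread also
 * discusses a struct-based variant further down. */
int epoll_wait3(int epfd, struct epoll_event *events, int maxevents,
		const struct timespec *timeout,
		const struct timespec *min_wait,
		const sigset_t *sigmask, size_t sigsetsize);
```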
On 11/8/22 10:24 AM, Stefan Hajnoczi wrote: > On Tue, Nov 08, 2022 at 09:15:23AM -0700, Jens Axboe wrote: >> On 11/8/22 9:10 AM, Stefan Hajnoczi wrote: >>> On Tue, Nov 08, 2022 at 07:09:30AM -0700, Jens Axboe wrote: >>>> On 11/8/22 7:00 AM, Stefan Hajnoczi wrote: >>>>> On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: >>>>>> On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: >>>>>>> Hi Jens, >>>>>>> NICs and storage controllers have interrupt mitigation/coalescing >>>>>>> mechanisms that are similar. >>>>>> >>>>>> Yep >>>>>> >>>>>>> NVMe has an Aggregation Time (timeout) and an Aggregation Threshold >>>>>>> (counter) value. When a completion occurs, the device waits until the >>>>>>> timeout or until the completion counter value is reached. >>>>>>> >>>>>>> If I've read the code correctly, min_wait is computed at the beginning >>>>>>> of epoll_wait(2). NVMe's Aggregation Time is computed from the first >>>>>>> completion. >>>>>>> >>>>>>> It makes me wonder which approach is more useful for applications. With >>>>>>> the Aggregation Time approach applications can control how much extra >>>>>>> latency is added. What do you think about that approach? >>>>>> >>>>>> We only tested the current approach, which is time noted from entry, not >>>>>> from when the first event arrives. I suspect the nvme approach is better >>>>>> suited to the hw side, the epoll timeout helps ensure that we batch >>>>>> within xx usec rather than xx usec + whatever the delay until the first >>>>>> one arrives. Which is why it's handled that way currently. That gives >>>>>> you a fixed batch latency. >>>>> >>>>> min_wait is fine when the goal is just maximizing throughput without any >>>>> latency targets. >>>> >>>> That's not true at all, I think you're in different time scales than >>>> this would be used for. >>>> >>>>> The min_wait approach makes it hard to set a useful upper bound on >>>>> latency because unlucky requests that complete early experience much >>>>> more latency than requests that complete later. >>>> >>>> As mentioned in the cover letter or the main patch, this is most useful >>>> for the medium load kind of scenarios. For high load, the min_wait time >>>> ends up not mattering because you will hit maxevents first anyway. For >>>> the testing that we did, the target was 2-300 usec, and 200 usec was >>>> used for the actual test. Depending on what the kind of traffic the >>>> server is serving, that's usually not much of a concern. From your >>>> reply, I'm guessing you're thinking of much higher min_wait numbers. I >>>> don't think those would make sense. If your rate of arrival is low >>>> enough that min_wait needs to be high to make a difference, then the >>>> load is low enough anyway that it doesn't matter. Hence I'd argue that >>>> it is indeed NOT hard to set a useful upper bound on latency, because >>>> that is very much what min_wait is. >>>> >>>> I'm happy to argue merits of one approach over another, but keep in mind >>>> that this particular approach was not pulled out of thin air AND it has >>>> actually been tested and verified successfully on a production workload. >>>> This isn't a hypothetical benchmark kind of setup. >>> >>> Fair enough. I just wanted to make sure the syscall interface that gets >>> merged is as useful as possible. >> >> That is indeed the main discussion as far as I'm concerned - syscall, >> ctl, or both? At this point I'm inclined to just push forward with the >> ctl addition. 
A new syscall can always be added, and if we do, then it'd >> be nice to make one that will work going forward so we don't have to >> keep adding epoll_wait variants... > > epoll_wait3() would be consistent with how maxevents and timeout work. > It does not suffer from extra ctl syscall overhead when applications > need to change min_wait. > > The way the current patches add min_wait into epoll_ctl() seems hacky to > me. struct epoll_event was meant for file descriptor event entries. It > won't necessarily be large enough for future extensions (luckily > min_wait only needs a uint64_t value). It's turning epoll_ctl() into an > ioctl()/setsockopt()-style interface, which is bad for anything that > needs to understand syscalls, like seccomp. A properly typed > epoll_wait3() seems cleaner to me. The ctl method is definitely a bit of an oddball. I've highlighted why I went that way in earlier emails, but in summary: - Makes it easy to adopt, just adding two lines at init time. - Moves detection of availability to init time as well, rather than the fast path. I don't think anyone would want to change the wait often; it's something you'd set at init time. If you often want to change values for some reason, then obviously a syscall parameter would be a lot better. epoll_pwait3() would be vastly different from the other ones, simply because epoll_pwait2() is already using the maximum number of args. We'd need to add an epoll syscall struct at that point, probably with flags telling us whether the sigmask or timeout is actually valid. This is not to say I don't think we should add a syscall interface; these are just some of the arguments pro and con from having actually looked at it.
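To make the "two lines at init time" point concrete, a minimal sketch of init-time adoption under the proposed ctl interface. The opcode value, the use of the event's data field to carry the time in microseconds, and the ignored fd argument are assumptions drawn from this thread rather than a settled ABI:

        #include <stdint.h>
        #include <string.h>
        #include <sys/epoll.h>

        #ifndef EPOLL_CTL_MIN_WAIT
        #define EPOLL_CTL_MIN_WAIT 4            /* assumed value, not upstream */
        #endif

        /* Returns 0 if a min wait of 'usec' was set, -1 if the kernel
         * lacks the feature or on any other error. */
        static int set_min_wait(int epfd, uint64_t usec)
        {
                struct epoll_event ev;

                memset(&ev, 0, sizeof(ev));
                ev.data.u64 = usec;             /* assumption: value carried in data */
                return epoll_ctl(epfd, EPOLL_CTL_MIN_WAIT, -1, &ev);
        }

Under this shape the availability check lives entirely at setup time rather than in the epoll_wait() fast path, which is the trade-off being argued for above.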
On Tue, Nov 08, 2022 at 10:28:37AM -0700, Jens Axboe wrote: > On 11/8/22 10:24 AM, Stefan Hajnoczi wrote: > > On Tue, Nov 08, 2022 at 09:15:23AM -0700, Jens Axboe wrote: > >> On 11/8/22 9:10 AM, Stefan Hajnoczi wrote: > >>> On Tue, Nov 08, 2022 at 07:09:30AM -0700, Jens Axboe wrote: > >>>> On 11/8/22 7:00 AM, Stefan Hajnoczi wrote: > >>>>> On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: > >>>>>> On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: > >>>>>>> Hi Jens, > >>>>>>> NICs and storage controllers have interrupt mitigation/coalescing > >>>>>>> mechanisms that are similar. > >>>>>> > >>>>>> Yep > >>>>>> > >>>>>>> NVMe has an Aggregation Time (timeout) and an Aggregation Threshold > >>>>>>> (counter) value. When a completion occurs, the device waits until the > >>>>>>> timeout or until the completion counter value is reached. > >>>>>>> > >>>>>>> If I've read the code correctly, min_wait is computed at the beginning > >>>>>>> of epoll_wait(2). NVMe's Aggregation Time is computed from the first > >>>>>>> completion. > >>>>>>> > >>>>>>> It makes me wonder which approach is more useful for applications. With > >>>>>>> the Aggregation Time approach applications can control how much extra > >>>>>>> latency is added. What do you think about that approach? > >>>>>> > >>>>>> We only tested the current approach, which is time noted from entry, not > >>>>>> from when the first event arrives. I suspect the nvme approach is better > >>>>>> suited to the hw side, the epoll timeout helps ensure that we batch > >>>>>> within xx usec rather than xx usec + whatever the delay until the first > >>>>>> one arrives. Which is why it's handled that way currently. That gives > >>>>>> you a fixed batch latency. > >>>>> > >>>>> min_wait is fine when the goal is just maximizing throughput without any > >>>>> latency targets. > >>>> > >>>> That's not true at all, I think you're in different time scales than > >>>> this would be used for. > >>>> > >>>>> The min_wait approach makes it hard to set a useful upper bound on > >>>>> latency because unlucky requests that complete early experience much > >>>>> more latency than requests that complete later. > >>>> > >>>> As mentioned in the cover letter or the main patch, this is most useful > >>>> for the medium load kind of scenarios. For high load, the min_wait time > >>>> ends up not mattering because you will hit maxevents first anyway. For > >>>> the testing that we did, the target was 2-300 usec, and 200 usec was > >>>> used for the actual test. Depending on what the kind of traffic the > >>>> server is serving, that's usually not much of a concern. From your > >>>> reply, I'm guessing you're thinking of much higher min_wait numbers. I > >>>> don't think those would make sense. If your rate of arrival is low > >>>> enough that min_wait needs to be high to make a difference, then the > >>>> load is low enough anyway that it doesn't matter. Hence I'd argue that > >>>> it is indeed NOT hard to set a useful upper bound on latency, because > >>>> that is very much what min_wait is. > >>>> > >>>> I'm happy to argue merits of one approach over another, but keep in mind > >>>> that this particular approach was not pulled out of thin air AND it has > >>>> actually been tested and verified successfully on a production workload. > >>>> This isn't a hypothetical benchmark kind of setup. > >>> > >>> Fair enough. I just wanted to make sure the syscall interface that gets > >>> merged is as useful as possible. 
> >> > >> That is indeed the main discussion as far as I'm concerned - syscall, > >> ctl, or both? At this point I'm inclined to just push forward with the > >> ctl addition. A new syscall can always be added, and if we do, then it'd > >> be nice to make one that will work going forward so we don't have to > >> keep adding epoll_wait variants... > > > > epoll_wait3() would be consistent with how maxevents and timeout work. > > It does not suffer from extra ctl syscall overhead when applications > > need to change min_wait. > > > > The way the current patches add min_wait into epoll_ctl() seems hacky to > > me. struct epoll_event was meant for file descriptor event entries. It > > won't necessarily be large enough for future extensions (luckily > > min_wait only needs a uint64_t value). It's turning epoll_ctl() into an > > ioctl()/setsockopt()-style interface, which is bad for anything that > > needs to understand syscalls, like seccomp. A properly typed > > epoll_wait3() seems cleaner to me. > > The ctl method is definitely a bit of an oddball. I've highlighted why > I went that way in earlier emails, but in summary: > > - Makes it easy to adopt, just adding two lines at init time. > > - Moves detection of availability to init time as well, rather than > the fast path. Add an epoll_create1() flag to test for availability? > I don't think anyone would want to often change the wait, it's > something you'd set at init time. If you often want to change values > for some reason, then obviously a syscall parameter would be a lot > better. > > epoll_pwait3() would be vastly different than the other ones, simply > because epoll_pwait2() is already using the maximum number of args. > We'd need to add an epoll syscall struct at that point, probably > with flags telling us if signal_struct or timeout is actually valid. Yes :/. > This is not to say I don't think we should add a syscall interface, > just some of the arguments pro and con from having actually looked > at it. > > -- > Jens Axboe > >
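For illustration, one hypothetical shape of such a struct-based call; every name and field here is invented to show the idea, not a proposed ABI:

        #include <signal.h>
        #include <stdint.h>
        #include <time.h>
        #include <sys/epoll.h>

        /* Flags saying which optional fields are valid. */
        #define EPOLL_WAIT_TIMEOUT      (1ULL << 0)
        #define EPOLL_WAIT_SIGMASK      (1ULL << 1)
        #define EPOLL_WAIT_MIN_WAIT     (1ULL << 2)

        struct epoll_wait_args {
                uint64_t flags;
                uint64_t min_wait_usec;
                const struct timespec *timeout;
                const sigset_t *sigmask;
                size_t sigsetsize;
        };

        int epoll_wait3(int epfd, struct epoll_event *events, int maxevents,
                        struct epoll_wait_args *args, size_t args_size);

Passing the struct size, in the style of openat2() or clone3(), would leave room for later extensions without yet another epoll_wait variant.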
From: Stefan Hajnoczi > Sent: 08 November 2022 17:24 ... > The way the current patches add min_wait into epoll_ctl() seems hacky to > me. struct epoll_event was meant for file descriptor event entries. It > won't necessarily be large enough for future extensions (luckily > min_wait only needs a uint64_t value). It's turning epoll_ctl() into an > ioctl()/setsockopt()-style interface, which is bad for anything that > needs to understand syscalls, like seccomp. A properly typed > epoll_wait3() seems cleaner to me. Is there any reason you can't use an ioctl() on an epoll fd? That would be cleaner than hacking at epoll_ctl(). It would also be easier to modify to allow (strange) things like: - return if no events for 10ms. - return 200us after the first event. - return after 10 events. - return at most 100 events. David
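A sketch of how those knobs could be typed if they were set through an ioctl() on the epoll fd; the struct, field names, and ioctl number below are all hypothetical, purely to illustrate the shape of such an interface:

        #include <stdint.h>
        #include <sys/ioctl.h>

        /* Hypothetical batching parameters, matching the examples above. */
        struct epoll_batch_params {
                uint64_t idle_return_usec;      /* return if no events for this long */
                uint64_t post_event_wait_usec;  /* return this long after the first event */
                uint32_t min_events;            /* return once this many events are queued */
                uint32_t max_events;            /* return at most this many events */
        };

        /* Hypothetical ioctl number on the epoll fd. */
        #define EPIOC_SET_BATCH_PARAMS  _IOW('E', 0x01, struct epoll_batch_params)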
On Tue, Nov 8, 2022 at 3:09 PM Jens Axboe <axboe@kernel.dk> wrote: > > On 11/8/22 7:00 AM, Stefan Hajnoczi wrote: > > On Mon, Nov 07, 2022 at 02:38:52PM -0700, Jens Axboe wrote: > >> On 11/7/22 1:56 PM, Stefan Hajnoczi wrote: > >>> Hi Jens, > >>> NICs and storage controllers have interrupt mitigation/coalescing > >>> mechanisms that are similar. > >> > >> Yep > >> > >>> NVMe has an Aggregation Time (timeout) and an Aggregation Threshold > >>> (counter) value. When a completion occurs, the device waits until the > >>> timeout or until the completion counter value is reached. > >>> > >>> If I've read the code correctly, min_wait is computed at the beginning > >>> of epoll_wait(2). NVMe's Aggregation Time is computed from the first > >>> completion. > >>> > >>> It makes me wonder which approach is more useful for applications. With > >>> the Aggregation Time approach applications can control how much extra > >>> latency is added. What do you think about that approach? > >> > >> We only tested the current approach, which is time noted from entry, not > >> from when the first event arrives. I suspect the nvme approach is better > >> suited to the hw side, the epoll timeout helps ensure that we batch > >> within xx usec rather than xx usec + whatever the delay until the first > >> one arrives. Which is why it's handled that way currently. That gives > >> you a fixed batch latency. > > > > min_wait is fine when the goal is just maximizing throughput without any > > latency targets. > > That's not true at all, I think you're in different time scales than > this would be used for. > > > The min_wait approach makes it hard to set a useful upper bound on > > latency because unlucky requests that complete early experience much > > more latency than requests that complete later. > > As mentioned in the cover letter or the main patch, this is most useful > for the medium load kind of scenarios. For high load, the min_wait time > ends up not mattering because you will hit maxevents first anyway. For > the testing that we did, the target was 2-300 usec, and 200 usec was > used for the actual test. Depending on what the kind of traffic the > server is serving, that's usually not much of a concern. From your > reply, I'm guessing you're thinking of much higher min_wait numbers. I > don't think those would make sense. If your rate of arrival is low > enough that min_wait needs to be high to make a difference, then the > load is low enough anyway that it doesn't matter. Hence I'd argue that > it is indeed NOT hard to set a useful upper bound on latency, because > that is very much what min_wait is. > > I'm happy to argue merits of one approach over another, but keep in mind > that this particular approach was not pulled out of thin air AND it has > actually been tested and verified successfully on a production workload. > This isn't a hypothetical benchmark kind of setup. Following up on the interrupt mitigation analogy: this also reminds me somewhat of SO_RCVLOWAT, which sets a lower bound on received data before waking up a single thread. Would it be more useful to define a minevents event count, rather than a minwait timeout? That might give the same preferred batch size without adding latency when it is unnecessary, or having to infer a reasonable bound from the expected event rate, while still being bounded by the max timeout.
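For reference, SO_RCVLOWAT is an existing per-socket knob; a minimal example of the mechanism the analogy points at (the epoll-level minevents counterpart remains a hypothetical suggestion at this point):

        #include <sys/socket.h>

        /* Raise the receive low-water mark so the socket is not reported
         * readable (and blocking reads do not return) until at least 16KB
         * are queued, on kernels where poll/select honor SO_RCVLOWAT. */
        static int set_rcvlowat(int sockfd)
        {
                int lowat = 16 * 1024;

                return setsockopt(sockfd, SOL_SOCKET, SO_RCVLOWAT,
                                  &lowat, sizeof(lowat));
        }

A minevents threshold on the epoll context would be the event-count analogue of that byte-count threshold, still bounded by maxevents and the caller's timeout as noted above.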