Message ID | 20230522183845.354920-1-bvanassche@acm.org
---|---
Series | Submit zoned writes in order
On Mon, May 22, 2023 at 11:38:35AM -0700, Bart Van Assche wrote:
> - Changed the approach from one requeue list per hctx into preserving one
>   requeue list per request queue.

Can you explain why? The resulting code looks rather odd to me as we now reach out to a global list from the per-hctx run_queue helper, which seems a bit awkward.
On 5/23/23 00:22, Christoph Hellwig wrote:
> On Mon, May 22, 2023 at 11:38:35AM -0700, Bart Van Assche wrote:
>> - Changed the approach from one requeue list per hctx into preserving one
>>   requeue list per request queue.
>
> Can you explain why? The resulting code looks rather odd to me as we
> now reach out to a global list from the per-hctx run_queue helper,
> which seems a bit awkward.

Hi Christoph,

This change is based on the assumption that requeuing and flushing are relatively rare events. Do you perhaps want me to change the approach back to one requeue list and one flush list per hctx?

Thanks,

Bart.
On Tue, May 23, 2023 at 01:04:44PM -0700, Bart Van Assche wrote:
>> Can you explain why? The resulting code looks rather odd to me as we
>> now reach out to a global list from the per-hctx run_queue helper,
>> which seems a bit awkward.
>
> Hi Christoph,
>
> This change is based on the assumption that requeuing and flushing are
> relatively rare events.

The former are, the latter not so much. But more importantly you now look into a global list in the per-hctx dispatch, adding cache line sharing.

> Do you perhaps want me to change the approach back
> to one requeue list and one flush list per hctx?

Unless we have a very good reason to make them global that would be my preference.