Message ID | 20241217024047.1091893-3-yukuai1@huaweicloud.com (mailing list archive)
---|---
State | New
Series | lib/sbitmap: fix shallow_depth tag allocation
On 12/16/24 6:40 PM, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> Currently, shallow_depth is used by bfq, kyber and mq-deadline, they both

both -> all

> pass in the value for the whole sbitmap, while sbitmap treats the value

treats for -> applies to

> for just one word. Which means, shallow_depth never work as expected,

work -> works

> and there really is no such functional tests to covert it.

is ... tests -> is ... test or are ... tests

covert -> cover

> Consider that callers doesn't know which word will be used, and it's

Consider -> Considering
doesn't -> don't

> diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
> index 189140bf11fc..92e77bc13cf6 100644
> --- a/include/linux/sbitmap.h
> +++ b/include/linux/sbitmap.h
> @@ -213,12 +213,12 @@ int sbitmap_get(struct sbitmap *sb);
>   * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
>   * limiting the depth used from each word.
>   * @sb: Bitmap to allocate from.
> - * @shallow_depth: The maximum number of bits to allocate from a single word.
> + * @shallow_depth: The maximum number of bits to allocate from the bitmap.
>   *
>   * This rather specific operation allows for having multiple users with
>   * different allocation limits. E.g., there can be a high-priority class that
>   * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
> - * with a @shallow_depth of (1 << (@sb->shift - 1)). Then, the low-priority
> + * with a @shallow_depth of (sb->depth << 1). Then, the low-priority

(sb->depth << 1) -> (sb->depth >> 1)

> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index d3412984170c..6b8b909614a5 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -208,8 +208,27 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
>   	return nr;
>   }
>
> +static unsigned int __map_depth_with_shallow(const struct sbitmap *sb,
> +					     int index,
> +					     unsigned int shallow_depth)
> +{
> +	unsigned int pre_word_bits = 0;
> +
> +	if (shallow_depth >= sb->depth)
> +		return __map_depth(sb, index);
> +
> +	if (index > 0)
> +		pre_word_bits += (index - 1) << sb->shift;

Why "index - 1" instead of "index"?

> +
> +	if (shallow_depth <= pre_word_bits)
> +		return 0;
> +
> +	return min_t(unsigned int, __map_depth(sb, index),
> +		     shallow_depth - pre_word_bits);
> +}

How about renaming pre_word_bits into lower_bound?

Otherwise this patch looks good to me.

Thanks,

Bart.
Hi,

On 2024/12/18 5:47, Bart Van Assche wrote:
> On 12/16/24 6:40 PM, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> Currently, shallow_depth is used by bfq, kyber and mq-deadline, they both
>
> both -> all
>
>> pass in the value for the whole sbitmap, while sbitmap treats the value
>
> treats for -> applies to
>
>> for just one word. Which means, shallow_depth never work as expected,
>
> work -> works
>
>> and there really is no such functional tests to covert it.
>
> is ... tests -> is ... test or are ... tests
>
> covert -> cover
>
>> Consider that callers doesn't know which word will be used, and it's
>
> Consider -> Considering
> doesn't -> don't
>
>> diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
>> index 189140bf11fc..92e77bc13cf6 100644
>> --- a/include/linux/sbitmap.h
>> +++ b/include/linux/sbitmap.h
>> @@ -213,12 +213,12 @@ int sbitmap_get(struct sbitmap *sb);
>>   * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
>>   * limiting the depth used from each word.
>>   * @sb: Bitmap to allocate from.
>> - * @shallow_depth: The maximum number of bits to allocate from a single word.
>> + * @shallow_depth: The maximum number of bits to allocate from the bitmap.
>>   *
>>   * This rather specific operation allows for having multiple users with
>>   * different allocation limits. E.g., there can be a high-priority class that
>>   * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
>> - * with a @shallow_depth of (1 << (@sb->shift - 1)). Then, the low-priority
>> + * with a @shallow_depth of (sb->depth << 1). Then, the low-priority
>
> (sb->depth << 1) -> (sb->depth >> 1)
>
>> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
>> index d3412984170c..6b8b909614a5 100644
>> --- a/lib/sbitmap.c
>> +++ b/lib/sbitmap.c
>> @@ -208,8 +208,27 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
>>   	return nr;
>>   }
>>
>> +static unsigned int __map_depth_with_shallow(const struct sbitmap *sb,
>> +					     int index,
>> +					     unsigned int shallow_depth)
>> +{
>> +	unsigned int pre_word_bits = 0;
>> +
>> +	if (shallow_depth >= sb->depth)
>> +		return __map_depth(sb, index);
>> +
>> +	if (index > 0)
>> +		pre_word_bits += (index - 1) << sb->shift;
>
> Why "index - 1" instead of "index"?

We're finding a bit in the 'index' word, and pre_word_bits is the number of
bits in the previous words.

>
>> +
>> +	if (shallow_depth <= pre_word_bits)
>> +		return 0;
>> +
>> +	return min_t(unsigned int, __map_depth(sb, index),
>> +		     shallow_depth - pre_word_bits);
>> +}
>
> How about renaming pre_word_bits into lower_bound?

Yes.

>
> Otherwise this patch looks good to me.

Thanks,
Kuai

> Thanks,
>
> Bart.
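To make the "index - 1" question above concrete, here is a minimal standalone sketch of the per-word limiting logic as posted. This is illustrative only, not kernel code: map_depth(), the 200-bit depth and the shift of 6 are made-up stand-ins for __map_depth() and a real sbitmap.

#include <stdio.h>

#define SHIFT		6		/* 64 bits per word, like a typical sbitmap */
#define DEPTH		200		/* total bits in the (model) bitmap */
#define NR_WORDS	(((DEPTH) >> SHIFT) + 1)

/* bits actually backed by word 'index' (the last word may be partial) */
static unsigned int map_depth(int index)
{
	unsigned int used = index << SHIFT;

	return DEPTH - used < (1U << SHIFT) ? DEPTH - used : (1U << SHIFT);
}

/* per-word limit, mirroring the posted helper; pre_word_bits counts bits below 'index' */
static unsigned int map_depth_with_shallow(int index, unsigned int shallow_depth)
{
	unsigned int pre_word_bits = 0;

	if (shallow_depth >= DEPTH)
		return map_depth(index);

	if (index > 0)
		pre_word_bits += (index - 1) << SHIFT;	/* Bart's question: why not index << SHIFT? */

	if (shallow_depth <= pre_word_bits)
		return 0;

	return map_depth(index) < shallow_depth - pre_word_bits ?
		map_depth(index) : shallow_depth - pre_word_bits;
}

int main(void)
{
	unsigned int shallow_depth = 100, total = 0;

	for (int i = 0; i < NR_WORDS; i++) {
		unsigned int d = map_depth_with_shallow(i, shallow_depth);

		printf("word %d: limit %u\n", i, d);
		total += d;
	}
	printf("total allocatable: %u (shallow_depth = %u)\n", total, shallow_depth);
	return 0;
}

With these assumed numbers (four words of 64/64/64/8 bits and shallow_depth = 100), the per-word limits come out as 64, 64, 36 and 0, i.e. 164 allocatable bits in total; switching "index - 1" to "index" gives 64, 36, 0 and 0, i.e. exactly 100, which is the difference the question is asking about.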
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 189140bf11fc..92e77bc13cf6 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -213,12 +213,12 @@ int sbitmap_get(struct sbitmap *sb);
  * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
  * limiting the depth used from each word.
  * @sb: Bitmap to allocate from.
- * @shallow_depth: The maximum number of bits to allocate from a single word.
+ * @shallow_depth: The maximum number of bits to allocate from the bitmap.
  *
  * This rather specific operation allows for having multiple users with
  * different allocation limits. E.g., there can be a high-priority class that
  * uses sbitmap_get() and a low-priority class that uses sbitmap_get_shallow()
- * with a @shallow_depth of (1 << (@sb->shift - 1)). Then, the low-priority
+ * with a @shallow_depth of (sb->depth << 1). Then, the low-priority
  * class can only allocate half of the total bits in the bitmap, preventing it
  * from starving out the high-priority class.
  *
@@ -478,7 +478,7 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
  * sbitmap_queue, limiting the depth used from each word, with preemption
  * already disabled.
  * @sbq: Bitmap queue to allocate from.
- * @shallow_depth: The maximum number of bits to allocate from a single word.
+ * @shallow_depth: The maximum number of bits to allocate from the queue.
  * See sbitmap_get_shallow().
  *
  * If you call this, make sure to call sbitmap_queue_min_shallow_depth() after
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index d3412984170c..6b8b909614a5 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -208,8 +208,27 @@ static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
 	return nr;
 }
 
+static unsigned int __map_depth_with_shallow(const struct sbitmap *sb,
+					     int index,
+					     unsigned int shallow_depth)
+{
+	unsigned int pre_word_bits = 0;
+
+	if (shallow_depth >= sb->depth)
+		return __map_depth(sb, index);
+
+	if (index > 0)
+		pre_word_bits += (index - 1) << sb->shift;
+
+	if (shallow_depth <= pre_word_bits)
+		return 0;
+
+	return min_t(unsigned int, __map_depth(sb, index),
+		     shallow_depth - pre_word_bits);
+}
+
 static int sbitmap_find_bit(struct sbitmap *sb,
-			    unsigned int depth,
+			    unsigned int shallow_depth,
 			    unsigned int index,
 			    unsigned int alloc_hint,
 			    bool wrap)
@@ -218,12 +237,12 @@ static int sbitmap_find_bit(struct sbitmap *sb,
 	int nr = -1;
 
 	for (i = 0; i < sb->map_nr; i++) {
-		nr = sbitmap_find_bit_in_word(&sb->map[index],
-					      min_t(unsigned int,
-						    __map_depth(sb, index),
-						    depth),
-					      alloc_hint, wrap);
+		unsigned int depth = __map_depth_with_shallow(sb, index,
+							      shallow_depth);
 
+		if (depth)
+			nr = sbitmap_find_bit_in_word(&sb->map[index], depth,
+						      alloc_hint, wrap);
 		if (nr != -1) {
 			nr += index << sb->shift;
 			break;
@@ -406,27 +425,9 @@ EXPORT_SYMBOL_GPL(sbitmap_bitmap_show);
 static unsigned int sbq_calc_wake_batch(struct sbitmap_queue *sbq,
 					unsigned int depth)
 {
-	unsigned int wake_batch;
-	unsigned int shallow_depth;
-
-	/*
-	 * Each full word of the bitmap has bits_per_word bits, and there might
-	 * be a partial word. There are depth / bits_per_word full words and
-	 * depth % bits_per_word bits left over. In bitwise arithmetic:
-	 *
-	 * bits_per_word = 1 << shift
-	 * depth / bits_per_word = depth >> shift
-	 * depth % bits_per_word = depth & ((1 << shift) - 1)
-	 *
-	 * Each word can be limited to sbq->min_shallow_depth bits.
-	 */
-	shallow_depth = min(1U << sbq->sb.shift, sbq->min_shallow_depth);
-	depth = ((depth >> sbq->sb.shift) * shallow_depth +
-		 min(depth & ((1U << sbq->sb.shift) - 1), shallow_depth));
-	wake_batch = clamp_t(unsigned int, depth / SBQ_WAIT_QUEUES, 1,
-			     SBQ_WAKE_BATCH);
-
-	return wake_batch;
+	return clamp_t(unsigned int,
+		       min(depth, sbq->min_shallow_depth) / SBQ_WAIT_QUEUES,
+		       1, SBQ_WAKE_BATCH);
 }
 
 int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
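As a side note on the sbq_calc_wake_batch() change at the end of the diff, here is a small userspace sketch comparing the removed and the new formula. The sample values (256 tags, 64-bit words, min_shallow_depth of 32) and the names wake_batch_old()/wake_batch_new() are assumptions for illustration; only SBQ_WAIT_QUEUES and SBQ_WAKE_BATCH mirror the kernel constants.

#include <stdio.h>

#define SBQ_WAIT_QUEUES	8
#define SBQ_WAKE_BATCH	8

static unsigned int clamp_uint(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* removed formula: scales the queue depth by the per-word shallow limit */
static unsigned int wake_batch_old(unsigned int depth, unsigned int shift,
				   unsigned int min_shallow_depth)
{
	unsigned int bits_per_word = 1U << shift;
	unsigned int shallow_depth = bits_per_word < min_shallow_depth ?
				     bits_per_word : min_shallow_depth;
	unsigned int reminder = depth & (bits_per_word - 1);

	depth = (depth >> shift) * shallow_depth +
		(reminder < shallow_depth ? reminder : shallow_depth);
	return clamp_uint(depth / SBQ_WAIT_QUEUES, 1, SBQ_WAKE_BATCH);
}

/* new formula: min_shallow_depth now limits the whole bitmap, not one word */
static unsigned int wake_batch_new(unsigned int depth,
				   unsigned int min_shallow_depth)
{
	unsigned int d = depth < min_shallow_depth ? depth : min_shallow_depth;

	return clamp_uint(d / SBQ_WAIT_QUEUES, 1, SBQ_WAKE_BATCH);
}

int main(void)
{
	/* sample values: 256 tags, 64-bit words, shallow limit of 32 tags */
	printf("old: %u, new: %u\n",
	       wake_batch_old(256, 6, 32), wake_batch_new(256, 32));
	return 0;
}

With those sample values the removed formula yields a wake batch of 8 while the new one yields 4, reflecting that min_shallow_depth now bounds the whole bitmap rather than each word.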