Message ID: 20210325114228.27719-1-mgorman@techsingularity.net (mailing list archive)
Series: Introduce a bulk order-0 page allocator with two in-tree users
On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users. The implementation is not
> efficient as semantics needed to be ironed out first. If no other semantic
> changes are needed, it can be made more efficient. Despite that, this
> is a performance-related feature for users that require multiple pages
> for an operation without multiple round-trips to the page allocator.
> Quoting the last patch for the high-speed networking use-case:
>
> Kernel          XDP stats       CPU     pps         Delta
> Baseline        XDP-RX CPU      total   3,771,046   n/a
> List            XDP-RX CPU      total   3,940,242   +4.49%
> Array           XDP-RX CPU      total   4,249,224   +12.68%
>
> From the SUNRPC traces of svc_alloc_arg():
>
> Single page: 25.007 us per call over 532,571 calls
> Bulk list:    6.258 us per call over 517,034 calls
> Bulk array:   4.590 us per call over 517,442 calls
>
> Both potential users in this series are corner cases (NFS and high-speed
> networks) so it is unlikely that most users will see any benefit in the
> short term. Other potential users are batch allocations for page
> cache readahead, fault around and SLUB allocations when high-order pages
> are unavailable. It's unknown how much benefit would be seen by converting
> multiple page allocation calls to a single batch or what difference it may
> make to headline performance.

We have a third user, vmalloc(), with a 16% perf improvement. I know the
email says 21% but that includes the 5% improvement from switching to
kvmalloc() to allocate area->pages.

https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/

I don't know how many _frequent_ vmalloc users we have that will benefit
from this, but it's probably more than will benefit from improvements
to 200Gbit networking performance.
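The savings in the numbers above come from amortizing the allocator's fixed per-call overhead (zonelist setup, per-cpu list locking) across many order-0 pages in one call. As a rough user-space model of the array-variant semantics discussed in the series (caller passes an array, callee fills only the empty slots, partial success is allowed) — `bulk_alloc` and the 4096-byte "page" are invented for illustration, this is not kernel code:

```c
#include <stddef.h>
#include <stdlib.h>

/*
 * Toy model of the bulk array interface: populate any NULL slots in
 * page_array with up to nr_pages "pages" and return how many slots are
 * usable.  Already-populated slots are skipped, mirroring the semantics
 * described in the thread; a failed allocation ends the batch early.
 */
static size_t bulk_alloc(void **page_array, size_t nr_pages)
{
	size_t filled = 0;

	for (size_t i = 0; i < nr_pages; i++) {
		if (!page_array[i]) {
			page_array[i] = malloc(4096);
			if (!page_array[i])
				break;	/* partial success is allowed */
		}
		filled++;
	}
	return filled;
}
```

A caller retries with the same array on partial success, so slots that were already filled are not allocated twice.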
On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > [...]
>
> We have a third user, vmalloc(), with a 16% perf improvement. I know the
> email says 21% but that includes the 5% improvement from switching to
> kvmalloc() to allocate area->pages.
>
> https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/

That's fairly promising. Assuming the bulk allocator gets merged, it would
make sense to add vmalloc on top. Thanks for bringing it to my attention
because it's far more relevant than my imaginary potential use cases.

> I don't know how many _frequent_ vmalloc users we have that will benefit
> from this, but it's probably more than will benefit from improvements
> to 200Gbit networking performance.

I think it was 100Gbit being looked at, but your point is still valid and
there is no harm in incrementally improving over time.
> On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> > On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > > [...]
> >
> > We have a third user, vmalloc(), with a 16% perf improvement. I know the
> > email says 21% but that includes the 5% improvement from switching to
> > kvmalloc() to allocate area->pages.
> >
> > https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/
>
> That's fairly promising. Assuming the bulk allocator gets merged, it would
> make sense to add vmalloc on top. Thanks for bringing it to my attention
> because it's far more relevant than my imaginary potential use cases.

For vmalloc we should be able to allocate on a specific NUMA node; at
least the current vmalloc interface takes it into account. As far as I
see, the proposed interface allocates on the current node:

static inline unsigned long
alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
{
	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
}

Or am I missing something?

--
Vlad Rezki
On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> For vmalloc we should be able to allocate on a specific NUMA node; at
> least the current vmalloc interface takes it into account. As far as I
> see, the proposed interface allocates on the current node:
>
> static inline unsigned long
> alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
> {
> 	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
> }
>
> Or am I missing something?

You can call __alloc_pages_bulk() directly; there's no need to indirect
through alloc_pages_bulk_array().
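[Calling the underlying entry point directly, as suggested, might look something like the following on the vmalloc side. This is a sketch based only on the __alloc_pages_bulk() signature quoted in this thread; the fallback loop and the surrounding area->pages context are illustrative, not taken from the posted vmalloc patch.]

```c
/*
 * Sketch only: fill area->pages with nr_pages order-0 pages on 'node'
 * in one bulk call, then fall back to single-page allocation for any
 * shortfall, since the bulk allocator may return fewer pages than asked.
 */
unsigned long nr_allocated;

nr_allocated = __alloc_pages_bulk(gfp_mask, node, NULL, nr_pages,
				  NULL, area->pages);

while (nr_allocated < nr_pages) {
	struct page *page = alloc_pages_node(node, gfp_mask, 0);

	if (!page)
		goto fail;	/* caller unwinds the partially filled array */
	area->pages[nr_allocated++] = page;
}
```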
On Thu, Mar 25, 2021 at 02:09:27PM +0000, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > [...]
> >
> > Or am I missing something?
>
> You can call __alloc_pages_bulk() directly; there's no need to indirect
> through alloc_pages_bulk_array().

OK. It is accessible then.

--
Vlad Rezki
On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> For vmalloc we should be able to allocate on a specific NUMA node; at
> least the current vmalloc interface takes it into account. As far as I
> see, the proposed interface allocates on the current node:
>
> static inline unsigned long
> alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array)
> {
> 	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
> }
>
> Or am I missing something?

No, you're not missing anything. Options would be to add a helper similar
to alloc_pages_node or to directly call __alloc_pages_bulk specifying a
node and using __GFP_THISNODE. prepare_alloc_pages() should pick the
correct zonelist containing only the required node.
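[The first option could look something like the helper below. It mirrors the alloc_pages_bulk_array() wrapper quoted above; the name and the NUMA_NO_NODE handling are assumptions for illustration, not part of the posted series.]

```c
/*
 * Hypothetical node-aware variant of alloc_pages_bulk_array(), in the
 * style of alloc_pages_node(): same call into __alloc_pages_bulk(), but
 * with the preferred node supplied by the caller instead of numa_mem_id().
 */
static inline unsigned long
alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages,
			    struct page **page_array)
{
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();	/* fall back to the local node */

	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
}
```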
On Thu, Mar 25, 2021 at 02:26:24PM +0000, Mel Gorman wrote:
> On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > [...]
> >
> > Or am I missing something?
>
> No, you're not missing anything. Options would be to add a helper similar
> to alloc_pages_node or to directly call __alloc_pages_bulk specifying a
> node and using __GFP_THISNODE. prepare_alloc_pages() should pick the
> correct zonelist containing only the required node.

IMHO, a helper named something like *_node() would be reasonable. I see
that many functions in "mm" have their own variants which explicitly add
a "_node()" suffix to signal to users that they are NUMA-aware calls.

As for __alloc_pages_bulk(), I got it. Thanks!

--
Vlad Rezki