
[RFC,v2,1/6] mm, doc: Add doc for MPOL_F_NUMA_BALANCING

Message ID 20231122141559.4228-2-laoar.shao@gmail.com (mailing list archive)
State Superseded
Series mm, security, bpf: Fine-grained control over memory policy adjustments with lsm bpf

Checks

Context Check Description
netdev/tree_selection success Not a local patch
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf-next-VM_Test-3 fail Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / test
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-8 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-9 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-11 success Logs for x86_64-gcc / test
bpf/vmtest-bpf-next-VM_Test-10 fail Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 fail Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for x86_64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-llvm-16 / veristat
bpf/vmtest-bpf-next-VM_Test-14 success Logs for x86_64-llvm-16 / test
bpf/vmtest-bpf-next-VM_Test-13 fail Logs for x86_64-llvm-16 / build / build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for s390x-gcc / test

Commit Message

Yafang Shao Nov. 22, 2023, 2:15 p.m. UTC
The MPOL_F_NUMA_BALANCING document was inadvertently omitted from the
initial commit bda420b98505 ("numa balancing: migrate on fault among
multiple bound nodes").

Let's ensure its inclusion.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
---
 .../admin-guide/mm/numa_memory_policy.rst     | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

Comments

Huang, Ying Nov. 23, 2023, 6:37 a.m. UTC | #1
Yafang Shao <laoar.shao@gmail.com> writes:

> The MPOL_F_NUMA_BALANCING document was inadvertently omitted from the
> initial commit bda420b98505 ("numa balancing: migrate on fault among
> multiple bound nodes").
>
> Let's ensure its inclusion.
>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> Cc: "Huang, Ying" <ying.huang@intel.com>

LGTM, Thanks!

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

> ---
>  .../admin-guide/mm/numa_memory_policy.rst     | 27 +++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
> index eca38fa81e0f..19071b71979c 100644
> --- a/Documentation/admin-guide/mm/numa_memory_policy.rst
> +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
> @@ -332,6 +332,33 @@ MPOL_F_RELATIVE_NODES
>  	MPOL_PREFERRED policies that were created with an empty nodemask
>  	(local allocation).
>  
> +MPOL_F_NUMA_BALANCING (since Linux 5.12)
> +        When operating in MPOL_BIND mode, enables NUMA balancing for tasks,
> +        contingent upon kernel support. This feature optimizes page
> +        placement within the confines of the specified memory binding
> +        policy. The addition of the MPOL_F_NUMA_BALANCING flag augments the
> +        control mechanism for NUMA balancing:
> +
> +        - The sysctl knob numa_balancing governs global activation or
> +          deactivation of NUMA balancing.
> +
> +        - Even if sysctl numa_balancing is enabled, NUMA balancing remains
> +          disabled by default for memory areas or applications utilizing
> +          explicit memory policies.
> +
> +        - The MPOL_F_NUMA_BALANCING flag facilitates NUMA balancing
> +          activation for applications employing explicit memory policies
> +          (MPOL_BIND).
> +
> +        This flag enables various optimizations for page placement through
> +        NUMA balancing. For instance, when an application's memory is bound
> +        to multiple nodes (MPOL_BIND), the hint page fault handler attempts
> +        to migrate accessed pages to reduce cross-node access if the
> +        accessing node aligns with the policy nodemask.
> +
> +        If the flag isn't supported by the kernel, or is used with a mode
> +        other than MPOL_BIND, -1 is returned and errno is set to EINVAL.
> +
>  Memory Policy Reference Counting
>  ================================

--
Best Regards,
Huang, Ying

Patch

diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
index eca38fa81e0f..19071b71979c 100644
--- a/Documentation/admin-guide/mm/numa_memory_policy.rst
+++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
@@ -332,6 +332,33 @@ MPOL_F_RELATIVE_NODES
 	MPOL_PREFERRED policies that were created with an empty nodemask
 	(local allocation).
 
+MPOL_F_NUMA_BALANCING (since Linux 5.12)
+        When operating in MPOL_BIND mode, enables NUMA balancing for tasks,
+        contingent upon kernel support. This feature optimizes page
+        placement within the confines of the specified memory binding
+        policy. The addition of the MPOL_F_NUMA_BALANCING flag augments the
+        control mechanism for NUMA balancing:
+
+        - The sysctl knob numa_balancing governs global activation or
+          deactivation of NUMA balancing.
+
+        - Even if sysctl numa_balancing is enabled, NUMA balancing remains
+          disabled by default for memory areas or applications utilizing
+          explicit memory policies.
+
+        - The MPOL_F_NUMA_BALANCING flag facilitates NUMA balancing
+          activation for applications employing explicit memory policies
+          (MPOL_BIND).
+
+        This flag enables various optimizations for page placement through
+        NUMA balancing. For instance, when an application's memory is bound
+        to multiple nodes (MPOL_BIND), the hint page fault handler attempts
+        to migrate accessed pages to reduce cross-node access if the
+        accessing node aligns with the policy nodemask.
+
+        If the flag isn't supported by the kernel, or is used with a mode
+        other than MPOL_BIND, -1 is returned and errno is set to EINVAL.
+
 Memory Policy Reference Counting
 ================================
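
For completeness, a minimal userspace sketch of the behavior the patch
documents, assuming a machine with at least two NUMA nodes (the node
numbers and file name are illustrative, not part of the patch): it
requests MPOL_BIND | MPOL_F_NUMA_BALANCING via set_mempolicy(2) and
falls back to a plain MPOL_BIND when the kernel returns EINVAL, as the
text above describes for pre-5.12 kernels.

/*
 * mpol_demo.c: sketch only, not part of the patch. Bind the calling
 * task's memory to nodes 0 and 1 and opt in to NUMA balancing,
 * tolerating kernels that predate MPOL_F_NUMA_BALANCING.
 */
#include <errno.h>
#include <stdio.h>
#include <numaif.h>	/* set_mempolicy(), MPOL_BIND; link with -lnuma */

#ifndef MPOL_F_NUMA_BALANCING
#define MPOL_F_NUMA_BALANCING	(1 << 13)	/* uapi value; older headers may lack it */
#endif

int main(void)
{
	/* Illustrative nodemask: nodes 0 and 1. Adjust for local topology. */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);
	unsigned long maxnode = 8 * sizeof(nodemask) + 1;

	if (set_mempolicy(MPOL_BIND | MPOL_F_NUMA_BALANCING,
			  &nodemask, maxnode) == 0) {
		printf("MPOL_BIND with NUMA balancing enabled\n");
	} else if (errno == EINVAL) {
		/*
		 * Pre-5.12 kernel (flag unknown) or a mode other than
		 * MPOL_BIND: fall back to a plain bind, as documented.
		 */
		if (set_mempolicy(MPOL_BIND, &nodemask, maxnode) != 0)
			perror("set_mempolicy(MPOL_BIND)");
	} else {
		perror("set_mempolicy");
	}
	return 0;
}

Build with something like "gcc mpol_demo.c -lnuma". Note that the
sysctl knob numa_balancing must also be enabled globally for the
hint-fault migrations described above to actually take place.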