
[v15,1/3] fs: Add trusted_for(2) syscall implementation and related sysctl

Message ID 20211012192410.2356090-2-mic@digikod.net (mailing list archive)
State New, archived
Series Add trusted_for(2) (was O_MAYEXEC)

Commit Message

Mickaël Salaün Oct. 12, 2021, 7:24 p.m. UTC
From: Mickaël Salaün <mic@linux.microsoft.com>

The trusted_for() syscall enables user space tasks to check that files
are trusted to be executed or interpreted by user space.  This may allow
script interpreters to check execution permission before reading
commands from a file, or dynamic linkers to validate a shared object
before loading it.  It may be seen as a way for a trusted task (e.g. an
interpreter) to check the trustworthiness of files (e.g. scripts) before
extending its control flow graph with new code originating from them.

The security policy is consistently managed by the kernel through the
new fs.trusted_for_policy sysctl.  It enables system administrators to
enforce two complementary security policies according to how the system
was installed: enforce the noexec mount option, and enforce executable
file permission.  Indeed, for compatibility with already-installed
systems, only system administrators are in a position to check that this
new enforcement is in line with the system's mount points and file
permissions.

For this to be possible, script interpreters must use trusted_for(2)
with the TRUSTED_FOR_EXECUTION usage.  To be fully effective, these
interpreters also need to handle the other ways to execute code: command
line parameters (e.g., option -e for Perl), module loading (e.g., option
-m for Python), stdin, file sourcing, environment variables,
configuration files, etc.  Depending on the threat model, it may be
acceptable to let some script interpreters (e.g. Bash) interpret
commands from stdin, be it a TTY or a pipe, because stdin input alone
may not be enough to (directly) perform syscalls.

Even without an enforced security policy, user space interpreters can
use this syscall to enforce the system policy at their level as best
they can, knowing that it will not break anything on running systems
which do not care about this feature.  On systems which do want this
feature enforced, there will be knowledgeable people (i.e. the system
administrator who deliberately configured fs.trusted_for_policy) to
manage it.

Because trusted_for(2) is a means to enforce a system-wide security
policy (but not application-centric policies), it does not make sense
for user space to check the sysctl value.  Indeed, this new syscall only
extends the system's ability to enforce a policy thanks to the
collaboration of (some trusted) user space.  Moreover, additional
security policies could be managed by LSMs.  This is a best-effort
approach from the application developer's point of view:
https://lore.kernel.org/lkml/1477d3d7-4b36-afad-7077-a38f42322238@digikod.net/

trusted_for(2) with TRUSTED_FOR_EXECUTION should not be confused with
the O_EXEC flag (for open), which is intended for execute-only access,
something that obviously cannot work for scripts.  However, a similar
behavior could be implemented in user space with O_PATH:
https://lore.kernel.org/lkml/1e2f6913-42f2-3578-28ed-567f6a4bdda1@digikod.net/

Being able to restrict execution also makes it possible to protect the
kernel by restricting arbitrary syscalls that an attacker could perform
with a crafted binary or certain script languages.  It also improves
multilevel isolation by reducing an attacker's ability to use side
channels with specific code.  These restrictions can natively be
enforced for ELF binaries (with the noexec mount option) but require
this kernel extension to properly cover scripts (e.g. Python, Perl).  To
get a consistent execution policy, additional memory restrictions should
also be enforced (e.g. thanks to SELinux).

This is a new implementation of a patch initially written by
Vincent Strubel for CLIP OS 4:
https://github.com/clipos-archive/src_platform_clip-patches/blob/f5cb330d6b684752e403b4e41b39f7004d88e561/1901_open_mayexec.patch
This patch has been used for more than 13 years with customized script
interpreters.  Some examples (with the original O_MAYEXEC) can be found
here:
https://github.com/clipos-archive/clipos4_portage-overlay/search?q=O_MAYEXEC

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Co-developed-by: Thibaut Sautereau <thibaut.sautereau@ssi.gouv.fr>
Signed-off-by: Thibaut Sautereau <thibaut.sautereau@ssi.gouv.fr>
Signed-off-by: Mickaël Salaün <mic@linux.microsoft.com>
Acked-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20211012192410.2356090-2-mic@digikod.net
---

Changes since v14:
* Add full syscall documentation (requested by Andrew Morton).

Changes since v13:
* Rename sysctl from "trust_policy" to "trusted_for_policy" (suggested
  by Kees Cook).
* Add Acked-by Kees Cook.

Changes since v12:
* Update inode_permission() call to align with commit 47291baa8ddf
  ("namei: make permission helpers idmapped mount aware").
* Switch from d_backing_inode(f.file->f_path.dentry) to
  file_inode(f.file).

Changes since v10:
* Add enum definition to syscalls.h .

Changes since v9:
* Rename the syscall to trusted_for(2) and the sysctl to
  fs.trust_policy.
* Add a dedicated enum trusted_for_usage with include/uapi/linux/trusted-for.h
* Remove the extra MAY_INTROSPECTION_EXEC bit.  LSMs can still implement
  this feature themselves.

Changes since v8:
* Add a dedicated syscall introspect_access() (requested by Al Viro).
* Rename MAY_INTERPRETED_EXEC to MAY_INTROSPECTION_EXEC .
* Rename the sysctl fs.interpreted_access to fs.introspection_policy .
* Update documentation.

Changes since v7:
* Replaces openat2/O_MAYEXEC with faccessat2/X_OK/AT_INTERPRETED .
  Switching to an FD-based syscall was suggested by Al Viro and Jann
  Horn.
* Handle special file descriptors.
* Add a compatibility mode for execute/read check.
* Move the sysctl policy from fs/namei.c to fs/open.c for the new
  faccessat2/AT_INTERPRETED.
* Rename the sysctl from fs.open_mayexec_enforce to
  fs.interpreted_access .
* Update documentation accordingly.

Changes since v6:
* Allow opening pipes, block devices and character devices with
  O_MAYEXEC when there is no enforced policy, but forbid any non-regular
  file opened with O_MAYEXEC otherwise (i.e. for any enforced policy).
* Add a paragraph about the non-regular files policy.
* Move path_noexec() calls out of the fast-path (suggested by Kees
  Cook).
* Do not set __FMODE_EXEC for now because of inconsistent behavior:
  https://lore.kernel.org/lkml/202007160822.CCDB5478@keescook/
* Returns EISDIR when opening a directory with O_MAYEXEC.
* Removed Deven Bowers and Kees Cook Reviewed-by tags because of the
  current update.

Changes since v5:
* Remove the static enforcement configuration through Kconfig because it
  keeps the code simpler, and because the current sysctl configuration
  can only be set with CAP_SYS_ADMIN, the same way mount options
  (i.e. noexec) can be set.  If a hardened distro wants to enforce a
  configuration, it should restrict capabilities or sysctl
  configuration.  Furthermore, an LSM can easily leverage O_MAYEXEC to
  fit its needs.
* Move checks from inode_permission() to may_open() and make the error
  codes more consistent according to file types (in line with a previous
  commit): opening a directory with O_MAYEXEC returns EISDIR and other
  non-regular file types may return EACCES.
* In may_open(), when OMAYEXEC_ENFORCE_FILE is set, replace explicit
  call to generic_permission() with an artificial MAY_EXEC to avoid
  double calls.  This makes sense especially when an LSM policy forbids
  execution of a file.
* Replace the custom proc_omayexec() with
  proc_dointvec_minmax_sysadmin(), and then replace the CAP_MAC_ADMIN
  check with a CAP_SYS_ADMIN one (suggested by Kees Cook and Stephen
  Smalley).
* Use BIT() (suggested by Kees Cook).
* Rename variables (suggested by Kees Cook).
* Reword the kconfig help.
* Import the documentation patch (suggested by Kees Cook):
  https://lore.kernel.org/lkml/20200505153156.925111-6-mic@digikod.net/
* Update documentation and add LWN.net article.

Changes since v4:
* Add kernel configuration options to enforce O_MAYEXEC at build time,
  and disable the sysctl in such case (requested by James Morris).
* Reword commit message.

Changes since v3:
* Switch back to O_MAYEXEC, but only handle it with openat2(2) which
  checks unknown flags (suggested by Aleksa Sarai). Cf.
  https://lore.kernel.org/lkml/20200430015429.wuob7m5ofdewubui@yavin.dot.cyphar.com/

Changes since v2:
* Replace O_MAYEXEC with RESOLVE_MAYEXEC from openat2(2).  This change
  avoids breaking existing applications that use bogus O_* flags (which
  current kernels may silently ignore) by introducing a new dedicated
  flag, only usable through openat2(2) (suggested by Jeff Layton).
  Using this flag will result in an error if the running kernel does not
  support it.  User space needs to manage this case, as with other
  RESOLVE_* flags.  The best-effort approach to security (for most
  common distros) will simply consist of ignoring such an error and
  retrying without RESOLVE_MAYEXEC.  However, a fully controlled system
  may wish to error out if such an inconsistency is detected.
* Cosmetic changes.

Changes since v1:
* Set __FMODE_EXEC when using O_MAYEXEC to make this information
  available through the new fanotify/FAN_OPEN_EXEC event (suggested by
  Jan Kara and Matthew Bobrowski):
  https://lore.kernel.org/lkml/20181213094658.GA996@lithium.mbobrowski.org/
* Move code from Yama to the FS subsystem (suggested by Kees Cook).
* Make omayexec_inode_permission() static (suggested by Jann Horn).
* Use mode 0600 for the sysctl.
* Only match regular files (not directories nor other types), which
  follows the same semantic as commit 73601ea5b7b1 ("fs/open.c: allow
  opening only regular files during execve()").
---
 Documentation/admin-guide/sysctl/fs.rst |  50 +++++++++++
 fs/open.c                               | 110 ++++++++++++++++++++++++
 include/linux/fs.h                      |   1 +
 include/linux/syscalls.h                |   2 +
 include/uapi/linux/trusted-for.h        |  18 ++++
 kernel/sysctl.c                         |  12 ++-
 6 files changed, 191 insertions(+), 2 deletions(-)
 create mode 100644 include/uapi/linux/trusted-for.h

Comments

Kees Cook Nov. 9, 2021, 5:21 p.m. UTC | #1
On Fri, Nov 05, 2021 at 02:41:59PM +0800, kernel test robot wrote:
> 
> 
> Greeting,
> 
> FYI, we noticed a -11.6% regression of netperf.Throughput_tps due to commit:
> 
> 
> commit: a0918006f9284b77397ae4f163f055c3e0f987b2 ("[PATCH v15 1/3] fs: Add trusted_for(2) syscall implementation and related sysctl")
> url: https://github.com/0day-ci/linux/commits/Micka-l-Sala-n/Add-trusted_for-2-was-O_MAYEXEC/20211013-032533
> patch link: https://lore.kernel.org/kernel-hardening/20211012192410.2356090-2-mic@digikod.net
> 
> in testcase: netperf
> on test machine: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
> with following parameters:
> 
> 	ip: ipv4
> 	runtime: 300s
> 	nr_threads: 16
> 	cluster: cs-localhost
> 	test: TCP_CRR
> 	cpufreq_governor: performance
> 	ucode: 0x5003006
> 
> test-description: Netperf is a benchmark that can be used to measure various aspects of networking performance.
> test-url: http://www.netperf.org/netperf/
> 
> 
> Please note that we did some further analysis/tests, as Fengwei mentioned:
> ==============================================================================
> Here is my investigation result of this regression:
> 
> If I add a patch to make sure the kernel function addresses and data
> addresses stay almost the same even with this patch applied, there is
> almost no performance delta (0.1%) between with and without the patch.
> 
> And if I only make sure the function addresses stay the same, the
> performance delta is about 5.1%.
> 
> So we suppose this regression is triggered by the different function and
> data addresses.  We don't know yet why different addresses could bring
> this kind of regression.
> ===============================================================================
> 
> 
> We also tested on other platforms.
> On a Cooper Lake (Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory),
> we also observed a regression, but the gap is smaller:
> =========================================================================================
> cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
>   cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/16/debian-10.4-x86_64-20200603.cgz/300s/lkp-cpl-4sp1/TCP_CRR/netperf/0x700001e
> 
> commit:
>   v5.15-rc4
>   a0918006f9284b77397ae4f163f055c3e0f987b2
> 
>        v5.15-rc4 a0918006f9284b77397ae4f163f
> ---------------- ---------------------------
>          %stddev     %change         %stddev
>              \          |                \
>     333492            -5.7%     314346 ±  2%  netperf.Throughput_total_tps
>      20843            -4.5%      19896        netperf.Throughput_tps
> 
> 
> But no regression was observed on a 96-thread, 2-socket Ice Lake with 256G memory:
> =========================================================================================
> cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
>   cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/16/debian-10.4-x86_64-20200603.cgz/300s/lkp-icl-2sp1/TCP_CRR/netperf/0xb000280
> 
> commit:
>   v5.15-rc4
>   a0918006f9284b77397ae4f163f055c3e0f987b2
> 
>        v5.15-rc4 a0918006f9284b77397ae4f163f
> ---------------- ---------------------------
>          %stddev     %change         %stddev
>              \          |                \
>     555600            -0.1%     555305        netperf.Throughput_total_tps
>      34725            -0.1%      34706        netperf.Throughput_tps
> 
> 
> Fengwei also helped review these results and commented:
> I suppose these three CPUs have different cache policies.  It could also
> be related to the netperf throughput testing.

Does moving the syscall implementation somewhere else change things?
That's a _huge_ performance change for something that isn't even called.
What's going on here?

-Kees

> 
> 
> If you fix the issue, kindly add following tag
> Reported-by: kernel test robot <oliver.sang@intel.com>
> 
> 
> Details are as below:
> -------------------------------------------------------------------------------------------------->
> 
> 
> To reproduce:
> 
>         git clone https://github.com/intel/lkp-tests.git
>         cd lkp-tests
>         sudo bin/lkp install job.yaml           # job file is attached in this email
>         bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
>         sudo bin/lkp run generated-yaml-file
> 
>         # if you come across any failure that blocks the test,
>         # please remove ~/.lkp and /lkp dir to run from a clean state.
> 
> =========================================================================================
> cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
>   cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/16/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap3/TCP_CRR/netperf/0x5003006
> 
> commit: 
>   v5.15-rc4
>   a0918006f9 ("fs: Add trusted_for(2) syscall implementation and related sysctl")
> 
>        v5.15-rc4 a0918006f9284b77397ae4f163f 
> ---------------- --------------------------- 
>          %stddev     %change         %stddev
>              \          |                \  
>     354692           -11.6%     313620        netperf.Throughput_total_tps
>      22168           -11.6%      19601        netperf.Throughput_tps
>  2.075e+08           -11.6%  1.834e+08        netperf.time.voluntary_context_switches
>  1.064e+08           -11.6%   94086163        netperf.workload
>       0.27 ± 35%      -0.1        0.22 ±  2%  mpstat.cpu.all.usr%
>    2207583            -6.3%    2068413        vmstat.system.cs
>    3029480 ±  6%     -23.3%    2324079 ±  7%  interrupts.CAL:Function_call_interrupts
>      13768 ± 25%     -35.6%       8872 ± 23%  interrupts.CPU30.CAL:Function_call_interrupts
>    2014617 ± 16%     -26.3%    1485200 ± 24%  softirqs.CPU180.NET_RX
>  3.268e+08           -12.1%  2.874e+08        softirqs.NET_RX
>     287881 ±  2%     +24.6%     358692        softirqs.TIMER
>    3207001            -9.6%    2899010        perf-sched.wait_and_delay.count.schedule_timeout.inet_csk_accept.inet_accept.do_accept
>       0.01 ± 15%     +67.1%       0.01 ±  9%  perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__release_sock.release_sock.sk_wait_data
>       0.02 ±  2%     +23.3%       0.03 ± 21%  perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.aa_sk_perm.security_socket_accept.do_accept
>       0.01           +20.0%       0.01        perf-sched.wait_time.avg.ms.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
>      63320 ±  2%     -10.6%      56615 ±  2%  slabinfo.sock_inode_cache.active_objs
>       1626 ±  2%     -10.6%       1454 ±  2%  slabinfo.sock_inode_cache.active_slabs
>      63445 ±  2%     -10.6%      56722 ±  2%  slabinfo.sock_inode_cache.num_objs
>       1626 ±  2%     -10.6%       1454 ±  2%  slabinfo.sock_inode_cache.num_slabs
>      49195            -3.2%      47624        proc-vmstat.nr_slab_reclaimable
>    4278441            -6.6%    3996109        proc-vmstat.numa_hit
>    4052317 ±  2%      -7.4%    3751341        proc-vmstat.numa_local
>    4285136            -6.5%    4006356        proc-vmstat.pgalloc_normal
>    1704913           -11.4%    1511123        proc-vmstat.pgfree
>  9.382e+09           -10.1%  8.438e+09        perf-stat.i.branch-instructions
>  1.391e+08           -10.0%  1.252e+08        perf-stat.i.branch-misses
>      13.98            +2.2       16.20        perf-stat.i.cache-miss-rate%
>   87082775           +14.0%   99273064        perf-stat.i.cache-misses
>    2231661            -6.4%    2088571        perf-stat.i.context-switches
>       1.65            +8.6%       1.79        perf-stat.i.cpi
>  7.603e+10            -2.1%  7.441e+10        perf-stat.i.cpu-cycles
>     907.53 ±  2%     -13.0%     789.92 ±  2%  perf-stat.i.cycles-between-cache-misses
>     920324 ± 19%     -20.3%     733572 ±  5%  perf-stat.i.dTLB-load-misses
>  1.417e+10           -10.3%  1.271e+10        perf-stat.i.dTLB-loads
>     182445 ± 16%     -57.6%      77419 ±  9%  perf-stat.i.dTLB-store-misses
>  8.254e+09           -10.3%  7.403e+09        perf-stat.i.dTLB-stores
>      88.23            -1.7       86.52        perf-stat.i.iTLB-load-miss-rate%
>   96633753           -11.0%   85983323        perf-stat.i.iTLB-load-misses
>   12277057            +4.0%   12766535        perf-stat.i.iTLB-loads
>  4.741e+10           -10.2%  4.259e+10        perf-stat.i.instructions
>       0.62            -8.2%       0.57        perf-stat.i.ipc
>       0.40            -2.1%       0.39        perf-stat.i.metric.GHz
>     168.88           -10.1%     151.87        perf-stat.i.metric.M/sec
>   16134360 ±  2%     +15.0%   18550862        perf-stat.i.node-load-misses
>    1576525 ±  2%     +10.0%    1734370 ±  2%  perf-stat.i.node-loads
>   10027868           -11.5%    8871598        perf-stat.i.node-store-misses
>     386034 ±  3%     -16.0%     324290 ±  7%  perf-stat.i.node-stores
>      13.15            +9.2%      14.36        perf-stat.overall.MPKI
>      13.97            +2.3       16.23        perf-stat.overall.cache-miss-rate%
>       1.60            +8.9%       1.75        perf-stat.overall.cpi
>     873.29           -14.2%     749.60        perf-stat.overall.cycles-between-cache-misses
>       0.00 ± 15%      -0.0        0.00 ±  9%  perf-stat.overall.dTLB-store-miss-rate%
>      88.73            -1.7       87.07        perf-stat.overall.iTLB-load-miss-rate%
>       0.62            -8.2%       0.57        perf-stat.overall.ipc
>     135778            +1.7%     138069        perf-stat.overall.path-length
>  9.351e+09           -10.1%   8.41e+09        perf-stat.ps.branch-instructions
>  1.387e+08           -10.0%  1.248e+08        perf-stat.ps.branch-misses
>   86797490           +14.0%   98949207        perf-stat.ps.cache-misses
>    2224197            -6.4%    2081616        perf-stat.ps.context-switches
>  7.578e+10            -2.1%  7.416e+10        perf-stat.ps.cpu-cycles
>     917495 ± 19%     -20.3%     731365 ±  5%  perf-stat.ps.dTLB-load-misses
>  1.412e+10           -10.3%  1.267e+10        perf-stat.ps.dTLB-loads
>     181859 ± 16%     -57.6%      77179 ±  9%  perf-stat.ps.dTLB-store-misses
>  8.227e+09           -10.3%  7.379e+09        perf-stat.ps.dTLB-stores
>   96313891           -11.0%   85700283        perf-stat.ps.iTLB-load-misses
>   12236194            +4.0%   12724086        perf-stat.ps.iTLB-loads
>  4.726e+10           -10.2%  4.245e+10        perf-stat.ps.instructions
>   16081690 ±  2%     +15.0%   18490522        perf-stat.ps.node-load-misses
>    1571411 ±  2%     +10.0%    1728755 ±  2%  perf-stat.ps.node-loads
>    9995103           -11.5%    8842824        perf-stat.ps.node-store-misses
>     385193 ±  3%     -16.0%     323588 ±  7%  perf-stat.ps.node-stores
>  1.445e+13           -10.1%  1.299e+13        perf-stat.total.instructions
>       1.51 ±  7%      -0.2        1.29 ±  7%  perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
>       1.53 ±  7%      -0.2        1.31 ±  7%  perf-profile.calltrace.cycles-pp.ret_from_fork
>       1.53 ±  7%      -0.2        1.31 ±  7%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
>       1.48 ±  7%      -0.2        1.26 ±  7%  perf-profile.calltrace.cycles-pp.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread
>       1.49 ±  7%      -0.2        1.27 ±  7%  perf-profile.calltrace.cycles-pp.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
>       1.50 ±  7%      -0.2        1.27 ±  7%  perf-profile.calltrace.cycles-pp.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
>       1.47 ±  7%      -0.2        1.25 ±  7%  perf-profile.calltrace.cycles-pp.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn
>       1.41 ±  7%      -0.2        1.19 ±  7%  perf-profile.calltrace.cycles-pp.kmem_cache_free.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd
>       1.25 ±  7%      -0.2        1.06 ±  7%  perf-profile.calltrace.cycles-pp.obj_cgroup_uncharge_pages.kmem_cache_free.rcu_do_batch.rcu_core.__softirqentry_text_start
>       1.21 ±  7%      -0.2        1.03 ±  7%  perf-profile.calltrace.cycles-pp.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.rcu_do_batch.rcu_core
>       0.94 ±  7%      -0.1        0.80 ±  7%  perf-profile.calltrace.cycles-pp.page_counter_cancel.page_counter_uncharge.obj_cgroup_uncharge_pages.kmem_cache_free.rcu_do_batch
>       0.62 ±  7%      +0.2        0.80 ±  9%  perf-profile.calltrace.cycles-pp.tcp_rcv_state_process.tcp_child_process.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
>       1.51 ±  7%      -0.2        1.29 ±  7%  perf-profile.children.cycles-pp.smpboot_thread_fn
>       1.53 ±  7%      -0.2        1.31 ±  7%  perf-profile.children.cycles-pp.ret_from_fork
>       1.53 ±  7%      -0.2        1.31 ±  7%  perf-profile.children.cycles-pp.kthread
>       1.50 ±  7%      -0.2        1.27 ±  7%  perf-profile.children.cycles-pp.run_ksoftirqd
>       1.73 ±  6%      -0.2        1.51 ±  5%  perf-profile.children.cycles-pp._raw_spin_lock_bh
>       1.25 ±  5%      -0.2        1.07 ±  6%  perf-profile.children.cycles-pp.lock_sock_nested
>       1.03 ±  7%      -0.1        0.88 ±  6%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
>       0.83 ±  6%      -0.1        0.72 ±  6%  perf-profile.children.cycles-pp.sk_clone_lock
>       0.84 ±  6%      -0.1        0.73 ±  6%  perf-profile.children.cycles-pp.inet_csk_clone_lock
>       0.45 ±  8%      -0.1        0.34 ±  6%  perf-profile.children.cycles-pp.__tcp_get_metrics
>       0.70 ±  6%      -0.1        0.60 ±  6%  perf-profile.children.cycles-pp.percpu_counter_add_batch
>       0.52 ±  8%      -0.1        0.42 ±  6%  perf-profile.children.cycles-pp.tcp_get_metrics
>       0.72 ±  5%      -0.1        0.62 ±  6%  perf-profile.children.cycles-pp.sk_forced_mem_schedule
>       0.32 ±  7%      -0.1        0.24 ±  7%  perf-profile.children.cycles-pp.sk_filter_trim_cap
>       0.49 ±  7%      -0.1        0.41 ±  8%  perf-profile.children.cycles-pp.tcp_v4_destroy_sock
>       0.26 ±  7%      -0.0        0.22 ±  8%  perf-profile.children.cycles-pp.ip_finish_output
>       0.29 ±  6%      -0.0        0.25 ±  9%  perf-profile.children.cycles-pp.tcp_write_queue_purge
>       0.16 ± 10%      -0.0        0.12 ±  8%  perf-profile.children.cycles-pp.get_obj_cgroup_from_current
>       0.10 ±  8%      -0.0        0.08 ±  6%  perf-profile.children.cycles-pp.__destroy_inode
>       0.10 ±  8%      -0.0        0.08 ±  6%  perf-profile.children.cycles-pp.destroy_inode
>       0.10 ±  9%      -0.0        0.08 ± 10%  perf-profile.children.cycles-pp.sock_put
>       0.10 ± 10%      -0.0        0.07 ±  8%  perf-profile.children.cycles-pp.d_instantiate
>       0.08 ± 11%      -0.0        0.06 ±  9%  perf-profile.children.cycles-pp.kmem_cache_alloc_trace
>       0.11 ±  8%      +0.0        0.15 ±  6%  perf-profile.children.cycles-pp.__inet_lookup_listener
>       0.08 ±  9%      +0.0        0.12 ±  8%  perf-profile.children.cycles-pp.inet_lhash2_lookup
>       0.10 ±  7%      +0.0        0.14 ±  7%  perf-profile.children.cycles-pp.tcp_ca_openreq_child
>       0.08 ±  9%      +0.0        0.13 ±  9%  perf-profile.children.cycles-pp.tcp_newly_delivered
>       0.08 ±  6%      +0.0        0.12 ±  9%  perf-profile.children.cycles-pp.tcp_mtup_init
>       0.09 ±  8%      +0.1        0.15 ±  6%  perf-profile.children.cycles-pp.tcp_stream_memory_free
>       0.24 ±  6%      +0.1        0.30 ±  8%  perf-profile.children.cycles-pp.ip_rcv_core
>       0.06 ±  9%      +0.1        0.12 ±  7%  perf-profile.children.cycles-pp.tcp_push
>       0.11 ±  9%      +0.1        0.17 ±  7%  perf-profile.children.cycles-pp.tcp_synack_rtt_meas
>       0.00 ±412%      +0.1        0.07 ± 14%  perf-profile.children.cycles-pp.tcp_rack_update_reo_wnd
>       0.20 ±  8%      +0.1        0.28 ±  6%  perf-profile.children.cycles-pp.tcp_assign_congestion_control
>       0.34 ±  8%      +0.1        0.42 ±  6%  perf-profile.children.cycles-pp.tcp_init_metrics
>       0.14 ±  6%      +0.1        0.22 ±  8%  perf-profile.children.cycles-pp.tcp_sync_mss
>       0.33 ±  5%      +0.1        0.41 ±  8%  perf-profile.children.cycles-pp.inet_csk_route_req
>       0.31 ±  6%      +0.1        0.40 ±  6%  perf-profile.children.cycles-pp.inet_csk_route_child_sock
>       0.13 ±  8%      +0.1        0.22 ±  6%  perf-profile.children.cycles-pp.skb_entail
>       0.21 ±  6%      +0.1        0.32 ±  7%  perf-profile.children.cycles-pp.ip_rcv_finish_core
>       0.24 ±  5%      +0.1        0.35 ±  7%  perf-profile.children.cycles-pp.ip_rcv_finish
>       0.20 ±  7%      +0.1        0.32 ±  5%  perf-profile.children.cycles-pp.tcp_select_initial_window
>       0.14 ±  5%      +0.1        0.26 ±  8%  perf-profile.children.cycles-pp.secure_tcp_ts_off
>       0.45 ±  6%      +0.1        0.58 ±  6%  perf-profile.children.cycles-pp.tcp_finish_connect
>       0.23 ±  5%      +0.1        0.35 ±  5%  perf-profile.children.cycles-pp.tcp_parse_options
>       0.17 ±  7%      +0.1        0.31 ±  6%  perf-profile.children.cycles-pp.tcp_update_pacing_rate
>       0.20 ±  7%      +0.1        0.35 ±  6%  perf-profile.children.cycles-pp.tcp_openreq_init_rwin
>       0.27 ±  9%      +0.1        0.42 ±  7%  perf-profile.children.cycles-pp.tcp_connect_init
>       0.45 ±  7%      +0.2        0.60 ±  5%  perf-profile.children.cycles-pp.tcp_v4_init_sock
>       0.44 ±  7%      +0.2        0.60 ±  6%  perf-profile.children.cycles-pp.tcp_init_sock
>       0.23 ±  7%      +0.2        0.39 ±  6%  perf-profile.children.cycles-pp.tcp_schedule_loss_probe
>       0.35 ±  6%      +0.2        0.57 ±  7%  perf-profile.children.cycles-pp.inet_sk_rebuild_header
>       0.25 ±  9%      +0.2        0.49 ±  7%  perf-profile.children.cycles-pp.__tcp_select_window
>       0.35 ±  6%      +0.3        0.61 ±  6%  perf-profile.children.cycles-pp.tcp_ack_update_rtt
>       0.76 ±  5%      +0.3        1.04 ±  6%  perf-profile.children.cycles-pp.ip_route_output_flow
>       0.78 ±  6%      +0.3        1.08 ±  6%  perf-profile.children.cycles-pp.tcp_init_transfer
>       1.78 ±  6%      +0.3        2.11 ±  6%  perf-profile.children.cycles-pp.tcp_conn_request
>       1.07 ±  4%      +0.4        1.44 ±  5%  perf-profile.children.cycles-pp.ip_route_output_key_hash
>       1.02 ±  5%      +0.4        1.40 ±  5%  perf-profile.children.cycles-pp.ip_route_output_key_hash_rcu
>       2.02 ±  5%      +0.5        2.50 ±  6%  perf-profile.children.cycles-pp.tcp_ack
>       1.04 ±  7%      +0.6        1.63 ±  7%  perf-profile.children.cycles-pp.__sk_dst_check
>       1.18 ±  7%      +0.7        1.86 ±  7%  perf-profile.children.cycles-pp.ipv4_dst_check
>       5.95 ±  5%      +0.9        6.87 ±  6%  perf-profile.children.cycles-pp.tcp_v4_connect
>       1.02 ±  7%      -0.2        0.87 ±  5%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
>       0.44 ±  8%      -0.1        0.34 ±  6%  perf-profile.self.cycles-pp.__tcp_get_metrics
>       0.69 ±  6%      -0.1        0.59 ±  6%  perf-profile.self.cycles-pp.percpu_counter_add_batch
>       0.71 ±  5%      -0.1        0.61 ±  6%  perf-profile.self.cycles-pp.sk_forced_mem_schedule
>       0.32 ±  6%      -0.1        0.26 ±  8%  perf-profile.self.cycles-pp.ip_finish_output2
>       0.35 ±  7%      -0.1        0.29 ±  5%  perf-profile.self.cycles-pp.tcp_recvmsg_locked
>       0.15 ±  7%      -0.0        0.12 ±  8%  perf-profile.self.cycles-pp.exit_to_user_mode_prepare
>       0.17 ±  6%      -0.0        0.14 ± 10%  perf-profile.self.cycles-pp.__skb_clone
>       0.07 ±  5%      -0.0        0.04 ± 43%  perf-profile.self.cycles-pp.sk_filter_trim_cap
>       0.09 ±  9%      -0.0        0.07 ±  6%  perf-profile.self.cycles-pp.dequeue_task_fair
>       0.08 ±  7%      -0.0        0.06 ±  8%  perf-profile.self.cycles-pp.release_sock
>       0.07 ± 10%      +0.0        0.09 ±  9%  perf-profile.self.cycles-pp.tcp_create_openreq_child
>       0.11 ±  7%      +0.0        0.15 ±  5%  perf-profile.self.cycles-pp.tcp_connect
>       0.08 ±  9%      +0.0        0.12 ±  8%  perf-profile.self.cycles-pp.inet_lhash2_lookup
>       0.09 ±  9%      +0.0        0.13 ±  6%  perf-profile.self.cycles-pp.inet_csk_get_port
>       0.08 ± 10%      +0.0        0.12 ±  8%  perf-profile.self.cycles-pp.tcp_init_transfer
>       0.08 ±  9%      +0.0        0.13 ±  8%  perf-profile.self.cycles-pp.tcp_newly_delivered
>       0.07 ±  7%      +0.0        0.12 ±  9%  perf-profile.self.cycles-pp.tcp_mtup_init
>       0.35 ±  5%      +0.1        0.40 ±  5%  perf-profile.self.cycles-pp.__ip_queue_xmit
>       0.16 ±  7%      +0.1        0.22 ±  6%  perf-profile.self.cycles-pp.__inet_bind
>       0.09 ±  8%      +0.1        0.15 ±  6%  perf-profile.self.cycles-pp.tcp_stream_memory_free
>       0.24 ±  6%      +0.1        0.30 ±  8%  perf-profile.self.cycles-pp.ip_rcv_core
>       0.06 ±  9%      +0.1        0.12 ±  6%  perf-profile.self.cycles-pp.tcp_push
>       0.00            +0.1        0.07 ± 11%  perf-profile.self.cycles-pp.tcp_rack_update_reo_wnd
>       0.23 ±  8%      +0.1        0.30 ±  6%  perf-profile.self.cycles-pp.ip_output
>       0.20 ±  8%      +0.1        0.28 ±  5%  perf-profile.self.cycles-pp.tcp_assign_congestion_control
>       0.10 ±  8%      +0.1        0.18 ±  7%  perf-profile.self.cycles-pp.tcp_v4_syn_recv_sock
>       0.09 ±  7%      +0.1        0.17 ±  7%  perf-profile.self.cycles-pp.tcp_openreq_init_rwin
>       0.07 ± 10%      +0.1        0.16 ±  6%  perf-profile.self.cycles-pp.tcp_v4_send_synack
>       0.13 ±  7%      +0.1        0.22 ±  7%  perf-profile.self.cycles-pp.tcp_sync_mss
>       0.12 ±  8%      +0.1        0.20 ±  7%  perf-profile.self.cycles-pp.skb_entail
>       0.18 ±  8%      +0.1        0.27 ±  6%  perf-profile.self.cycles-pp.ip_protocol_deliver_rcu
>       0.21 ±  5%      +0.1        0.31 ±  6%  perf-profile.self.cycles-pp.ip_rcv_finish_core
>       0.15 ±  9%      +0.1        0.26 ±  6%  perf-profile.self.cycles-pp.tcp_update_metrics
>       0.20 ±  8%      +0.1        0.31 ±  5%  perf-profile.self.cycles-pp.tcp_select_initial_window
>       0.12 ±  9%      +0.1        0.25 ±  8%  perf-profile.self.cycles-pp.tcp_connect_init
>       0.11 ±  8%      +0.1        0.24 ±  8%  perf-profile.self.cycles-pp.secure_tcp_ts_off
>       0.22 ±  5%      +0.1        0.35 ±  5%  perf-profile.self.cycles-pp.tcp_parse_options
>       0.13 ± 12%      +0.1        0.27 ±  7%  perf-profile.self.cycles-pp.tcp_init_metrics
>       0.17 ±  7%      +0.1        0.30 ±  7%  perf-profile.self.cycles-pp.tcp_update_pacing_rate
>       0.17 ± 10%      +0.2        0.32 ±  6%  perf-profile.self.cycles-pp.tcp_init_sock
>       0.18 ±  8%      +0.2        0.35 ±  6%  perf-profile.self.cycles-pp.tcp_schedule_loss_probe
>       0.42 ±  8%      +0.2        0.62 ±  7%  perf-profile.self.cycles-pp.tcp_write_xmit
>       0.25 ±  8%      +0.2        0.48 ±  7%  perf-profile.self.cycles-pp.__tcp_select_window
>       0.28 ±  8%      +0.3        0.56 ±  5%  perf-profile.self.cycles-pp.tcp_ack_update_rtt
>       0.71 ±  5%      +0.4        1.09 ±  6%  perf-profile.self.cycles-pp.ip_route_output_key_hash_rcu
>       1.17 ±  7%      +0.7        1.84 ±  7%  perf-profile.self.cycles-pp.ipv4_dst_check
> 
> 
>                                                                                 
>                                netperf.Throughput_tps                           
>                                                                                 
>   22500 +-------------------------------------------------------------------+   
>         |        ...+......                           ...+......+.....+.....|   
>   22000 |.....+..          +.....+.....+.....+.....+..                      |   
>         |                                                                   |   
>         |                                                                   |   
>   21500 |-+                                                                 |   
>         |                                                                   |   
>   21000 |-+                                                                 |   
>         |                                                                   |   
>   20500 |-+                                                                 |   
>         |                                                                   |   
>         |                                                                   |   
>   20000 |-+                                                                 |   
>         |     O     O            O     O                 O                  |   
>   19500 +-------------------------------------------------------------------+   
>                                                                                 
>                                                                                                                                                                 
>                             netperf.Throughput_total_tps                        
>                                                                                 
>   360000 +------------------------------------------------------------------+   
>   355000 |-+      ...+.....                ...+.....   ...+..         +.....|   
>          |.....+..         +.....+.....+...         +..                     |   
>   350000 |-+                                                                |   
>   345000 |-+                                                                |   
>          |                                                                  |   
>   340000 |-+                                                                |   
>   335000 |-+                                                                |   
>   330000 |-+                                                                |   
>          |                                                                  |   
>   325000 |-+                                                                |   
>   320000 |-+                                                                |   
>          |                                                                  |   
>   315000 |-+   O     O     O     O     O      O     O     O     O     O     |   
>   310000 +------------------------------------------------------------------+   
>                                                                                 
>                                                                                                                                                                 
>                                    netperf.workload                             
>                                                                                 
>   1.08e+08 +----------------------------------------------------------------+   
>            |        ...+.....+.....         ..+.....   ...+..         +.....|   
>   1.06e+08 |.....+..               +.....+..        +..                     |   
>   1.04e+08 |-+                                                              |   
>            |                                                                |   
>   1.02e+08 |-+                                                              |   
>            |                                                                |   
>      1e+08 |-+                                                              |   
>            |                                                                |   
>    9.8e+07 |-+                                                              |   
>    9.6e+07 |-+                                                              |   
>            |                                                                |   
>    9.4e+07 |-+   O     O     O     O     O    O     O     O     O     O     |   
>            |                                                                |   
>    9.2e+07 +----------------------------------------------------------------+   
>                                                                                 
>                                                                                                                                                                 
>                         netperf.time.voluntary_context_switches                 
>                                                                                 
>    2.1e+08 +----------------------------------------------------------------+   
>            |.....+.....+.....+.....+.....+....+.....   ...+..         +.....|   
>   2.05e+08 |-+                                      +..                     |   
>            |                                                                |   
>            |                                                                |   
>      2e+08 |-+                                                              |   
>            |                                                                |   
>   1.95e+08 |-+                                                              |   
>            |                                                                |   
>    1.9e+08 |-+                                                              |   
>            |                                                                |   
>            |                                                                |   
>   1.85e+08 |-+   O     O     O     O     O          O     O                 |   
>            |                                  O                 O     O     |   
>    1.8e+08 +----------------------------------------------------------------+   
>                                                                                 
>                                                                                                                                                                 
>                                                                                 
>                                                                                 
>    0.006 +------------------------------------------------------------------+   
>          |                                                                  |   
>          |                                                                  |   
>   0.0058 |-+                                                                |   
>          |                                                                  |   
>          |                                                                  |   
>   0.0056 |-+                                                                |   
>          |                                                                  |   
>   0.0054 |-+                                                                |   
>          |                                                                  |   
>          |                                                                  |   
>   0.0052 |-+                                                                |   
>          |                                                                  |   
>          |                                                                  |   
>    0.005 +------------------------------------------------------------------+   
>                                                                                 
>                                                                                                                                                                 
>                                                                                 
>                                                                                 
>   3.25e+06 +----------------------------------------------------------------+   
>            |.....   ...+....          ...+....+.....+.....+.....   ...+.....|   
>    3.2e+06 |-+   +..        .   ...+..                          +..         |   
>            |                 +..                                            |   
>   3.15e+06 |-+                                                              |   
>    3.1e+06 |-+                                                              |   
>            |                                                                |   
>   3.05e+06 |-+                                                              |   
>            |                                                                |   
>      3e+06 |-+                                                              |   
>   2.95e+06 |-+                                                              |   
>            |                                                                |   
>    2.9e+06 |-+   O     O     O           O    O     O     O     O     O     |   
>            |                       O                                        |   
>   2.85e+06 +----------------------------------------------------------------+   
>                                                                                 
>                                                                                 
> [*] bisect-good sample
> [O] bisect-bad  sample
> 
> ***************************************************************************************************
> lkp-icl-2sp1: 96 threads 2 sockets Ice Lake with 256G memory
> 
> 
> 
> 
> 
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
> 
> 
> ---
> 0DAY/LKP+ Test Infrastructure                   Open Source Technology Center
> https://lists.01.org/hyperkitty/list/lkp@lists.01.org       Intel Corporation
> 
> Thanks,
> Oliver Sang
> 


> #!/bin/sh
> 
> export_top_env()
> {
> 	export suite='netperf'
> 	export testcase='netperf'
> 	export category='benchmark'
> 	export disable_latency_stats=1
> 	export set_nic_irq_affinity=1
> 	export ip='ipv4'
> 	export runtime=300
> 	export nr_threads=16
> 	export cluster='cs-localhost'
> 	export job_origin='netperf-small-threads.yaml'
> 	export queue_cmdline_keys=
> 	export queue='vip'
> 	export testbox='lkp-csl-2ap3'
> 	export tbox_group='lkp-csl-2ap3'
> 	export kconfig='x86_64-rhel-8.3'
> 	export submit_id='617960e80b9a930a5af4f104'
> 	export job_file='/lkp/jobs/scheduled/lkp-csl-2ap3/netperf-cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006-debian-10.4-x86_64-20200603.cgz-a0918006f9284b77397ae4f163-20211027-68186-ja0nr3-6.yaml'
> 	export id='fbecc857f957790eb9cfac7363705ffadfda23f9'
> 	export queuer_version='/lkp/xsang/.src-20211027-151141'
> 	export model='Cascade Lake'
> 	export nr_node=4
> 	export nr_cpu=192
> 	export memory='192G'
> 	export ssd_partitions=
> 	export rootfs_partition='LABEL=LKP-ROOTFS'
> 	export kernel_cmdline_hw='acpi_rsdp=0x67f44014'
> 	export brand='Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz'
> 	export commit='a0918006f9284b77397ae4f163f055c3e0f987b2'
> 	export need_kconfig_hw='{"IGB"=>"y"}
> BLK_DEV_NVME'
> 	export ucode='0x5003006'
> 	export enqueue_time='2021-10-27 22:23:36 +0800'
> 	export _id='617960f00b9a930a5af4f10a'
> 	export _rt='/result/netperf/cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006/lkp-csl-2ap3/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2'
> 	export user='lkp'
> 	export compiler='gcc-9'
> 	export LKP_SERVER='internal-lkp-server'
> 	export head_commit='955f175760f41ad2a80b07a390bac9a0444a47a6'
> 	export base_commit='519d81956ee277b4419c723adfb154603c2565ba'
> 	export branch='linux-devel/devel-hourly-20211025-030231'
> 	export rootfs='debian-10.4-x86_64-20200603.cgz'
> 	export result_root='/result/netperf/cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006/lkp-csl-2ap3/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/8'
> 	export scheduler_version='/lkp/lkp/.src-20211027-140142'
> 	export arch='x86_64'
> 	export max_uptime=2100
> 	export initrd='/osimage/debian/debian-10.4-x86_64-20200603.cgz'
> 	export bootloader_append='root=/dev/ram0
> user=lkp
> job=/lkp/jobs/scheduled/lkp-csl-2ap3/netperf-cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006-debian-10.4-x86_64-20200603.cgz-a0918006f9284b77397ae4f163-20211027-68186-ja0nr3-6.yaml
> ARCH=x86_64
> kconfig=x86_64-rhel-8.3
> branch=linux-devel/devel-hourly-20211025-030231
> commit=a0918006f9284b77397ae4f163f055c3e0f987b2
> BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/vmlinuz-5.15.0-rc4-00001-ga0918006f928
> acpi_rsdp=0x67f44014
> max_uptime=2100
> RESULT_ROOT=/result/netperf/cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006/lkp-csl-2ap3/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/8
> LKP_SERVER=internal-lkp-server
> nokaslr
> selinux=0
> debug
> apic=debug
> sysrq_always_enabled
> rcupdate.rcu_cpu_stall_timeout=100
> net.ifnames=0
> printk.devkmsg=on
> panic=-1
> softlockup_panic=1
> nmi_watchdog=panic
> oops=panic
> load_ramdisk=2
> prompt_ramdisk=0
> drbd.minor_count=8
> systemd.log_level=err
> ignore_loglevel
> console=tty0
> earlyprintk=ttyS0,115200
> console=ttyS0,115200
> vga=normal
> rw'
> 	export modules_initrd='/pkg/linux/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/modules.cgz'
> 	export bm_initrd='/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20210707.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/netperf_20210930.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/netperf-x86_64-2.7-0_20211027.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/mpstat_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/turbostat_20200721.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/turbostat-x86_64-3.7-4_20200721.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/perf_20211027.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/perf-x86_64-d25f27432f80-1_20211027.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/sar-x86_64-34c92ae-1_20200702.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/cluster_20211026.cgz'
> 	export ucode_initrd='/osimage/ucode/intel-ucode-20210222.cgz'
> 	export lkp_initrd='/osimage/user/lkp/lkp-x86_64.cgz'
> 	export site='inn'
> 	export LKP_CGI_PORT=80
> 	export LKP_CIFS_PORT=139
> 	export last_kernel='5.15.0-rc6-wt-12022-g955f175760f4'
> 	export queue_at_least_once=0
> 	export schedule_notify_address=
> 	export kernel='/pkg/linux/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/vmlinuz-5.15.0-rc4-00001-ga0918006f928'
> 	export dequeue_time='2021-10-28 03:03:20 +0800'
> 	export node_roles='server client'
> 	export job_initrd='/lkp/jobs/scheduled/lkp-csl-2ap3/netperf-cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006-debian-10.4-x86_64-20200603.cgz-a0918006f9284b77397ae4f163-20211027-68186-ja0nr3-6.cgz'
> 
> 	[ -n "$LKP_SRC" ] ||
> 	export LKP_SRC=/lkp/${user:-lkp}/src
> }
> 
> run_job()
> {
> 	echo $$ > $TMP/run-job.pid
> 
> 	. $LKP_SRC/lib/http.sh
> 	. $LKP_SRC/lib/job.sh
> 	. $LKP_SRC/lib/env.sh
> 
> 	export_top_env
> 
> 	run_setup $LKP_SRC/setup/cpufreq_governor 'performance'
> 
> 	run_monitor $LKP_SRC/monitors/wrapper kmsg
> 	run_monitor $LKP_SRC/monitors/no-stdout/wrapper boot-time
> 	run_monitor $LKP_SRC/monitors/wrapper uptime
> 	run_monitor $LKP_SRC/monitors/wrapper iostat
> 	run_monitor $LKP_SRC/monitors/wrapper heartbeat
> 	run_monitor $LKP_SRC/monitors/wrapper vmstat
> 	run_monitor $LKP_SRC/monitors/wrapper numa-numastat
> 	run_monitor $LKP_SRC/monitors/wrapper numa-vmstat
> 	run_monitor $LKP_SRC/monitors/wrapper numa-meminfo
> 	run_monitor $LKP_SRC/monitors/wrapper proc-vmstat
> 	run_monitor $LKP_SRC/monitors/wrapper proc-stat
> 	run_monitor $LKP_SRC/monitors/wrapper meminfo
> 	run_monitor $LKP_SRC/monitors/wrapper slabinfo
> 	run_monitor $LKP_SRC/monitors/wrapper interrupts
> 	run_monitor $LKP_SRC/monitors/wrapper lock_stat
> 	run_monitor lite_mode=1 $LKP_SRC/monitors/wrapper perf-sched
> 	run_monitor $LKP_SRC/monitors/wrapper softirqs
> 	run_monitor $LKP_SRC/monitors/one-shot/wrapper bdi_dev_mapping
> 	run_monitor $LKP_SRC/monitors/wrapper diskstats
> 	run_monitor $LKP_SRC/monitors/wrapper nfsstat
> 	run_monitor $LKP_SRC/monitors/wrapper cpuidle
> 	run_monitor $LKP_SRC/monitors/wrapper cpufreq-stats
> 	run_monitor $LKP_SRC/monitors/wrapper turbostat
> 	run_monitor $LKP_SRC/monitors/wrapper sched_debug
> 	run_monitor $LKP_SRC/monitors/wrapper perf-stat
> 	run_monitor $LKP_SRC/monitors/wrapper mpstat
> 	run_monitor lite_mode=1 $LKP_SRC/monitors/no-stdout/wrapper perf-profile
> 	run_monitor $LKP_SRC/monitors/wrapper oom-killer
> 	run_monitor $LKP_SRC/monitors/plain/watchdog
> 
> 	if role server
> 	then
> 		start_daemon $LKP_SRC/daemon/netserver
> 	fi
> 
> 	if role client
> 	then
> 		run_test test='TCP_CRR' $LKP_SRC/tests/wrapper netperf
> 	fi
> }
> 
> extract_stats()
> {
> 	export stats_part_begin=
> 	export stats_part_end=
> 
> 	env test='TCP_CRR' $LKP_SRC/stats/wrapper netperf
> 	$LKP_SRC/stats/wrapper kmsg
> 	$LKP_SRC/stats/wrapper boot-time
> 	$LKP_SRC/stats/wrapper uptime
> 	$LKP_SRC/stats/wrapper iostat
> 	$LKP_SRC/stats/wrapper vmstat
> 	$LKP_SRC/stats/wrapper numa-numastat
> 	$LKP_SRC/stats/wrapper numa-vmstat
> 	$LKP_SRC/stats/wrapper numa-meminfo
> 	$LKP_SRC/stats/wrapper proc-vmstat
> 	$LKP_SRC/stats/wrapper meminfo
> 	$LKP_SRC/stats/wrapper slabinfo
> 	$LKP_SRC/stats/wrapper interrupts
> 	$LKP_SRC/stats/wrapper lock_stat
> 	env lite_mode=1 $LKP_SRC/stats/wrapper perf-sched
> 	$LKP_SRC/stats/wrapper softirqs
> 	$LKP_SRC/stats/wrapper diskstats
> 	$LKP_SRC/stats/wrapper nfsstat
> 	$LKP_SRC/stats/wrapper cpuidle
> 	$LKP_SRC/stats/wrapper turbostat
> 	$LKP_SRC/stats/wrapper sched_debug
> 	$LKP_SRC/stats/wrapper perf-stat
> 	$LKP_SRC/stats/wrapper mpstat
> 	env lite_mode=1 $LKP_SRC/stats/wrapper perf-profile
> 
> 	$LKP_SRC/stats/wrapper time netperf.time
> 	$LKP_SRC/stats/wrapper dmesg
> 	$LKP_SRC/stats/wrapper kmsg
> 	$LKP_SRC/stats/wrapper last_state
> 	$LKP_SRC/stats/wrapper stderr
> 	$LKP_SRC/stats/wrapper time
> }
> 
> "$@"

> ---
> :#! jobs/netperf-small-threads.yaml:
> suite: netperf
> testcase: netperf
> category: benchmark
> :# upto 90% CPU cycles may be used by latency stats:
> disable_latency_stats: 1
> set_nic_irq_affinity: 1
> ip: ipv4
> runtime: 300s
> nr_threads: 16
> cluster: cs-localhost
> if role server:
>   netserver:
> if role client:
>   netperf:
>     test: TCP_CRR
> job_origin: netperf-small-threads.yaml
> :#! queue options:
> queue_cmdline_keys: []
> queue: vip
> testbox: lkp-csl-2ap3
> tbox_group: lkp-csl-2ap3
> kconfig: x86_64-rhel-8.3
> submit_id: 617960e80b9a930a5af4f104
> job_file: "/lkp/jobs/scheduled/lkp-csl-2ap3/netperf-cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006-debian-10.4-x86_64-20200603.cgz-a0918006f9284b77397ae4f163-20211027-68186-ja0nr3-4.yaml"
> id: d804900eed74d058b23143a825d247e0f8d03392
> queuer_version: "/lkp/xsang/.src-20211027-151141"
> :#! hosts/lkp-csl-2ap3:
> model: Cascade Lake
> nr_node: 4
> nr_cpu: 192
> memory: 192G
> ssd_partitions:
> rootfs_partition: LABEL=LKP-ROOTFS
> kernel_cmdline_hw: acpi_rsdp=0x67f44014
> brand: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz
> :#! include/category/benchmark:
> kmsg:
> boot-time:
> uptime:
> iostat:
> heartbeat:
> vmstat:
> numa-numastat:
> numa-vmstat:
> numa-meminfo:
> proc-vmstat:
> proc-stat:
> meminfo:
> slabinfo:
> interrupts:
> lock_stat:
> perf-sched:
>   lite_mode: 1
> softirqs:
> bdi_dev_mapping:
> diskstats:
> nfsstat:
> cpuidle:
> cpufreq-stats:
> turbostat:
> sched_debug:
> perf-stat:
> mpstat:
> perf-profile:
>   lite_mode: 1
> :#! include/category/ALL:
> cpufreq_governor: performance
> :#! include/queue/cyclic:
> commit: a0918006f9284b77397ae4f163f055c3e0f987b2
> :#! include/testbox/lkp-csl-2ap3:
> need_kconfig_hw:
> - IGB: y
> - BLK_DEV_NVME
> ucode: '0x5003006'
> enqueue_time: 2021-10-27 22:23:36.647383432 +08:00
> _id: 617960f00b9a930a5af4f108
> _rt: "/result/netperf/cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006/lkp-csl-2ap3/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2"
> :#! schedule options:
> user: lkp
> compiler: gcc-9
> LKP_SERVER: internal-lkp-server
> head_commit: 955f175760f41ad2a80b07a390bac9a0444a47a6
> base_commit: 519d81956ee277b4419c723adfb154603c2565ba
> branch: linux-devel/devel-hourly-20211025-030231
> rootfs: debian-10.4-x86_64-20200603.cgz
> result_root: "/result/netperf/cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006/lkp-csl-2ap3/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/0"
> scheduler_version: "/lkp/lkp/.src-20211027-140142"
> arch: x86_64
> max_uptime: 2100
> initrd: "/osimage/debian/debian-10.4-x86_64-20200603.cgz"
> bootloader_append:
> - root=/dev/ram0
> - user=lkp
> - job=/lkp/jobs/scheduled/lkp-csl-2ap3/netperf-cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006-debian-10.4-x86_64-20200603.cgz-a0918006f9284b77397ae4f163-20211027-68186-ja0nr3-4.yaml
> - ARCH=x86_64
> - kconfig=x86_64-rhel-8.3
> - branch=linux-devel/devel-hourly-20211025-030231
> - commit=a0918006f9284b77397ae4f163f055c3e0f987b2
> - BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/vmlinuz-5.15.0-rc4-00001-ga0918006f928
> - acpi_rsdp=0x67f44014
> - max_uptime=2100
> - RESULT_ROOT=/result/netperf/cs-localhost-performance-ipv4-16-300s-TCP_CRR-ucode=0x5003006/lkp-csl-2ap3/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/0
> - LKP_SERVER=internal-lkp-server
> - nokaslr
> - selinux=0
> - debug
> - apic=debug
> - sysrq_always_enabled
> - rcupdate.rcu_cpu_stall_timeout=100
> - net.ifnames=0
> - printk.devkmsg=on
> - panic=-1
> - softlockup_panic=1
> - nmi_watchdog=panic
> - oops=panic
> - load_ramdisk=2
> - prompt_ramdisk=0
> - drbd.minor_count=8
> - systemd.log_level=err
> - ignore_loglevel
> - console=tty0
> - earlyprintk=ttyS0,115200
> - console=ttyS0,115200
> - vga=normal
> - rw
> modules_initrd: "/pkg/linux/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/modules.cgz"
> bm_initrd: "/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20210707.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/netperf_20210930.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/netperf-x86_64-2.7-0_20211025.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/mpstat_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/turbostat_20200721.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/turbostat-x86_64-3.7-4_20200721.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/perf_20211027.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/perf-x86_64-d25f27432f80-1_20211027.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/sar-x86_64-34c92ae-1_20200702.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/cluster_20211026.cgz"
> ucode_initrd: "/osimage/ucode/intel-ucode-20210222.cgz"
> lkp_initrd: "/osimage/user/lkp/lkp-x86_64.cgz"
> site: inn
> :#! /lkp/lkp/.src-20211026-143536/include/site/inn:
> LKP_CGI_PORT: 80
> LKP_CIFS_PORT: 139
> oom-killer:
> watchdog:
> :#! runtime status:
> last_kernel: 5.15.0-rc6-wt-12022-g955f175760f4
> queue_at_least_once: 0
> :#! /lkp/lkp/.src-20211026-205023/include/site/inn:
> :#! user overrides:
> schedule_notify_address:
> kernel: "/pkg/linux/x86_64-rhel-8.3/gcc-9/a0918006f9284b77397ae4f163f055c3e0f987b2/vmlinuz-5.15.0-rc4-00001-ga0918006f928"
> dequeue_time: 2021-10-28 02:02:15.174367353 +08:00
> :#! /lkp/lkp/.src-20211027-140142/include/site/inn:
> job_state: finished
> loadavg: 6.09 10.58 5.53 1/1355 19660
> start_time: '1635357811'
> end_time: '1635358116'
> version: "/lkp/lkp/.src-20211027-140222:5f87ddf4:8610dc698"

> 
> for cpu_dir in /sys/devices/system/cpu/cpu[0-9]*
> do
> 	online_file="$cpu_dir"/online
> 	[ -f "$online_file" ] && [ "$(cat "$online_file")" -eq 0 ] && continue
> 
> 	file="$cpu_dir"/cpufreq/scaling_governor
> 	[ -f "$file" ] && echo "performance" > "$file"
> done
> 
> netserver -4 -D
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300  &
> wait
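
The 16 identical client invocations in the reproduce script above can be expressed as a loop; a minimal sketch (the `gen_clients` helper is hypothetical and not part of the LKP-generated file — it only emits the command lines rather than running them, so it works without netperf installed):

```shell
#!/bin/sh
# Sketch only: emit the same 16 netperf client command lines that the
# generated reproduce script hard-codes, matching nr_threads=16 from
# the job file above. gen_clients is a hypothetical illustration helper.
NR_THREADS=16

gen_clients()
{
	i=1
	while [ "$i" -le "$NR_THREADS" ]; do
		echo "netperf -4 -H 127.0.0.1 -t TCP_CRR -c -C -l 300 &"
		i=$((i + 1))
	done
}

gen_clients
```

Piping the output through `sh`, followed by `wait`, would reproduce the original parallel client run.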
Yin, Fengwei Nov. 10, 2021, 1:54 a.m. UTC | #2
Hi Kees,

On 11/10/21 1:21 AM, Kees Cook wrote:
>>     555600            -0.1%     555305        netperf.Throughput_total_tps
>>      34725            -0.1%      34706        netperf.Throughput_tps
>>
>>
>> Fengwei also helped review these results and commented:
>> I suppose these three CPUs have different cache policy. It also could be
>> related with netperf throughput testing.
> Does moving the syscall implementation somewhere else change things?
> That's a _huge_ performance change for something that isn't even called.
> What's going on here?

So far we only tried a trick change that keeps the other kernel function
addresses unchanged when the new function is added; we did not try moving
the new function around, even though it is never called. We can try
relocating it to see the impact on netperf throughput.


We also tried the original patch (without the change that keeps kernel
function addresses stable) on other boxes. As reported, the regression
differs across CPUs, e.g.:
       -11.6% vs -5.7% vs 0.1%
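
For reference, the quoted percentages are plain relative changes of a counter against the base kernel; a minimal sketch of the computation, using the netperf.Throughput_total_tps values quoted elsewhere in this thread:

```python
def pct_change(base: float, patched: float) -> float:
    """Relative change of a counter, as shown in LKP comparison tables."""
    return (patched - base) / base * 100.0

# netperf.Throughput_total_tps values quoted in this thread:
cooper_lake = pct_change(333492, 314346)  # about -5.7%
ice_lake = pct_change(555600, 555305)     # about -0.1%
```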

So my guess is that the CPUs in these test boxes have different cache
policies, which causes a different performance impact when kernel
function/data addresses change.

Yes, this is strange. We don't know the exact reason yet; it needs deeper
investigation.


Regards
Yin, Fengwei

> 
> -Kees
>
Mickaël Salaün Nov. 10, 2021, 8:52 a.m. UTC | #3
On 09/11/2021 18:21, Kees Cook wrote:
> On Fri, Nov 05, 2021 at 02:41:59PM +0800, kernel test robot wrote:
>>
>>
>> Greeting,
>>
>> FYI, we noticed a -11.6% regression of netperf.Throughput_tps due to commit:
>>
>>
>> commit: a0918006f9284b77397ae4f163f055c3e0f987b2 ("[PATCH v15 1/3] fs: Add trusted_for(2) syscall implementation and related sysctl")
>> url: https://github.com/0day-ci/linux/commits/Micka-l-Sala-n/Add-trusted_for-2-was-O_MAYEXEC/20211013-032533
>> patch link: https://lore.kernel.org/kernel-hardening/20211012192410.2356090-2-mic@digikod.net
>>
>> in testcase: netperf
>> on test machine: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
>> with following parameters:
>>
>> 	ip: ipv4
>> 	runtime: 300s
>> 	nr_threads: 16
>> 	cluster: cs-localhost
>> 	test: TCP_CRR
>> 	cpufreq_governor: performance
>> 	ucode: 0x5003006
>>
>> test-description: Netperf is a benchmark that can be use to measure various aspect of networking performance.
>> test-url: http://www.netperf.org/netperf/
>>
>>
>> please be noted we made out some further analysis/tests, as Fengwei mentioned:
>> ==============================================================================
>> Here is my investigation result of this regression:
>>
>> If I add patch to make sure the kernel function address and data address is
>> almost same even with this patch, there is almost no performance delta(0.1%)
>> w/o the patch.
>>
>> And if I only make sure function address same w/o the patch, the performance
>> delta is about 5.1%.
>>
>> So suppose this regression is triggered by different function and data address.
>> We don't know why the different address could bring such kind of regression yet
>> ===============================================================================
>>
>>
>> we also tested on other platforms.
>> on a Cooper Lake (Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory),
>> we also observed regression but the gap is smaller:
>> =========================================================================================
>> cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
>>   cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/16/debian-10.4-x86_64-20200603.cgz/300s/lkp-cpl-4sp1/TCP_CRR/netperf/0x700001e
>>
>> commit:
>>   v5.15-rc4
>>   a0918006f9284b77397ae4f163f055c3e0f987b2
>>
>>        v5.15-rc4 a0918006f9284b77397ae4f163f
>> ---------------- ---------------------------
>>          %stddev     %change         %stddev
>>              \          |                \
>>     333492            -5.7%     314346 ±  2%  netperf.Throughput_total_tps
>>      20843            -4.5%      19896        netperf.Throughput_tps
>>
>>
>> but no regression on a 96-thread, 2-socket Ice Lake with 256G memory:
>> =========================================================================================
>> cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
>>   cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/16/debian-10.4-x86_64-20200603.cgz/300s/lkp-icl-2sp1/TCP_CRR/netperf/0xb000280
>>
>> commit:
>>   v5.15-rc4
>>   a0918006f9284b77397ae4f163f055c3e0f987b2
>>
>>        v5.15-rc4 a0918006f9284b77397ae4f163f
>> ---------------- ---------------------------
>>          %stddev     %change         %stddev
>>              \          |                \
>>     555600            -0.1%     555305        netperf.Throughput_total_tps
>>      34725            -0.1%      34706        netperf.Throughput_tps
>>
>>
>> Fengwei also helped review these results and commented:
>> I suppose these three CPUs have different cache policies. It could also be
>> related to the netperf throughput testing.
> 
> Does moving the syscall implementation somewhere else change things?
> That's a _huge_ performance change for something that isn't even called.
> What's going on here?

This regression doesn't make sense. I guess this is the result of a
flaky netperf test, maybe because the test machine was overloaded at
that time.
Yin, Fengwei Nov. 11, 2021, 3:29 a.m. UTC | #4
On 11/10/2021 4:52 PM, Mickaël Salaün wrote:
>>> ---------------- ---------------------------
>>>          %stddev     %change         %stddev
>>>              \          |                \
>>>     555600            -0.1%     555305        netperf.Throughput_total_tps
>>>      34725            -0.1%      34706        netperf.Throughput_tps
>>>
>>>
>>> Fengwei also helped review these results and commented:
>>> I suppose these three CPUs have different cache policies. It could also be
>>> related to the netperf throughput testing.
>> Does moving the syscall implementation somewhere else change things?
>> That's a _huge_ performance change for something that isn't even called.
>> What's going on here?
> This regression doesn't make sense. I guess this is the result of a
> flaky netperf test, maybe because the test machine was overloaded at
> that time.

I agree the test result looks strange, but I don't think the test machine
or the test methodology has an issue. It's not possible that the test box was
overloaded while the test case was running. We ran the test several times
(> 12 times) on different days. Thanks.


Regards
Yin, Fengwei
Yin, Fengwei Nov. 12, 2021, 12:25 p.m. UTC | #5
Hi Kees,

On 11/10/2021 1:21 AM, Kees Cook wrote:
>> I suppose these three CPUs have different cache policies. It could also be
>> related to the netperf throughput testing.
> Does moving the syscall implementation somewhere else change things?
I moved the syscall implementation to a standalone file and put that file
in the net directory for testing. The new patch looks like this:
https://zerobin.net/?a2b782afadf3c428#Me8l4AJuhiSCfaLVWVzydAVIK6ves0EVIVD76wLnVQo=


The test results are as follows:
     - on Cascade Lake: -10.4%
    356365           -10.4%     319180        netperf.Throughput_total_tps
     22272           -10.4%      19948        netperf.Throughput_tps


     - on Cooper Lake: -4.0%
    345772 ±  4%      -4.0%     331814        netperf.Throughput_total_tps
     21610 ±  4%      -4.0%      20738        netperf.Throughput_tps


     - on Ice Lake: -1.1%
    509824            -1.1%     504434        netperf.Throughput_total_tps
     31864            -1.1%      31527        netperf.Throughput_tps


Regards
Yin, Fengwei
diff mbox series

Patch

diff --git a/Documentation/admin-guide/sysctl/fs.rst b/Documentation/admin-guide/sysctl/fs.rst
index 2a501c9ddc55..e364d6c45790 100644
--- a/Documentation/admin-guide/sysctl/fs.rst
+++ b/Documentation/admin-guide/sysctl/fs.rst
@@ -48,6 +48,7 @@  Currently, these files are in /proc/sys/fs:
 - suid_dumpable
 - super-max
 - super-nr
+- trusted_for_policy
 
 
 aio-nr & aio-max-nr
@@ -382,3 +383,52 @@  Each "watch" costs roughly 90 bytes on a 32bit kernel, and roughly 160 bytes
 on a 64bit one.
 The current default value for  max_user_watches  is the 1/25 (4%) of the
 available low memory, divided for the "watch" cost in bytes.
+
+
+trusted_for_policy
+------------------
+
+An interpreter can call :manpage:`trusted_for(2)` with a
+``TRUSTED_FOR_EXECUTION`` usage to check that opened regular files are expected
+to be executable.  If the file is not identified as executable, then the
+syscall returns -EACCES.  This may allow a script interpreter to check
+executable permission before reading commands from a file, or a dynamic linker
+to only load executable shared objects.  One interesting use case is to enforce
+a "write xor execute" policy through interpreters.
+
+The ability to restrict code execution must be thought as a system-wide policy,
+which first starts by restricting mount points with the ``noexec`` option.
+This option is also automatically applied to special filesystems such as /proc .
+This prevents files on such mount points from being directly executed by the kernel
+or mapped as executable memory (e.g. libraries).  With script interpreters
+using :manpage:`trusted_for(2)`, the executable permission can then be checked
+before reading commands from files.  This makes it possible to enforce the
+``noexec`` at the interpreter level, and thus propagates this security policy
+to scripts.  To be fully effective, these interpreters also need to handle the
+other ways to execute code: command line parameters (e.g., option ``-e`` for
+Perl), module loading (e.g., option ``-m`` for Python), stdin, file sourcing,
+environment variables, configuration files, etc.  According to the threat
+model, it may be acceptable to allow some script interpreters (e.g.  Bash) to
+interpret commands from stdin, be it a TTY or a pipe, because it may not be
+enough to (directly) perform syscalls.
+
+There are two complementary security policies: enforce the ``noexec`` mount
+option, and enforce executable file permission.  These policies are handled by
+the ``fs.trusted_for_policy`` sysctl (writable only with ``CAP_SYS_ADMIN``) as
+a bitmask:
+
+1 - Mount restriction: checks that the mount options for the underlying VFS
+    mount do not prevent execution.
+
+2 - File permission restriction: checks that the file is marked as
+    executable for the current process (e.g., POSIX permissions, ACLs).
+
+Note that as long as a policy is enforced, checking any non-regular file with
+:manpage:`trusted_for(2)` returns -EACCES (e.g. TTYs, pipes), even when such a
+file is marked as executable or is on an executable mount point.
+
+Code samples can be found in
+tools/testing/selftests/interpreter/trust_policy_test.c and interpreter patches
+(for the original O_MAYEXEC) are available at
+https://github.com/clipos-archive/clipos4_portage-overlay/search?q=O_MAYEXEC .
+See also an overview article: https://lwn.net/Articles/820000/ .
diff --git a/fs/open.c b/fs/open.c
index daa324606a41..c79c138a638c 100644
--- a/fs/open.c
+++ b/fs/open.c
@@ -32,6 +32,8 @@ 
 #include <linux/ima.h>
 #include <linux/dnotify.h>
 #include <linux/compat.h>
+#include <linux/sysctl.h>
+#include <uapi/linux/trusted-for.h>
 
 #include "internal.h"
 
@@ -480,6 +482,114 @@  SYSCALL_DEFINE2(access, const char __user *, filename, int, mode)
 	return do_faccessat(AT_FDCWD, filename, mode, 0);
 }
 
+#define TRUST_POLICY_EXEC_MOUNT			BIT(0)
+#define TRUST_POLICY_EXEC_FILE			BIT(1)
+
+int sysctl_trusted_for_policy __read_mostly;
+
+/**
+ * sys_trusted_for - Check that a FD is trusted for a specific usage
+ *
+ * @fd: File descriptor to check.
+ * @usage: Identify the user space usage intended for the file descriptor (only
+ *         TRUSTED_FOR_EXECUTION for now).
+ * @flags: Must be 0.
+ *
+ * This system call enables user space to ask the kernel: is this file
+ * descriptor's content trusted to be used for this purpose?  The set of @usage
+ * currently only contains TRUSTED_FOR_EXECUTION, but others may follow (e.g.
+ * configuration, sensitive data).  If the kernel identifies the file
+ * descriptor as trustworthy for this usage, this call returns 0 and the caller
+ * should then take this information into account.
+ *
+ * The execution usage means that the content of the file descriptor is trusted
+ * according to the system policy to be executed by user space, which means
+ * that it interprets the content or (tries to) map it as executable memory.
+ *
+ * A simple system-wide security policy can be set by the system administrator
+ * through a sysctl configuration consistent with the mount points or the file
+ * access rights: Documentation/admin-guide/sysctl/fs.rst
+ *
+ * @flags could be used in the future to do complementary checks (e.g.
+ * signature or integrity requirements, origin of the file).
+ *
+ * Possible returned errors are:
+ *
+ * - EINVAL: unknown @usage or unknown @flags;
+ * - EBADF: @fd is not a file descriptor for the calling thread;
+ * - EACCES: the requested usage is denied (and user space should enforce it).
+ */
+SYSCALL_DEFINE3(trusted_for, const int, fd, const enum trusted_for_usage, usage,
+		const u32, flags)
+{
+	int mask, err = -EACCES;
+	struct fd f;
+	struct inode *inode;
+
+	if (flags)
+		return -EINVAL;
+
+	/* Only handles execution for now. */
+	if (usage != TRUSTED_FOR_EXECUTION)
+		return -EINVAL;
+	mask = MAY_EXEC;
+
+	f = fdget(fd);
+	if (!f.file)
+		return -EBADF;
+	inode = file_inode(f.file);
+
+	/*
+	 * For compatibility reasons, without a defined security policy, we
+	 * must map the execute permission to the read permission.  Indeed,
+	 * from user space point of view, being able to execute data (e.g.
+	 * scripts) implies to be able to read this data.
+	 */
+	if ((mask & MAY_EXEC)) {
+		/*
+		 * If a system-wide execute policy is enforced, then forbid
+		 * access to non-regular files and special superblocks.
+		 */
+		if ((sysctl_trusted_for_policy & (TRUST_POLICY_EXEC_MOUNT |
+						TRUST_POLICY_EXEC_FILE))) {
+			if (!S_ISREG(inode->i_mode))
+				goto out_fd;
+			/*
+			 * Denies access to pseudo filesystems that will never
+			 * be mountable (e.g. sockfs, pipefs) but can still be
+			 * reachable through /proc/self/fd, or memfd-like file
+			 * descriptors, or nsfs-like files.
+			 *
+			 * According to the selftests, SB_NOEXEC seems to be
+			 * only used by proc and nsfs filesystems.
+			 */
+			if ((f.file->f_path.dentry->d_sb->s_flags &
+						(SB_NOUSER | SB_KERNMOUNT | SB_NOEXEC)))
+				goto out_fd;
+		}
+
+		if ((sysctl_trusted_for_policy & TRUST_POLICY_EXEC_MOUNT) &&
+				path_noexec(&f.file->f_path))
+			goto out_fd;
+		/*
+		 * For compatibility reasons, if the system-wide policy doesn't
+		 * enforce file permission checks, then replace the execute
+		 * permission request with a read permission request.
+		 */
+		if (!(sysctl_trusted_for_policy & TRUST_POLICY_EXEC_FILE))
+			mask &= ~MAY_EXEC;
+		/* To be executed *by* user space, files must be readable. */
+		mask |= MAY_READ;
+	}
+
+	err = inode_permission(file_mnt_user_ns(f.file), inode,
+			mask | MAY_ACCESS);
+
+out_fd:
+	fdput(f);
+	return err;
+}
+
 SYSCALL_DEFINE1(chdir, const char __user *, filename)
 {
 	struct path path;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e7a633353fd2..9689b8a22ec5 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -86,6 +86,7 @@  extern int sysctl_protected_symlinks;
 extern int sysctl_protected_hardlinks;
 extern int sysctl_protected_fifos;
 extern int sysctl_protected_regular;
+extern int sysctl_trusted_for_policy;
 
 typedef __kernel_rwf_t rwf_t;
 
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 252243c7783d..8a69a6b1c1ef 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -71,6 +71,7 @@  struct open_how;
 struct mount_attr;
 struct landlock_ruleset_attr;
 enum landlock_rule_type;
+enum trusted_for_usage;
 
 #include <linux/types.h>
 #include <linux/aio_abi.h>
@@ -461,6 +462,7 @@  asmlinkage long sys_fallocate(int fd, int mode, loff_t offset, loff_t len);
 asmlinkage long sys_faccessat(int dfd, const char __user *filename, int mode);
 asmlinkage long sys_faccessat2(int dfd, const char __user *filename, int mode,
 			       int flags);
+asmlinkage long sys_trusted_for(int fd, enum trusted_for_usage usage, u32 flags);
 asmlinkage long sys_chdir(const char __user *filename);
 asmlinkage long sys_fchdir(unsigned int fd);
 asmlinkage long sys_chroot(const char __user *filename);
diff --git a/include/uapi/linux/trusted-for.h b/include/uapi/linux/trusted-for.h
new file mode 100644
index 000000000000..cc4f030c5103
--- /dev/null
+++ b/include/uapi/linux/trusted-for.h
@@ -0,0 +1,18 @@ 
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_TRUSTED_FOR_H
+#define _UAPI_LINUX_TRUSTED_FOR_H
+
+/**
+ * enum trusted_for_usage - Usage for which a file descriptor is trusted
+ *
+ * Argument of trusted_for(2).
+ */
+enum trusted_for_usage {
+	/**
+	 * @TRUSTED_FOR_EXECUTION: Check that the data read from a file
+	 * descriptor is trusted to be executed or interpreted (e.g. scripts).
+	 */
+	TRUSTED_FOR_EXECUTION = 1,
+};
+
+#endif /* _UAPI_LINUX_TRUSTED_FOR_H */
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 083be6af29d7..002dc830c165 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -115,6 +115,7 @@  static int sixty = 60;
 
 static int __maybe_unused neg_one = -1;
 static int __maybe_unused two = 2;
+static int __maybe_unused three = 3;
 static int __maybe_unused four = 4;
 static unsigned long zero_ul;
 static unsigned long one_ul = 1;
@@ -936,7 +937,6 @@  static int proc_taint(struct ctl_table *table, int write,
 	return err;
 }
 
-#ifdef CONFIG_PRINTK
 static int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
 				void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -945,7 +945,6 @@  static int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
 
 	return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 }
-#endif
 
 /**
  * struct do_proc_dointvec_minmax_conv_param - proc_dointvec_minmax() range checking structure
@@ -3357,6 +3356,15 @@  static struct ctl_table fs_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= &two,
 	},
+	{
+		.procname       = "trusted_for_policy",
+		.data           = &sysctl_trusted_for_policy,
+		.maxlen         = sizeof(int),
+		.mode           = 0600,
+		.proc_handler	= proc_dointvec_minmax_sysadmin,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= &three,
+	},
 #if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
 	{
 		.procname	= "binfmt_misc",