
[RFC,00/13] x86 User Interrupts support

Message ID 20210913200132.3396598-1-sohil.mehta@intel.com (mailing list archive)

Sohil Mehta Sept. 13, 2021, 8:01 p.m. UTC
User Interrupts Introduction
============================

User Interrupts (Uintr) is a hardware technology that enables delivering
interrupts directly to user space.

Today, virtually all communication across privilege boundaries happens by going
through the kernel. These include signals, pipes, remote procedure calls and
hardware interrupt based notifications. User interrupts provide the foundation
for more efficient (low latency and low CPU utilization) versions of these
common operations by avoiding transitions through the kernel.

In the User Interrupts hardware architecture, a receiver is always expected to
be a user space task. However, a user interrupt can be sent by another user
space task, kernel or an external source (like a device).

In addition to the general infrastructure to receive user interrupts, this
series introduces a single source: interrupts from another user task.  These
are referred to as User IPIs.

The first implementation of User IPIs will be in the Intel processor code-named
Sapphire Rapids. Refer to Chapter 11 of the Intel Architecture Instruction Set
Extensions programming reference [1] for details of the hardware architecture.

Series-reviewed-by: Tony Luck <tony.luck@intel.com>

Main goals of this RFC
======================
- Introduce this upcoming technology to the community.
This cover letter includes a hardware architecture summary along with the
software architecture and kernel design choices. This post is a bit long as a
result. Hopefully, it helps answer more questions than it creates :) I am also
planning to talk about User Interrupts next week at the LPC Kernel summit.

- Discuss potential use cases.
We are starting to look at actual usages and libraries (like libevent[2] and
liburing[3]) that can take advantage of this technology. Unfortunately, we
don't have much to share on this right now. We need some help from the
community to identify usages that can benefit from this. We would like to make
sure the proposed APIs work for the eventual consumers.

- Get early feedback on the software architecture.
We are hoping to get some feedback on the direction of overall software
architecture - starting with User IPI, extending it for kernel-to-user
interrupt notifications and external interrupts in the future. 

- Discuss some of the main architecture opens.
There is a lot of work that still needs to happen to enable this technology. We
are looking for input on future patches that would be of interest. Here are
some of the big opens that we are looking to resolve.
* Should Uintr interrupt all blocking system calls like sleep(), read(),
  poll(), etc.? If so, should we implement an SA_RESTART type of mechanism
  similar to signals? - Refer to the Blocking for interrupts section below.

* Should the User Interrupt Target table (UITT) be shared between threads of a
  multi-threaded application or maybe even across processes? - Refer to the
  Sharing the UITT section below.

Why care about this? - Micro benchmark performance
==================================================
There is a ~9x or higher performance improvement using User IPI over other IPC
mechanisms for event signaling.

Below is the average normalized latency for 1M ping-pong IPC notifications
with message size = 1.

+------------+-------------------------+
| IPC type   |   Relative Latency      |
|            |(normalized to User IPI) |
+------------+-------------------------+
| User IPI   |                     1.0 |
| Signal     |                    14.8 |
| Eventfd    |                     9.7 |
| Pipe       |                    16.3 |
| Domain     |                    17.3 |
+------------+-------------------------+

Results have been estimated based on tests on internal hardware with Linux
v5.14 + User IPI patches.

Original benchmark: https://github.com/goldsborough/ipc-bench
Updated benchmark: https://github.com/intel/uintr-ipc-bench/tree/linux-rfc-v1

*Performance varies by use, configuration and other factors.

How does it work underneath? - Hardware Summary
===============================================
User Interrupts is a posted interrupt delivery mechanism. Interrupts are first
posted to a memory location and then delivered to the receiver when it is
running with CPL=3.

Kernel managed architectural data structures
--------------------------------------------
UPID: User Posted Interrupt Descriptor - Holds receiver interrupt vector
information and notification state (like an ongoing notification, suppressed
notifications).

UITT: User Interrupt Target Table - Stores the UPID pointer and vector
information for interrupt routing on the sender side. Referenced by the
senduipi instruction.

The interrupt state of each task is referenced via MSRs which are saved and
restored by the kernel during context switch.

Instructions
------------
senduipi <index> - Send a user IPI to a target task based on the UITT index.

clui - Mask user interrupts by clearing UIF (User Interrupt Flag).

stui - Unmask user interrupts by setting UIF.

testui - Test the current value of UIF.

uiret - Return from a user interrupt handler.
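
With GCC 11's -muintr support [4], these instructions are exposed as intrinsics
in <x86gprintrin.h>. A pseudocode-style sketch of the masking flow (illustrative
only: executing these requires hardware with user-interrupt support, and
uipi_index is a placeholder obtained from the sender registration described
later; uiret has no intrinsic since the compiler emits it on handler return):

```
#include <x86gprintrin.h>

_clui();                  /* clui: clear UIF, mask user interrupts     */
/* ... no user interrupts are delivered in this window ...             */
if (!_testui())           /* testui: UIF is currently 0 (masked)       */
        _stui();          /* stui: set UIF, unmask user interrupts     */

_senduipi(uipi_index);    /* senduipi: post a user IPI via UITT entry  */
```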

User IPI
--------
When a User IPI sender executes 'senduipi <index>', the hardware looks up the
UITT entry referenced by the index and posts the interrupt vector (0-63) into
the receiver's UPID.

If the receiver is running (CPL=3), the sender CPU sends a physical IPI to
the receiver's CPU. On the receiver side this IPI is detected as a User
Interrupt. The User Interrupt handler for the receiver is invoked and the
vector number (0-63) is pushed onto the stack.

Upon execution of 'uiret' in the interrupt handler, control is transferred
back to the instruction that was interrupted.

Refer to Chapter 11 of the Intel Architecture instruction set extensions [1]
for more details.

Application interface - Software Architecture
=============================================
User Interrupts (Uintr) is an opt-in feature (unlike signals). Applications
wanting to use Uintr are expected to register themselves with the kernel using
the Uintr related system calls. A Uintr receiver is always a userspace task. A
Uintr sender can be another userspace task, kernel or a device.

1) A receiver can register/unregister an interrupt handler using the Uintr
receiver related syscalls. 
		uintr_register_handler(handler, flags)
		uintr_unregister_handler(flags)

2) A syscall also allows a receiver to register a vector and create a user
interrupt file descriptor - uintr_fd. 
		uintr_fd = uintr_create_fd(vector, flags)

Uintr can be useful in some of the usages where eventfd or signals are used for
frequent userspace event notifications. The semantics of uintr_fd are somewhat
similar to an eventfd() or the write end of a pipe.

3) Any sender with access to uintr_fd can use it to deliver events (in this
case - interrupts) to a receiver. A sender task can manage its connection with
the receiver using the sender related syscalls based on uintr_fd.
		uipi_index = uintr_register_sender(uintr_fd, flags)

Using an FD abstraction provides a secure mechanism to connect with a receiver.
The FD sharing and isolation mechanisms put in place by the kernel would extend
to Uintr as well. 

4a) After the initial setup, a sender task can use the SENDUIPI instruction
along with the uipi_index to generate user IPIs without any kernel
intervention.
		SENDUIPI <uipi_index>

If the receiver is running (CPL=3), then the user interrupt is delivered
directly without a kernel transition. If the receiver isn't running the
interrupt is delivered when the receiver gets context switched back. If the
receiver is blocked in the kernel, the user interrupt is delivered to the
kernel which then unblocks the intended receiver to deliver the interrupt.
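
Putting steps 1 through 4a together, a hypothetical receiver/sender flow might
look like the pseudocode-style sketch below. This is illustrative only: the
__NR_uintr_* syscall numbers are placeholders (there are no glibc wrappers),
and building/running it needs this patch series plus GCC 11 with -muintr. Per
the compiler guide [4], uiret is emitted by the compiler when the handler
returns:

```
#include <x86gprintrin.h>
#include <sys/syscall.h>
#include <unistd.h>

volatile int uintr_received;

/* Handler ABI per the GCC 'interrupt' attribute for uintr: the vector
 * number pushed by the hardware arrives as the second argument.        */
void __attribute__((interrupt))
uintr_handler(struct __uintr_frame *frame, unsigned long long vector)
{
        uintr_received = 1;
}

/* Receiver task */
syscall(__NR_uintr_register_handler, uintr_handler, 0);        /* step 1 */
int uintr_fd = syscall(__NR_uintr_create_fd, 0 /* vector */, 0); /* step 2 */
_stui();                                 /* unmask user interrupts      */

/* Sender task (after obtaining uintr_fd, e.g. via fork() or SCM_RIGHTS) */
int uipi_index = syscall(__NR_uintr_register_sender, uintr_fd, 0); /* step 3 */
_senduipi(uipi_index);                   /* step 4a: no kernel entry    */
```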

4b) If the sender is the kernel or a device, the uintr_fd can be passed to
the relevant kernel entity to allow it to set up a connection and then generate
a user interrupt for event delivery. <The exact details of this API are still
being worked out.>

For details of the user interface and associated system calls, refer to the
Uintr man-pages draft:
https://github.com/intel/uintr-linux-kernel/tree/rfc-v1/tools/uintr/manpages.
We have also included the same content as patch 1 of this series to make it
easier to review.

Refer to the Uintr compiler programming guide [4] for details on Uintr
integration with GCC and Binutils.

Kernel design choices
=====================
Here are some of the reasons and trade-offs for the current design of the APIs.

System call interface
---------------------
Why a system call interface?: The two options we considered were a char device
at /dev or system calls (the current approach). A syscall approach avoids
exposing a core CPU feature through a driver model. Also, we want to have a
user interrupt FD per vector and share a single common interrupt handler among
all vectors. This seems easier for the kernel and userspace to accomplish
using a syscall based approach.

Data sharing using user interrupts: Uintr doesn't include a mechanism to
share/transmit data. The expectation is applications use existing data sharing
mechanisms to share data and use Uintr only for signaling.

An FD for each vector: A uintr_fd is assigned to each vector to allow
fine-grained priority and event management by the receiver. The alternative we
considered was to allocate an FD to the interrupt handler and have that shared
with the sender. However, that approach relies on the sender selecting the
vector and moves vector priority management to the sender. Also, if multiple
senders want to send unique user interrupts they would need to coordinate the
vector selection amongst themselves.

Extending the APIs: Currently, the system calls are only extendable using the
flags argument. We can add a variable size struct to some of the syscalls if
needed.

Extending existing mechanisms
-----------------------------
Uintr can be beneficial in some of the usages where eventfd() or signals are
used. Since Uintr is hardware-dependent, thread-specific and bypasses the
kernel in the fast path, it makes extending existing mechanisms harder.

Main issues with extending signals:
Signal handlers are defined significantly differently from a user interrupt
handler. An application needs to save/restore registers in a user interrupt
handler and call uiret to return from it. Also, signals can be process-directed
(or thread-directed) but user interrupts are always thread-directed.

Comparison of signals with User Interrupts:
+=====================+===========================+===========================+
|                     | Signals                   | User Interrupts           |
+=====================+===========================+===========================+
| Stacks              | Has alt stacks            | Uses application stack    |
|                     |                           | (alternate stack option   |
|                     |                           | not yet enabled)          |
+---------------------+---------------------------+---------------------------+
| Registers state     | Kernel manages incl.      | App responsible (Use GCC  |
|                     | FPU/XSTATE area           | 'interrupt' attribute for |
|                     |                           | general purpose registers)|
+---------------------+---------------------------+---------------------------+
| Blocking/Masking    | sigprocmask(2)/sa_mask    | CLUI instruction (No per  |
|                     |                           | vector masking)           |
+---------------------+---------------------------+---------------------------+
| Direction           | Uni-directional           | Uni-directional           |
+---------------------+---------------------------+---------------------------+
| Post event          | kill(), signal(),         | SENDUIPI <index> - index  |
|                     | sigqueue(), etc.          | derived from uintr_fd     |
+---------------------+---------------------------+---------------------------+
| Target              | Process-directed or       | Thread-directed           |
|                     | thread-directed           |                           |
+---------------------+---------------------------+---------------------------+
| Fork/inheritance    | Empty signal set          | Nothing is inherited      |
+---------------------+---------------------------+---------------------------+
| Execv               | Pending signals preserved | Nothing is inherited      |
+---------------------+---------------------------+---------------------------+
| Order of delivery   | Undetermined              | High to low vector numbers|
| for multiple signals|                           |                           |
+---------------------+---------------------------+---------------------------+
| Handler re-entry    | All signals except the    | No interrupts can cause   |
|                     | one being handled         | handler re-entry.         |
+---------------------+---------------------------+---------------------------+
| Delivery feedback   | 0 or -1 based on whether  | No feedback on whether the|
|                     | the signal was sent       | interrupt was sent or     |
|                     |                           | received.                 |
+---------------------+---------------------------+---------------------------+

Main issues with extending eventfd():
eventfd() has a counter value that is core to the API. User interrupts can't
have an associated counter since the signaling happens at the user level and
the hardware doesn't have a memory counter mechanism. Also, eventfd can be used
for bi-directional signaling whereas uintr_fd is uni-directional.

Comparison of eventfd with uintr_fd:
+====================+======================+==============================+
|                    | Eventfd              | uintr_fd (User Interrupt FD) |
+====================+======================+==============================+
| Object             | Counter - uint64     | Receiver vector information  |
+--------------------+----------------------+------------------------------+
| Post event         | write() to eventfd   | SENDUIPI <index> - index     |
|                    |                      | derived from uintr_fd        |
+--------------------+----------------------+------------------------------+
| Receive event      | read() on eventfd    | Implicit - Handler is        |
|                    |                      | invoked with associated      |
|                    |                      | vector.                      |
+--------------------+----------------------+------------------------------+
| Direction          | Bi-directional       | Uni-directional              |
+--------------------+----------------------+------------------------------+
| Data transmitted   | Counter - uint64     | None                         |
+--------------------+----------------------+------------------------------+
| Waiting for events | poll() family of     | No per vector wait.          |
|                    | syscalls             | uintr_wait() allows waiting  |
|                    |                      | for all user interrupts      |
+--------------------+----------------------+------------------------------+

Security Model
==============
User Interrupts is designed as an opt-in feature (unlike signals). The security
model for user interrupts is intended to be similar to eventfd(). The general
idea is that any sender with access to uintr_fd would be able to generate the
associated interrupt vector for the receiver task that created the fd.

Untrusted processes
-------------------
The current implementation expects only trusted and cooperating processes to
communicate using user interrupts. Coordination is expected between processes
for a connection teardown. In situations where coordination doesn't happen
(say, due to abrupt process exit), the kernel would end up keeping shared
resources (like UPID) allocated to avoid faults.

Currently, a sender can easily cause a denial of service for the receiver by
generating a storm of user interrupts. A user interrupt handler is invoked with
interrupts disabled, but upon execution of uiret, interrupts get enabled again
by the hardware. This can lead to the handler being invoked again before normal
execution can resume. There isn't a hardware mechanism to mask specific
interrupt vectors. 

To enable untrusted processes to communicate, we need to add a per-vector
masking option through another syscall (or maybe IOCTL). However, this can add
some complexity to the kernel code. A vector can only be masked by modifying
the UITT entries at the source. We need to be careful about races while
removing and restoring the UPID from the UITT.

Resource limits
---------------
The maximum number of receiver-sender connections would be limited by the
maximum number of open file descriptors and the size of the UITT.

The UITT size is currently a fixed 4KB, chosen somewhat arbitrarily. We plan
to make it dynamic and configurable in size. RLIMIT_MEMLOCK or ENOMEM would be
triggered when the size limits have been hit.

Main Opens
==========

Blocking for interrupts
-----------------------
User interrupts are delivered to applications immediately if they are running
in userspace. If a receiver task has blocked in the kernel using the placeholder
uintr_wait() syscall, the task would be woken up to deliver the user interrupt.
However, if the task is blocked due to any other blocking call like read(),
sleep(), etc., the interrupt will only get delivered when the application gets
scheduled again. We need to consider if applications need to receive User
Interrupts as soon as they are posted (similar to signals) when they are
blocked due to some other reason. Adding this capability would likely make the
kernel implementation more complex.

Interrupting system calls using User Interrupts would also mean we need to
consider an SA_RESTART type of mechanism. We also need to evaluate if some of
the signal handler related semantics in the kernel can be reused for User
Interrupts.

Sharing the User Interrupt Target Table (UITT)
----------------------------------------------
The current implementation assigns a unique UITT to each task. This assumes
that user interrupts are used for point-to-point communication between two
tasks. Also, this keeps the kernel implementation relatively simple.

However, there are benefits to sharing the UITT between threads of a
multi-threaded application. One, they would see a consistent view of the UITT,
i.e., SENDUIPI <index> would mean the same thing on all threads of the
application. Also, each thread doesn't have to register itself using the
common uintr_fd.
This would simplify the userspace setup and make efficient use of kernel
memory. The potential downside is that the kernel implementation to allocate,
modify, expand and free the UITT would be more complex.

A similar argument can be made for a set of processes that do a lot of IPC
amongst them. They would prefer to have a shared UITT that lets them target any
process from any process. With the current file descriptor based approach, the
connection setup can be time consuming and somewhat cumbersome. We need to
evaluate if this can be made simpler as well.

Kernel page table isolation (KPTI)
----------------------------------
SENDUIPI is a special ring-3 instruction that makes a supervisor-mode memory
access to the UPID and UITT memory. The current patches need KPTI to be
disabled for User IPIs to work. To make User IPI work with KPTI, we need to
allocate these structures from a special memory region that has supervisor
access but is mapped into userspace. The plan is to implement a mechanism
similar to the LDT.

Processors that support user interrupts are not affected by Meltdown so the
auto mode of KPTI will default to off. Users who want to force enable KPTI will
need to wait for a later version of this patch series to use user interrupts.
Please let us know if you want the development of these patches to be
prioritized (or deprioritized).

FAQs
====
Q: What happens if a process is "surprised" by a user interrupt?
A: Tasks that haven't registered with the kernel and requested user interrupts
are neither expected nor able to receive user interrupts.

Q: Do user interrupts affect kernel scheduling?
A: No. If a task is blocked waiting for user interrupts, when the kernel
receives a notification on behalf of that task it only puts the task back on
the runqueue. Delivery of a user interrupt in no way changes the scheduling
priority of a task.

Q: Does the sender get to know if the interrupt was delivered?
A: No. User interrupts only provide a posted interrupt delivery mechanism. If
applications need to know whether the interrupt was delivered they should
consider a userspace mechanism for feedback (like a shared memory counter or a
user interrupt back to the sender).

Q: Why is there no feedback on interrupt delivery?
A: Being a posted interrupt delivery mechanism, the interrupt delivery
happens in 2 steps:
1) The interrupt information is stored in a memory location (UPID).
2) The physical interrupt is delivered to the interrupt receiver.

The 2nd step could happen immediately, after an extended period, or it might
never happen based on the state of the receiver after step 1. (The receiver
could have disabled interrupts, have been context switched out or it might have
crashed during that time.) This makes it very hard for the hardware to reliably
provide feedback upon execution of SENDUIPI.

Q: Can user interrupts be nested?
A: Yes. Using the STUI instruction in the interrupt handler would allow new
user interrupts to be delivered. However, there is no TPR (task priority
register)-like mechanism to allow only higher priority interrupts. Any user
interrupt can be taken when nesting is enabled.

Q: Can a task receive all pending user interrupts in one go?
A: No. The hardware allows only one vector to be processed at a time. If a task
is interested in knowing all the interrupts that are pending then we could add
a syscall that provides the pending interrupts information.

Q: Do the processes need to be pinned to a cpu?
A: No. User interrupts will be routed correctly to whichever cpu the receiver
is running on. The kernel updates the cpu information in the UPID during
context switch.

Q: Why are UPID and UITT allocated by the kernel?
A: If allocated by user space, applications could misuse the UPID and UITT to
write to unauthorized memory and generate interrupts on any cpu. The UPID and
UITT are allocated by the kernel and accessed by the hardware with supervisor
privilege.

Patch structure for this series
===============================
- Man-pages and Kernel documentation (patch 1,2)
- Hardware enumeration (patch 3, 4)
- User IPI kernel vector reservation (patch 5)
- Syscall interface for interrupt receiver, sender and vector
  management (uintr_fd) (patch 6-12)
- Basic selftests (patch 13)

Along with the patches in this RFC, there are additional tests and samples that
are available at:
https://github.com/intel/uintr-linux-kernel/tree/rfc-v1

Links
=====
[1]: https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
[2]: https://libevent.org/
[3]: https://github.com/axboe/liburing
[4]: https://github.com/intel/uintr-compiler-guide/blob/uintr-gcc-11.1/UINTR-compiler-guide.pdf

Sohil Mehta (13):
  x86/uintr/man-page: Include man pages draft for reference
  Documentation/x86: Add documentation for User Interrupts
  x86/cpu: Enumerate User Interrupts support
  x86/fpu/xstate: Enumerate User Interrupts supervisor state
  x86/irq: Reserve a user IPI notification vector
  x86/uintr: Introduce uintr receiver syscalls
  x86/process/64: Add uintr task context switch support
  x86/process/64: Clean up uintr task fork and exit paths
  x86/uintr: Introduce vector registration and uintr_fd syscall
  x86/uintr: Introduce user IPI sender syscalls
  x86/uintr: Introduce uintr_wait() syscall
  x86/uintr: Wire up the user interrupt syscalls
  selftests/x86: Add basic tests for User IPI

 .../admin-guide/kernel-parameters.txt         |   2 +
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/user-interrupts.rst         | 107 +++
 arch/x86/Kconfig                              |  12 +
 arch/x86/entry/syscalls/syscall_32.tbl        |   6 +
 arch/x86/entry/syscalls/syscall_64.tbl        |   6 +
 arch/x86/include/asm/cpufeatures.h            |   1 +
 arch/x86/include/asm/disabled-features.h      |   8 +-
 arch/x86/include/asm/entry-common.h           |   4 +
 arch/x86/include/asm/fpu/types.h              |  20 +-
 arch/x86/include/asm/fpu/xstate.h             |   3 +-
 arch/x86/include/asm/hardirq.h                |   4 +
 arch/x86/include/asm/idtentry.h               |   5 +
 arch/x86/include/asm/irq_vectors.h            |   6 +-
 arch/x86/include/asm/msr-index.h              |   8 +
 arch/x86/include/asm/processor.h              |   8 +
 arch/x86/include/asm/uintr.h                  |  76 ++
 arch/x86/include/uapi/asm/processor-flags.h   |   2 +
 arch/x86/kernel/Makefile                      |   1 +
 arch/x86/kernel/cpu/common.c                  |  61 ++
 arch/x86/kernel/cpu/cpuid-deps.c              |   1 +
 arch/x86/kernel/fpu/core.c                    |  17 +
 arch/x86/kernel/fpu/xstate.c                  |  20 +-
 arch/x86/kernel/idt.c                         |   4 +
 arch/x86/kernel/irq.c                         |  51 +
 arch/x86/kernel/process.c                     |  10 +
 arch/x86/kernel/process_64.c                  |   4 +
 arch/x86/kernel/uintr_core.c                  | 880 ++++++++++++++++++
 arch/x86/kernel/uintr_fd.c                    | 300 ++++++
 include/linux/syscalls.h                      |   8 +
 include/uapi/asm-generic/unistd.h             |  15 +-
 kernel/sys_ni.c                               |   8 +
 scripts/checksyscalls.sh                      |   6 +
 tools/testing/selftests/x86/Makefile          |  10 +
 tools/testing/selftests/x86/uintr.c           | 147 +++
 tools/uintr/manpages/0_overview.txt           | 265 ++++++
 tools/uintr/manpages/1_register_receiver.txt  | 122 +++
 .../uintr/manpages/2_unregister_receiver.txt  |  62 ++
 tools/uintr/manpages/3_create_fd.txt          | 104 +++
 tools/uintr/manpages/4_register_sender.txt    | 121 +++
 tools/uintr/manpages/5_unregister_sender.txt  |  79 ++
 tools/uintr/manpages/6_wait.txt               |  59 ++
 42 files changed, 2626 insertions(+), 8 deletions(-)
 create mode 100644 Documentation/x86/user-interrupts.rst
 create mode 100644 arch/x86/include/asm/uintr.h
 create mode 100644 arch/x86/kernel/uintr_core.c
 create mode 100644 arch/x86/kernel/uintr_fd.c
 create mode 100644 tools/testing/selftests/x86/uintr.c
 create mode 100644 tools/uintr/manpages/0_overview.txt
 create mode 100644 tools/uintr/manpages/1_register_receiver.txt
 create mode 100644 tools/uintr/manpages/2_unregister_receiver.txt
 create mode 100644 tools/uintr/manpages/3_create_fd.txt
 create mode 100644 tools/uintr/manpages/4_register_sender.txt
 create mode 100644 tools/uintr/manpages/5_unregister_sender.txt
 create mode 100644 tools/uintr/manpages/6_wait.txt


base-commit: 6880fa6c56601bb8ed59df6c30fd390cc5f6dd8f

Comments

Dave Hansen Sept. 13, 2021, 8:27 p.m. UTC | #1
On 9/13/21 1:01 PM, Sohil Mehta wrote:
> User Interrupts (Uintr) is a hardware technology that enables delivering
> interrupts directly to user space.

Your problem in all of this is going to be convincing folks that this is
a problem worth solving.  I'd start this off with something
attention-grabbing.

Two things.  Good, snazzy writing doesn't repeat words.  You repeated
"interrupt" twice in that first sentence.  It also doesn't get my
attention.  Here's a more concise way of saying it, and also adding
something to get the reader's attention:

	User Interrupts directly deliver events to user space and are
	10x faster than the closest alternative.
Sohil Mehta Sept. 14, 2021, 7:03 p.m. UTC | #2
Resending.. There were some email delivery issues.

On 9/13/2021 1:27 PM, Dave Hansen wrote:
>	User Interrupts directly deliver events to user space and are
>	10x faster than the closest alternative.

Thanks Dave. This is definitely more attention-grabbing than the
previous intro. I'll include this next time.

One thing to note, the 10x gain is only applicable for User IPIs.
For other source of User Interrupts (like kernel-to-user
notifications and other external sources), we don't have the data
yet.

I realized the User IPI data in the cover also needs some
clarification. The 10x gain is only seen when the receiver is
spinning in User space - waiting for interrupts.

If the receiver were to block (wait) in the kernel, the performance
would drop as expected. However, User IPI (blocked) would still be
10% faster than Eventfd and 40% faster than signals.

Here is the updated table:
+---------------------+-------------------------+
| IPC type            |   Relative Latency      |
|                     |(normalized to User IPI) |
+---------------------+-------------------------+
| User IPI            |                     1.0 |
| User IPI (blocked)  |                     8.9 |
| Signal              |                    14.8 |
| Eventfd             |                     9.7 |
| Pipe                |                    16.3 |
| Domain              |                    17.3 |
+---------------------+-------------------------+

--Sohil
Greg Kroah-Hartman Sept. 23, 2021, 12:19 p.m. UTC | #3
On Tue, Sep 14, 2021 at 07:03:36PM +0000, Mehta, Sohil wrote:
> Resending.. There were some email delivery issues.
> 
> On 9/13/2021 1:27 PM, Dave Hansen wrote:
> >	User Interrupts directly deliver events to user space and are
> >	10x faster than the closest alternative.
> 
> Thanks Dave. This is definitely more attention-grabbing than the
> previous intro. I'll include this next time.
> 
> One thing to note, the 10x gain is only applicable for User IPIs.
> For other source of User Interrupts (like kernel-to-user
> notifications and other external sources), we don't have the data
> yet.
> 
> I realized the User IPI data in the cover also needs some
> clarification. The 10x gain is only seen when the receiver is
> spinning in User space - waiting for interrupts.
> 
> If the receiver were to block (wait) in the kernel, the performance
> would drop as expected. However, User IPI (blocked) would still be
> 10% faster than Eventfd and 40% faster than signals.
> 
> Here is the updated table:
> +---------------------+-------------------------+
> | IPC type            |   Relative Latency      |
> |                     |(normalized to User IPI) |
> +---------------------+-------------------------+
> | User IPI            |                     1.0 |
> | User IPI (blocked)  |                     8.9 |
> | Signal              |                    14.8 |
> | Eventfd             |                     9.7 |
> | Pipe                |                    16.3 |
> | Domain              |                    17.3 |
> +---------------------+-------------------------+

Relative is just that, "relative".  If the real values are extremely
tiny, then relative is just "this goes a tiny tiny bit faster than what
you have today in eventfd", right?

So how about "absolute"?  What are we talking here?

And this is really only for the "one userspace task waking up another
userspace task" policies.  What real workload can actually use this?

thanks,

greg k-h
Greg Kroah-Hartman Sept. 23, 2021, 2:09 p.m. UTC | #4
On Thu, Sep 23, 2021 at 02:19:05PM +0200, Greg KH wrote:
> On Tue, Sep 14, 2021 at 07:03:36PM +0000, Mehta, Sohil wrote:
> > Resending.. There were some email delivery issues.
> > 
> > On 9/13/2021 1:27 PM, Dave Hansen wrote:
> > >	User Interrupts directly deliver events to user space and are
> > >	10x faster than the closest alternative.
> > 
> > Thanks Dave. This is definitely more attention-grabbing than the
> > previous intro. I'll include this next time.
> > 
> > One thing to note, the 10x gain is only applicable for User IPIs.
> > For other sources of User Interrupts (like kernel-to-user
> > notifications and other external sources), we don't have the data
> > yet.
> > 
> > I realized the User IPI data in the cover also needs some
> > clarification. The 10x gain is only seen when the receiver is
> > spinning in User space - waiting for interrupts.
> > 
> > If the receiver were to block (wait) in the kernel, the performance
> > would drop as expected. However, User IPI (blocked) would still be
> > 10% faster than Eventfd and 40% faster than signals.
> > 
> > Here is the updated table:
> > +---------------------+-------------------------+
> > | IPC type            |   Relative Latency      |
> > |                     |(normalized to User IPI) |
> > +---------------------+-------------------------+
> > | User IPI            |                     1.0 |
> > | User IPI (blocked)  |                     8.9 |
> > | Signal              |                    14.8 |
> > | Eventfd             |                     9.7 |
> > | Pipe                |                    16.3 |
> > | Domain              |                    17.3 |
> > +---------------------+-------------------------+
> 
> Relative is just that, "relative".  If the real values are extremely
> tiny, then relative is just "this goes a tiny tiny bit faster than what
> you have today in eventfd", right?
> 
> So how about "absolute"?  What are we talking here?
> 
> And this is really only for the "one userspace task waking up another
> userspace task" policies.  What real workload can actually use this?

Also, you forgot to list Binder in the above IPC types.

And you forgot to mention that this is tied to one specific CPU type
only.  Are syscalls allowed to be created that would only work on
obscure cpus like this one?

thanks,

greg k-h
Jens Axboe Sept. 23, 2021, 2:39 p.m. UTC | #5
On 9/13/21 2:01 PM, Sohil Mehta wrote:
> - Discuss potential use cases.
> We are starting to look at actual usages and libraries (like libevent[2] and
> liburing[3]) that can take advantage of this technology. Unfortunately, we
> don't have much to share on this right now. We need some help from the
> community to identify usages that can benefit from this. We would like to make
> sure the proposed APIs work for the eventual consumers.

One use case for liburing/io_uring would be to use it instead of eventfd
for notifications. I know some folks do use eventfd right now, though
it's not that common. But if we had support for something like this,
then you could use it to know when to reap events rather than sleep in
the kernel. Or at least to be notified when new events have been posted
to the cq ring.
Dave Hansen Sept. 23, 2021, 2:46 p.m. UTC | #6
On 9/23/21 7:09 AM, Greg KH wrote:
> And you forgot to mention that this is tied to one specific CPU type
> only.  Are syscalls allowed to be created that would only work on
> obscure cpus like this one?

Well, you have to start somewhere.  For example, when memory protection
keys went in, we added three syscalls:

> 329     common  pkey_mprotect           sys_pkey_mprotect
> 330     common  pkey_alloc              sys_pkey_alloc
> 331     common  pkey_free               sys_pkey_free

At the point that I started posting these, you couldn't even buy a
system with this feature.  For a while, there was only one Intel Xeon
generation that had support.

But, if you build it, they will come.  Today, there is powerpc support
and our friends at AMD added support to their processors.  In addition,
protection keys are found across Intel's entire CPU line: from big
Xeons, down to the little Atoms you find in Chromebooks.

I encourage everyone submitting new hardware features to include
information about where their feature will show up to end users *and* to
say how widely it will be available.  I'd actually prefer if maintainers
rejected patches that didn't have this information.
Greg Kroah-Hartman Sept. 23, 2021, 3:07 p.m. UTC | #7
On Thu, Sep 23, 2021 at 07:46:43AM -0700, Dave Hansen wrote:
> I encourage everyone submitting new hardware features to include
> information about where their feature will show up to end users *and* to
> say how widely it will be available.  I'd actually prefer if maintainers
> rejected patches that didn't have this information.

Makes sense.  So, what are the answers to these questions for this new
CPU feature?

thanks,

greg k-h
Sohil Mehta Sept. 23, 2021, 11:09 p.m. UTC | #8
On 9/23/2021 5:19 AM, Greg KH wrote:
> On Tue, Sep 14, 2021 at 07:03:36PM +0000, Mehta, Sohil wrote:
>
> Here is the updated table:
> +---------------------+-------------------------+
> | IPC type            |   Relative Latency      |
> |                     |(normalized to User IPI) |
> +---------------------+-------------------------+
> | User IPI            |                     1.0 |
> | User IPI (blocked)  |                     8.9 |
> | Signal              |                    14.8 |
> | Eventfd             |                     9.7 |
> | Pipe                |                    16.3 |
> | Domain              |                    17.3 |
> +---------------------+-------------------------+
> Relative is just that, "relative".  If the real values are extremely
> tiny, then relative is just "this goes a tiny tiny bit faster than what
> you have today in eventfd", right?
>
> So how about "absolute"?  What are we talking here?

Thanks Greg for reviewing the patches.

The reason I have not included absolute numbers is that, on a 
pre-production platform, they could be misleading. The data here is more 
of an approximation, with the final performance expected to trend in this 
direction.

I have used the term "relative" only to signify that this is comparing 
User IPI with others.

Let's say eventfd took 9.7 usec on a system; then User IPI (running) 
would take 1 usec. So it would still be a ~9x improvement.

But, I agree with your point. This is only a micro-benchmark performance 
comparison. The overall gain in a real workload would depend on how it 
uses IPC.

+---------------------+------------------------------+
| IPC type            |       Example Latency        |
|                     |        (microseconds)        |
+---------------------+------------------------------+
| User IPI (running)  |                     1.0 usec |
| User IPI (blocked)  |                     8.9 usec |
| Signal              |                    14.8 usec |
| Eventfd             |                     9.7 usec |
| Pipe                |                    16.3 usec |
| Domain              |                    17.3 usec |
+---------------------+------------------------------+


> And this is really only for the "one userspace task waking up another
> userspace task" policies.  What real workload can actually use this?

A User IPI sender could be registered to send IPIs to multiple targets. 
But, there is no broadcast mechanism, so it can only target one receiver 
every time it executes the SENDUIPI instruction.

Thanks,

Sohil

> thanks,
>
> greg k-h
Sohil Mehta Sept. 23, 2021, 11:24 p.m. UTC | #9
On 9/23/2021 7:09 AM, Greg KH wrote:
> Also, you forgot to list Binder in the above IPC type.
>
Thanks for pointing that out. In the LPC discussion today there was also 
a suggestion to compare this with Futex wake.

I'll include a comparison with Binder and Futex next time.

I used this IPC benchmark, but it doesn't include Binder or Futex.

https://github.com/goldsborough/ipc-bench

Would you know if there is anything out there that is more comprehensive 
for benchmarking IPC?

Thanks,

Sohil
Sohil Mehta Sept. 24, 2021, 12:17 a.m. UTC | #10
On 9/23/2021 5:19 AM, Greg KH wrote:

> What real workload can actually use this?
>
I missed replying to this.

User-mode runtimes are one of the usages that we think would benefit 
from User IPIs.

Also, as Jens mentioned in another thread, this could help kernel-to-user 
notifications in io_uring (using User Interrupts instead of eventfd for 
signaling).

Libevent is another abstraction that we are evaluating.


Thanks,

Sohil
Andy Lutomirski Sept. 29, 2021, 4:31 a.m. UTC | #11
On Mon, Sep 13, 2021, at 1:01 PM, Sohil Mehta wrote:
> User Interrupts Introduction
> ============================
>
> User Interrupts (Uintr) is a hardware technology that enables delivering
> interrupts directly to user space.
>
> Today, virtually all communication across privilege boundaries happens by going
> through the kernel. These include signals, pipes, remote procedure calls and
> hardware interrupt based notifications. User interrupts provide the foundation
> for more efficient (low latency and low CPU utilization) versions of these
> common operations by avoiding transitions through the kernel.
>

...

I spent some time reviewing the docs (ISE) and contemplating how this all fits together, and I have a high level question:

Can someone give an example of a realistic workload that would benefit from SENDUIPI and precisely how it would use SENDUIPI?  Or an example of a realistic workload that would benefit from hypothetical device-initiated user interrupts and how it would use them?  I'm having trouble imagining something that wouldn't work as well or better by simply polling, at least on DMA-coherent architectures like x86.

(I can imagine some benefit to a hypothetical improved SENDUIPI with identical user semantics but that supported a proper interaction with the scheduler and blocking syscalls.  But that's not what's documented in the ISE...)

--Andy
Stefan Hajnoczi Sept. 30, 2021, 4:26 p.m. UTC | #12
On Mon, Sep 13, 2021 at 01:01:19PM -0700, Sohil Mehta wrote:
> User Interrupts Introduction
> ============================
> 
> User Interrupts (Uintr) is a hardware technology that enables delivering
> interrupts directly to user space.
> 
> Today, virtually all communication across privilege boundaries happens by going
> through the kernel. These include signals, pipes, remote procedure calls and
> hardware interrupt based notifications. User interrupts provide the foundation
> for more efficient (low latency and low CPU utilization) versions of these
> common operations by avoiding transitions through the kernel.
> 
> In the User Interrupts hardware architecture, a receiver is always expected to
> be a user space task. However, a user interrupt can be sent by another user
> space task, kernel or an external source (like a device).
> 
> In addition to the general infrastructure to receive user interrupts, this
> series introduces a single source: interrupts from another user task.  These
> are referred to as User IPIs.
> 
> The first implementation of User IPIs will be in the Intel processor code-named
> Sapphire Rapids. Refer to Chapter 11 of the Intel Architecture instruction set
> extensions for details of the hardware architecture [1].
> 
> Series-reviewed-by: Tony Luck <tony.luck@intel.com>
> 
> Main goals of this RFC
> ======================
> - Introduce this upcoming technology to the community.
> This cover letter includes a hardware architecture summary along with the
> software architecture and kernel design choices. This post is a bit long as a
> result. Hopefully, it helps answer more questions than it creates :) I am also
> planning to talk about User Interrupts next week at the LPC Kernel summit.
> 
> - Discuss potential use cases.
> We are starting to look at actual usages and libraries (like libevent[2] and
> liburing[3]) that can take advantage of this technology. Unfortunately, we
> don't have much to share on this right now. We need some help from the
> community to identify usages that can benefit from this. We would like to make
> sure the proposed APIs work for the eventual consumers.
> 
> - Get early feedback on the software architecture.
> We are hoping to get some feedback on the direction of overall software
> architecture - starting with User IPI, extending it for kernel-to-user
> interrupt notifications and external interrupts in the future. 
> 
> - Discuss some of the main architecture opens.
> There is a lot of work that still needs to happen to enable this technology. We
> are looking for some input on future patches that would be of interest. Here
> are some of the big opens that we are looking to resolve.
> * Should Uintr interrupt all blocking system calls like sleep(), read(),
>   poll(), etc? If so, should we implement an SA_RESTART type of mechanism
>   similar to signals? - Refer Blocking for interrupts section below.
> 
> * Should the User Interrupt Target table (UITT) be shared between threads of a
>   multi-threaded application or maybe even across processes? - Refer Sharing
>   the UITT section below.
> 
> Why care about this? - Micro benchmark performance
> ==================================================
> There is a ~9x or higher performance improvement using User IPI over other IPC
> mechanisms for event signaling.
> 
> Below is the average normalized latency for 1M ping-pong IPC notifications
> with message size=1.
> 
> +------------+-------------------------+
> | IPC type   |   Relative Latency      |
> |            |(normalized to User IPI) |
> +------------+-------------------------+
> | User IPI   |                     1.0 |
> | Signal     |                    14.8 |
> | Eventfd    |                     9.7 |

Is this the bi-directional eventfd benchmark?
https://github.com/intel/uintr-ipc-bench/blob/linux-rfc-v1/source/eventfd/eventfd-bi.c

Two things stand out:

1. The server and client threads are racing on the same eventfd.
   Eventfds aren't bi-directional! The eventfd_wait() function has code
   to write the value back, which is a waste of CPU cycles and hinders
   progress. I've never seen eventfd used this way in real applications.
   Can you use two separate eventfds?

2. The fd is in blocking mode and the task may be descheduled, so we're
   measuring eventfd read/write latency plus scheduler/context-switch
   latency. A fairer comparison against user interrupts would be to busy
   wait on a non-blocking fd so the scheduler/context-switch latency is
   mostly avoided. After all, the uintrfd-bi.c benchmark does this in
   uintrfd_wait():

     // Keep spinning until the interrupt is received
     while (!uintr_received[token]);
Stefan Hajnoczi Sept. 30, 2021, 4:30 p.m. UTC | #13
On Tue, Sep 28, 2021 at 09:31:34PM -0700, Andy Lutomirski wrote:
> On Mon, Sep 13, 2021, at 1:01 PM, Sohil Mehta wrote:
> > User Interrupts Introduction
> > ============================
> >
> > User Interrupts (Uintr) is a hardware technology that enables delivering
> > interrupts directly to user space.
> >
> > Today, virtually all communication across privilege boundaries happens by going
> > through the kernel. These include signals, pipes, remote procedure calls and
> > hardware interrupt based notifications. User interrupts provide the foundation
> > for more efficient (low latency and low CPU utilization) versions of these
> > common operations by avoiding transitions through the kernel.
> >
> 
> ...
> 
> I spent some time reviewing the docs (ISE) and contemplating how this all fits together, and I have a high level question:
> 
> Can someone give an example of a realistic workload that would benefit from SENDUIPI and precisely how it would use SENDUIPI?  Or an example of a realistic workload that would benefit from hypothetical device-initiated user interrupts and how it would use them?  I'm having trouble imagining something that wouldn't work as well or better by simply polling, at least on DMA-coherent architectures like x86.

I was wondering the same thing. One thing came to mind:

An application that wants to be *interrupted* from what it's doing
rather than waiting until the next polling point. For example,
applications that are CPU-intensive and have green threads. I can't name
a real application like this though :P.

Stefan
Sohil Mehta Sept. 30, 2021, 5:24 p.m. UTC | #14
On 9/30/2021 9:30 AM, Stefan Hajnoczi wrote:
> On Tue, Sep 28, 2021 at 09:31:34PM -0700, Andy Lutomirski wrote:
>>
>> I spent some time reviewing the docs (ISE) and contemplating how this all fits together, and I have a high level question:
>>
>> Can someone give an example of a realistic workload that would benefit from SENDUIPI and precisely how it would use SENDUIPI?  Or an example of a realistic workload that would benefit from hypothetical device-initiated user interrupts and how it would use them?  I'm having trouble imagining something that wouldn't work as well or better by simply polling, at least on DMA-coherent architectures like x86.
> I was wondering the same thing. One thing came to mind:
>
> An application that wants to be *interrupted* from what it's doing
> rather than waiting until the next polling point. For example,
> applications that are CPU-intensive and have green threads. I can't name
> a real application like this though :P.

Thank you Stefan and Andy for giving this some thought.

We are consolidating the information internally on where and how exactly 
we expect to see benefits with real workloads for the various sources of 
User Interrupts. It will take a few days to get back on this one.


> (I can imagine some benefit to a hypothetical improved SENDUIPI with identical user semantics but that supported a proper interaction with the scheduler and blocking syscalls.  But that's not what's documented in the ISE...)

Andy, can you please provide some more context/details on this? Is this 
regarding the blocking syscalls discussion (in patch 11) or something else?


Thanks,
Sohil
Andy Lutomirski Sept. 30, 2021, 5:26 p.m. UTC | #15
On Thu, Sep 30, 2021, at 10:24 AM, Sohil Mehta wrote:
> On 9/30/2021 9:30 AM, Stefan Hajnoczi wrote:
>> On Tue, Sep 28, 2021 at 09:31:34PM -0700, Andy Lutomirski wrote:
>>>
>>> I spent some time reviewing the docs (ISE) and contemplating how this all fits together, and I have a high level question:
>>>
>>> Can someone give an example of a realistic workload that would benefit from SENDUIPI and precisely how it would use SENDUIPI?  Or an example of a realistic workload that would benefit from hypothetical device-initiated user interrupts and how it would use them?  I'm having trouble imagining something that wouldn't work as well or better by simply polling, at least on DMA-coherent architectures like x86.
>> I was wondering the same thing. One thing came to mind:
>>
>> An application that wants to be *interrupted* from what it's doing
>> rather than waiting until the next polling point. For example,
>> applications that are CPU-intensive and have green threads. I can't name
>> a real application like this though :P.
>
> Thank you Stefan and Andy for giving this some thought.
>
> We are consolidating the information internally on where and how exactly 
> we expect to see benefits with real workloads for the various sources of 
> User Interrupts. It will take a few days to get back on this one.

Thanks!

>
>
> >> (I can imagine some benefit to a hypothetical improved SENDUIPI with identical user semantics but that supported a proper interaction with the scheduler and blocking syscalls.  But that's not what's documented in the ISE...)
>
> Andy, can you please provide some more context/details on this? Is this 
> regarding the blocking syscalls discussion (in patch 11) or something else?
>

Yes, and I'll follow up there.  I hereby upgrade my opinion of SENDUIPI wakeups to "probably doable but maybe not in a nice way."
Sohil Mehta Oct. 1, 2021, 12:40 a.m. UTC | #16
On 9/30/2021 9:26 AM, Stefan Hajnoczi wrote:
> On Mon, Sep 13, 2021 at 01:01:19PM -0700, Sohil Mehta wrote:
>> +------------+-------------------------+
>> | IPC type   |   Relative Latency      |
>> |            |(normalized to User IPI) |
>> +------------+-------------------------+
>> | User IPI   |                     1.0 |
>> | Signal     |                    14.8 |
>> | Eventfd    |                     9.7 |
> Is this the bi-directional eventfd benchmark?
> https://github.com/intel/uintr-ipc-bench/blob/linux-rfc-v1/source/eventfd/eventfd-bi.c

Yes. I have left it unmodified from the original source. But, I should 
have looked at it more closely.

> Two things stand out:
>
> 1. The server and client threads are racing on the same eventfd.
>     Eventfds aren't bi-directional! The eventfd_wait() function has code
>     to write the value back, which is a waste of CPU cycles and hinders
>     progress. I've never seen eventfd used this way in real applications.
>     Can you use two separate eventfds?

Sure. I can do that.


> 2. The fd is in blocking mode and the task may be descheduled, so we're
>     measuring eventfd read/write latency plus scheduler/context-switch
>     latency. A fairer comparison against user interrupts would be to busy
>     wait on a non-blocking fd so the scheduler/context-switch latency is
>     mostly avoided. After all, the uintrfd-bi.c benchmark does this in
>     uintrfd_wait():
>
>       // Keep spinning until the interrupt is received
>       while (!uintr_received[token]);

That makes sense. I'll give this a try and send out the updated results.

Thanks,
Sohil
Pavel Machek Oct. 1, 2021, 8:19 a.m. UTC | #17
Hi!

> Instructions
> ------------
> senduipi <index> - send a user IPI to a target task based on the UITT index.
> 
> clui - Mask user interrupts by clearing UIF (User Interrupt Flag).
> 
> stui - Unmask user interrupts by setting UIF.
> 
> testui - Test current value of UIF.
> 
> uiret - return from a user interrupt handler.

Are other CPU vendors allowed to implement compatible instructions?

If not, we should probably have VDSO entries so kernel can abstract
differences between CPUs.

> Untrusted processes
> -------------------
> The current implementation expects only trusted and cooperating processes to
> communicate using user interrupts. Coordination is expected between processes
> for a connection teardown. In situations where coordination doesn't happen
> (say, due to abrupt process exit), the kernel would end up keeping shared
> resources (like UPID) allocated to avoid faults.

Keeping resources allocated after process exit is a no-no.

Best regards,
								Pavel
Stefan Hajnoczi Oct. 1, 2021, 4:35 p.m. UTC | #18
On Thu, Sep 30, 2021 at 10:24:24AM -0700, Sohil Mehta wrote:
> 
> On 9/30/2021 9:30 AM, Stefan Hajnoczi wrote:
> > On Tue, Sep 28, 2021 at 09:31:34PM -0700, Andy Lutomirski wrote:
> > > 
> > > I spent some time reviewing the docs (ISE) and contemplating how this all fits together, and I have a high level question:
> > > 
> > > Can someone give an example of a realistic workload that would benefit from SENDUIPI and precisely how it would use SENDUIPI?  Or an example of a realistic workload that would benefit from hypothetical device-initiated user interrupts and how it would use them?  I'm having trouble imagining something that wouldn't work as well or better by simply polling, at least on DMA-coherent architectures like x86.
> > I was wondering the same thing. One thing came to mind:
> > 
> > An application that wants to be *interrupted* from what it's doing
> > rather than waiting until the next polling point. For example,
> > applications that are CPU-intensive and have green threads. I can't name
> > a real application like this though :P.
> 
> Thank you Stefan and Andy for giving this some thought.
> 
> We are consolidating the information internally on where and how exactly we
> expect to see benefits with real workloads for the various sources of User
> Interrupts. It will take a few days to get back on this one.

One possible use case came to mind in QEMU's TCG just-in-time compiler:

QEMU's TCG threads execute translated code. There are events that
require interrupting these threads. Today a check is performed at the
start of every translated block. Most of the time the check is false and
it's a waste of CPU.

User interrupts can eliminate the need for checks by interrupting TCG
threads when events occur.

I don't know whether this will improve performance or how feasible it is
to implement, but I've added people who might have ideas. (For a summary
of user interrupts, see
https://lwn.net/SubscriberLink/871113/60652640e11fc5df/.)

Stefan
Richard Henderson Oct. 1, 2021, 4:41 p.m. UTC | #19
On 10/1/21 12:35 PM, Stefan Hajnoczi wrote:
> QEMU's TCG threads execute translated code. There are events that
> require interrupting these threads. Today a check is performed at the
> start of every translated block. Most of the time the check is false and
> it's a waste of CPU.
> 
> User interrupts can eliminate the need for checks by interrupting TCG
> threads when events occur.

We used to use interrupts, and stopped because we need to wait until the guest is in a 
stable state.  The guest is always in a stable state at the beginning of each TB.

See 378df4b2375.


r~
Prakash Sangappa Nov. 16, 2021, 3:49 a.m. UTC | #20
> On Sep 13, 2021, at 1:01 PM, Sohil Mehta <sohil.mehta@intel.com> wrote:
> 
> User Interrupts Introduction
> ============================
> 
> User Interrupts (Uintr) is a hardware technology that enables delivering
> interrupts directly to user space.
> 
> Today, virtually all communication across privilege boundaries happens by going
> through the kernel. These include signals, pipes, remote procedure calls and
> hardware interrupt based notifications. User interrupts provide the foundation
> for more efficient (low latency and low CPU utilization) versions of these
> common operations by avoiding transitions through the kernel.
> 
> In the User Interrupts hardware architecture, a receiver is always expected to
> be a user space task. However, a user interrupt can be sent by another user
> space task, kernel or an external source (like a device).
> 
> In addition to the general infrastructure to receive user interrupts, this
> series introduces a single source: interrupts from another user task.  These
> are referred to as User IPIs.
> 
> The first implementation of User IPIs will be in the Intel processor code-named
> Sapphire Rapids. Refer to Chapter 11 of the Intel Architecture instruction set
> extensions for details of the hardware architecture [1].
> 
> Series-reviewed-by: Tony Luck <tony.luck@intel.com>
> 
> Main goals of this RFC
> ======================
> - Introduce this upcoming technology to the community.
> This cover letter includes a hardware architecture summary along with the
> software architecture and kernel design choices. This post is a bit long as a
> result. Hopefully, it helps answer more questions than it creates :) I am also
> planning to talk about User Interrupts next week at the LPC Kernel summit.
> 
> - Discuss potential use cases.
> We are starting to look at actual usages and libraries (like libevent[2] and
> liburing[3]) that can take advantage of this technology. Unfortunately, we
> don't have much to share on this right now. We need some help from the
> community to identify usages that can benefit from this. We would like to make
> sure the proposed APIs work for the eventual consumers.
> 

Here are some use cases received from our Databases (Oracle) group.
They envision considerable benefits from the use of user interrupts in 
the following areas.

1) User mode scheduler 
Oracle DB implements user threads (green threads) with cooperative 
task switching. User interrupts, sent with SENDUIPI, would enable 
preempting tasks that are long running.

2) Get attention of threads/processes dedicated to cores. 

3) Latency sensitive execution 
To perform a task in the interrupt context of running threads.

4) User mode I/O completion 
Some threads may wait on memory using 'umwait' for an RDMA operation to 
complete. Completion of the RDMA event could then be dispatched to another 
thread using user interrupts for processing.


On those lines, the DB will implement services like the following using 
user interrupts.

1) General RPC, for example:
	a.  Service requests to a small number of worker processes/threads,
            e.g. to request that data in memory local to a NUMA node be
            processed by service threads running on that node.
	b.  Abort the current operation to yield or release resources to
            higher priority threads.
2) User mode scheduling - force run the holder of resources.
3) User mode scheduling - the log writer is a critical piece of execution 
for database updates, so we could post to a thread asking it to run the 
log writer thread (a user thread) immediately. This is useful in user mode 
scheduling with use of PMEM, where the log writer no longer has to perform 
I/O. The log is written sequentially in PMEM. 

4) Debugging:
User interrupt SENDUIPI would be used to force a running DB process to 
dump out debug information.

> - Get early feedback on the software architecture.
> We are hoping to get some feedback on the direction of overall software
> architecture - starting with User IPI, extending it for kernel-to-user
> interrupt notifications and external interrupts in the future. 
> 
> - Discuss some of the main architecture opens.
> There is a lot of work that still needs to happen to enable this technology. We
> are looking for some input on future patches that would be of interest. Here
> are some of the big opens that we are looking to resolve.
> * Should Uintr interrupt all blocking system calls like sleep(), read(),
>  poll(), etc? If so, should we implement an SA_RESTART type of mechanism
>  similar to signals? - Refer Blocking for interrupts section below.
> 
> * Should the User Interrupt Target table (UITT) be shared between threads of a
>  multi-threaded application or maybe even across processes? - Refer Sharing
>  the UITT section below.
> 
> Why care about this? - Micro benchmark performance
> ==================================================
> There is a ~9x or higher performance improvement using User IPI over other IPC
> mechanisms for event signaling.
> 
> Below is the average normalized latency for 1M ping-pong IPC notifications
> with message size=1.
> 
> +------------+-------------------------+
> | IPC type   |   Relative Latency      |
> |            |(normalized to User IPI) |
> +------------+-------------------------+
> | User IPI   |                     1.0 |
> | Signal     |                    14.8 |
> | Eventfd    |                     9.7 |
> | Pipe       |                    16.3 |
> | Domain     |                    17.3 |
> +------------+-------------------------+
> 
> Results have been estimated based on tests on internal hardware with Linux
> v5.14 + User IPI patches.
> 
> Original benchmark: https://github.com/goldsborough/ipc-bench
> Updated benchmark: https://github.com/intel/uintr-ipc-bench/tree/linux-rfc-v1
> 
> *Performance varies by use, configuration and other factors.
> 
> How it works underneath? - Hardware Summary
> ===========================================
> User Interrupts is a posted interrupt delivery mechanism. The interrupts are
> first posted to a memory location and then delivered to the receiver when they
> are running with CPL=3.
> 
> Kernel managed architectural data structures
> --------------------------------------------
> UPID: User Posted Interrupt Descriptor - Holds receiver interrupt vector
> information and notification state (like an ongoing notification, suppressed
> notifications).
> 
> UITT: User Interrupt Target Table - Stores UPID pointer and vector information
> for interrupt routing on the sender side. Referred by the senduipi instruction.
> 
> The interrupt state of each task is referenced via MSRs which are saved and
> restored by the kernel during context switch.
> 
> Instructions
> ------------
> senduipi <index> - Send a user IPI to a target task based on the UITT index.
> 
> clui - Mask user interrupts by clearing UIF (User Interrupt Flag).
> 
> stui - Unmask user interrupts by setting UIF.
> 
> testui - Test the current value of UIF.
> 
> uiret - Return from a user interrupt handler.
> 
> User IPI
> --------
> When a User IPI sender executes 'senduipi <index>', the hardware reads the
> UITT entry pointed to by the index and posts the interrupt vector (0-63)
> into the receiver's UPID.
> 
> If the receiver is running (CPL=3), the sender cpu sends a physical IPI to
> the receiver's cpu. On the receiver side, this IPI is detected as a User
> Interrupt. The User Interrupt handler for the receiver is invoked and the
> vector number (0-63) is pushed onto the stack.
> 
> Upon execution of 'uiret' in the interrupt handler, control is transferred
> back to the instruction that was interrupted.
> 
> Refer to Chapter 11 of the Intel Architecture instruction set extensions [1]
> for more details.
> 
> Application interface - Software Architecture
> =============================================
> User Interrupts (Uintr) is an opt-in feature (unlike signals). Applications
> wanting to use Uintr are expected to register themselves with the kernel using
> the Uintr related system calls. A Uintr receiver is always a userspace task. A
> Uintr sender can be another userspace task, kernel or a device.
> 
> 1) A receiver can register/unregister an interrupt handler using the Uintr
> receiver related syscalls. 
> 		uintr_register_handler(handler, flags)
> 		uintr_unregister_handler(flags)
> 
> 2) A syscall also allows a receiver to register a vector and create a user
> interrupt file descriptor - uintr_fd. 
> 		uintr_fd = uintr_create_fd(vector, flags)
> 
> Uintr can be useful in some of the usages where eventfd or signals are used for
> frequent userspace event notifications. The semantics of uintr_fd are somewhat
> similar to an eventfd() or the write end of a pipe.
> 
> 3) Any sender with access to uintr_fd can use it to deliver events (in this
> case - interrupts) to a receiver. A sender task can manage its connection with
> the receiver using the sender related syscalls based on uintr_fd.
> 		uipi_index = uintr_register_sender(uintr_fd, flags)
> 
> Using an FD abstraction provides a secure mechanism to connect with a receiver.
> The FD sharing and isolation mechanisms put in place by the kernel would extend
> to Uintr as well. 
> 
> 4a) After the initial setup, a sender task can use the SENDUIPI instruction
> along with the uipi_index to generate user IPIs without any kernel
> intervention.
> 		SENDUIPI <uipi_index>
> 
> If the receiver is running (CPL=3), then the user interrupt is delivered
> directly without a kernel transition. If the receiver isn't running the
> interrupt is delivered when the receiver gets context switched back. If the
> receiver is blocked in the kernel, the user interrupt is delivered to the
> kernel which then unblocks the intended receiver to deliver the interrupt.
> 
> 4b) If the sender is the kernel or a device, the uintr_fd can be passed to
> the relevant kernel entity to allow it to set up a connection and then
> generate a user interrupt for event delivery. <The exact details of this API
> are still being worked out.>
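[Editor's note: putting steps 1-4 together, a minimal receiver/sender flow would look like the pseudocode below; syscall names are from the man-pages draft referenced next.]

```
receiver:
    uintr_register_handler(handler, 0)
    uintr_fd = uintr_create_fd(vector, 0)
    pass uintr_fd to the sender (fork, SCM_RIGHTS, pidfd_getfd, ...)

sender:
    uipi_index = uintr_register_sender(uintr_fd, 0)
    SENDUIPI <uipi_index>          /* fast path, no kernel transition */

receiver handler (runs at CPL=3):
    handler(vector) { ... }        /* returns via UIRET */
```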
> 
> For details of the user interface and associated system calls, refer to the
> Uintr man-pages draft:
> https://github.com/intel/uintr-linux-kernel/tree/rfc-v1/tools/uintr/manpages.
> We have also included the same content as patch 1 of this series to make it
> easier to review.
> 
> Refer to the Uintr compiler programming guide [4] for details on Uintr
> integration with GCC and Binutils.
> 
> Kernel design choices
> =====================
> Here are some of the reasons and trade-offs for the current design of the APIs.
> 
> System call interface
> ---------------------
> Why a system call interface? The two options we considered were a char device
> under /dev or system calls (the current approach). A syscall approach avoids
> exposing a core cpu feature through a driver model. Also, we want to have a
> user interrupt FD per vector and share a single common interrupt handler
> among all vectors. This seems easier for the kernel and userspace to
> accomplish using a syscall-based approach.
> 
> Data sharing using user interrupts: Uintr doesn't include a mechanism to
> share/transmit data. The expectation is that applications use existing
> data-sharing mechanisms to share data and use Uintr only for signaling.
> 
> An FD for each vector: A uintr_fd is assigned to each vector to allow
> fine-grained priority and event management by the receiver. The alternative we
> considered was to allocate an FD to the interrupt handler and have that
> shared with the sender. However, that approach relies on the sender selecting
> the vector and moves vector priority management to the sender. Also, if
> multiple senders want to send unique user interrupts, they would need to
> coordinate the vector selection amongst themselves.
> 
> Extending the APIs: Currently, the system calls are only extendable using the
> flags argument. We can add a variable size struct to some of the syscalls if
> needed.
> 
> Extending existing mechanisms
> -----------------------------
> Uintr can be beneficial in some of the usages where eventfd() or signals are
> used. Since Uintr is hardware-dependent, thread-specific and bypasses the
> kernel in the fast path, it makes extending existing mechanisms harder.
> 
> Main issues with extending signals:
> Signal handlers are defined significantly differently from a user interrupt
> handler. An application needs to save/restore registers in a user interrupt
> handler and call uiret to return from it. Also, signals can be process-directed
> (or thread-directed), but user interrupts are always thread-directed.
> 
> Comparison of signals with User Interrupts:
> +=====================+===========================+===========================+
> |                     | Signals                   | User Interrupts           |
> +=====================+===========================+===========================+
> | Stacks              | Has alt stacks            | Uses application stack    |
> |                     |                           | (alternate stack option   |
> |                     |                           | not yet enabled)          |
> +---------------------+---------------------------+---------------------------+
> | Registers state     | Kernel manages incl.      | App responsible (Use GCC  |
> |                     | FPU/XSTATE area           | 'interrupt' attribute for |
> |                     |                           | general purpose registers)|
> +---------------------+---------------------------+---------------------------+
> | Blocking/Masking    | sigprocmask(2)/sa_mask    | CLUI instruction (No per  |
> |                     |                           | vector masking)           |
> +---------------------+---------------------------+---------------------------+
> | Direction           | Uni-directional           | Uni-directional           |
> +---------------------+---------------------------+---------------------------+
> | Post event          | kill(), signal(),         | SENDUIPI <index> - index  |
> |                     | sigqueue(), etc.          | derived from uintr_fd     |
> +---------------------+---------------------------+---------------------------+
> | Target              | Process-directed or       | Thread-directed           |
> |                     | thread-directed           |                           |
> +---------------------+---------------------------+---------------------------+
> | Fork/inheritance    | Empty signal set          | Nothing is inherited      |
> +---------------------+---------------------------+---------------------------+
> | Execv               | Pending signals preserved | Nothing is inherited      |
> +---------------------+---------------------------+---------------------------+
> | Order of delivery   | Undetermined              | High to low vector numbers|
> | for multiple signals|                           |                           |
> +---------------------+---------------------------+---------------------------+
> | Handler re-entry    | All signals except the    | No interrupts can cause   |
> |                     | one being handled         | handler re-entry.         |
> +---------------------+---------------------------+---------------------------+
> | Delivery feedback   | 0 or -1 based on whether  | No feedback on whether the|
> |                     | the signal was sent       | interrupt was sent or     |
> |                     |                           | received.                 |
> +---------------------+---------------------------+---------------------------+
> 
> Main issues with extending eventfd():
> eventfd() has a counter value that is core to the API. User interrupts can't
> have an associated counter since the signaling happens at the user level and
> the hardware doesn't have a memory counter mechanism. Also, eventfd can be used
> for bi-directional signaling, whereas uintr_fd is uni-directional.
> 
> Comparison of eventfd with uintr_fd:
> +====================+======================+==============================+
> |                    | Eventfd              | uintr_fd (User Interrupt FD) |
> +====================+======================+==============================+
> | Object             | Counter - uint64     | Receiver vector information  |
> +--------------------+----------------------+------------------------------+
> | Post event         | write() to eventfd   | SENDUIPI <index> - index     |
> |                    |                      | derived from uintr_fd        |
> +--------------------+----------------------+------------------------------+
> | Receive event      | read() on eventfd    | Implicit - Handler is        |
> |                    |                      | invoked with associated      |
> |                    |                      | vector.                      |
> +--------------------+----------------------+------------------------------+
> | Direction          | Bi-directional       | Uni-directional              |
> +--------------------+----------------------+------------------------------+
> | Data transmitted   | Counter - uint64     | None                         |
> +--------------------+----------------------+------------------------------+
> | Waiting for events | Poll() family of     | No per vector wait.          |
> |                    | syscalls             | uintr_wait() allows waiting  |
> |                    |                      | for all user interrupts      |
> +--------------------+----------------------+------------------------------+
> 
> Security Model
> ==============
> User Interrupts is designed as an opt-in feature (unlike signals). The security
> model for user interrupts is intended to be similar to eventfd(). The general
> idea is that any sender with access to uintr_fd would be able to generate the
> associated interrupt vector for the receiver task that created the fd.
> 
> Untrusted processes
> -------------------
> The current implementation expects only trusted and cooperating processes to
> communicate using user interrupts. Coordination is expected between processes
> for a connection teardown. In situations where coordination doesn't happen
> (say, due to abrupt process exit), the kernel would end up keeping shared
> resources (like UPID) allocated to avoid faults.
> 
> Currently, a sender can easily cause a denial of service for the receiver by
> generating a storm of user interrupts. A user interrupt handler is invoked with
> interrupts disabled, but upon execution of uiret, interrupts get enabled again
> by the hardware. This can lead to the handler being invoked again before normal
> execution can resume. There isn't a hardware mechanism to mask specific
> interrupt vectors. 
> 
> To enable untrusted processes to communicate, we need to add a per-vector
> masking option through another syscall (or maybe IOCTL). However, this can add
> some complexity to the kernel code. A vector can only be masked by modifying
> the UITT entries at the source. We need to be careful about races while
> removing and restoring the UPID from the UITT.
> 
> Resource limits
> ---------------
> The maximum number of receiver-sender connections would be limited by the
> maximum number of open file descriptors and the size of the UITT.
> 
> The UITT size is currently fixed at 4KB, chosen arbitrarily. We plan to
> make it dynamic and configurable in size. RLIMIT_MEMLOCK or ENOMEM should be
> triggered when the size limits have been hit.
> 
> Main Opens
> ==========
> 
> Blocking for interrupts
> -----------------------
> User interrupts are delivered to applications immediately if they are running
> in userspace. If a receiver task has blocked in the kernel using the placeholder
> uintr_wait() syscall, the task would be woken up to deliver the user interrupt.
> However, if the task is blocked due to any other blocking calls like read(),
> sleep(), etc; the interrupt will only get delivered when the application gets
> scheduled again. We need to consider if applications need to receive User
> Interrupts as soon as they are posted (similar to signals) when they are
> blocked due to some other reason. Adding this capability would likely make the
> kernel implementation more complex.
> 
> Interrupting system calls using User Interrupts would also mean we need to
> consider an SA_RESTART type of mechanism. We also need to evaluate if some of
> the signal handler related semantics in the kernel can be reused for User
> Interrupts.

The DB use case requires a thread blocked in the kernel in a system call
to be interrupted immediately by the user interrupt. It needs the same
behavior as signals with respect to interrupting system calls.

The aim is to use user interrupts as one mechanism, both for fast IPC and
to signal a target thread blocked in the kernel in a system call, i.e., to
replace the use of signals with user interrupts.

> 
> Sharing the User Interrupt Target Table (UITT)
> ----------------------------------------------
> The current implementation assigns a unique UITT to each task. This assumes
> that User interrupts are used for point-to-point communication between 2 tasks.
> Also, this keeps the kernel implementation relatively simple.
> 
> However, there are benefits to sharing the UITT between threads of a
> multi-threaded application. One, they would see a consistent view of the UITT,
> i.e. SENDUIPI <index> would mean the same on all threads of the application.
> Also, each thread doesn't have to register itself using the common uintr_fd.
> This would simplify the userspace setup and make efficient use of kernel
> memory. The potential downside is that the kernel implementation to allocate,
> modify, expand and free the UITT would be more complex.
> 
> A similar argument can be made for a set of processes that do a lot of IPC
> amongst them. They would prefer to have a shared UITT that lets them target any
> process from any process. With the current file descriptor based approach, the
> connection setup can be time consuming and somewhat cumbersome. We need to
> evaluate if this can be made simpler as well.
> 

The following enhancements with respect to sharing the UITT table would be
beneficial.

Oracle DB creates a large number of multithreaded processes. A thread in a
process may need to communicate (using user interrupts) with another
thread in any other process. The current proposal of the receiver sending
an FD per vector to each of the senders would be an overhead. Also, every
sender process/thread allocating a sender table to store the same receiver
UPIDs would be duplication, resulting in wasted memory.

In addition to the current FD-based registration approach, having a way
for a group of DB processes to share a sender (UITT) table, and allowing
each of the receiver threads to directly register itself in the shared
UITT table, would be efficient. For this, the receiver need not create an
fd. The receiver's UPID index in the UITT, obtained from the registration,
would be shared with all senders via shared memory (IPC).

DB maintains a process table of all the DB processes/threads in the shared
memory. The receiver can register itself in the shared UITT table and store
its UPID index in the process table. The sender will look up the target
process in the process table to get the UITT index and send the user
interrupt.


> Kernel page table isolation (KPTI)
> ----------------------------------
> SENDUIPI is a special ring-3 instruction that makes a supervisor mode memory
> access to the UPID and UITT memory. The current patches need KPTI to be
> disabled for User IPIs to work. To make User IPI work with KPTI, we need to
> allocate these structures from a special memory region that has supervisor
> access but is mapped into userspace. The plan is to implement a mechanism
> similar to the LDT.
> 
> Processors that support user interrupts are not affected by Meltdown so the
> auto mode of KPTI will default to off. Users who want to force enable KPTI will
> need to wait for a later version of this patch series to use user interrupts.
> Please let us know if you want the development of these patches to be
> prioritized (or deprioritized).
> 
> FAQs
> ====
> Q: What happens if a process is "surprised" by a user interrupt?
> A: Tasks that haven't registered with the kernel to receive user interrupts
> aren't expected or able to receive them.
> 
> Q: Do user interrupts affect kernel scheduling?
> A: No. If a task is blocked waiting for user interrupts, when the kernel
> receives a notification on behalf of that task we only put it back on the
> runqueue. Delivery of a user interrupt in no way changes the scheduling
> priorities of a task.
> 
> Q: Does the sender get to know if the interrupt was delivered?
> A: No. User interrupts only provide a posted interrupt delivery mechanism. If
> applications need to rely on whether the interrupt was delivered they should
> consider a userspace mechanism for feedback (like a shared memory counter or a
> user interrupt back to the sender).
> 
> Q: Why is there no feedback on interrupt delivery?
> A: Being a posted interrupt delivery mechanism, the interrupt delivery
> happens in 2 steps:
> 1) The interrupt information is stored in a memory location (UPID).
> 2) The physical interrupt is delivered to the interrupt receiver.
> 
> The 2nd step could happen immediately, after an extended period, or it might
> never happen based on the state of the receiver after step 1. (The receiver
> could have disabled interrupts, have been context switched out or it might have
> crashed during that time.) This makes it very hard for the hardware to reliably
> provide feedback upon execution of SENDUIPI.
> 
> Q: Can user interrupts be nested?
> A: Yes. Using the STUI instruction in the interrupt handler would allow new
> user interrupts to be delivered. However, there is no TPR (task priority
> register)-like mechanism to allow only higher-priority interrupts. Any user
> interrupt can be taken when nesting is enabled.
> 
> Q: Can a task receive all pending user interrupts in one go?
> A: No. The hardware allows only one vector to be processed at a time. If a task
> is interested in knowing all the interrupts that are pending then we could add
> a syscall that provides the pending interrupts information.
> 
> Q: Do the processes need to be pinned to a cpu?
> A: No. User interrupts will be routed correctly to whichever cpu the receiver
> is running on. The kernel updates the cpu information in the UPID during
> context switch.
> 
> Q: Why are UPID and UITT allocated by the kernel?
> A: If allocated by user space, applications could misuse the UPID and UITT to
> write to unauthorized memory and generate interrupts on any cpu. The UPID and
> UITT are allocated by the kernel and accessed by the hardware with supervisor
> privilege.
> 
> Patch structure for this series
> ===============================
> - Man-pages and Kernel documentation (patch 1,2)
> - Hardware enumeration (patch 3, 4)
> - User IPI kernel vector reservation (patch 5)
> - Syscall interface for interrupt receiver, sender and vector
>  management (uintr_fd) (patches 6-12)
> - Basic selftests (patch 13)
> 
> Along with the patches in this RFC, there are additional tests and samples that
> are available at:
> https://github.com/intel/uintr-linux-kernel/tree/rfc-v1
> 
> Links
> =====
> [1]: https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
> [2]: https://libevent.org/
> [3]: https://github.com/axboe/liburing
> [4]: https://github.com/intel/uintr-compiler-guide/blob/uintr-gcc-11.1/UINTR-compiler-guide.pdf
> 
> Sohil Mehta (13):
>  x86/uintr/man-page: Include man pages draft for reference
>  Documentation/x86: Add documentation for User Interrupts
>  x86/cpu: Enumerate User Interrupts support
>  x86/fpu/xstate: Enumerate User Interrupts supervisor state
>  x86/irq: Reserve a user IPI notification vector
>  x86/uintr: Introduce uintr receiver syscalls
>  x86/process/64: Add uintr task context switch support
>  x86/process/64: Clean up uintr task fork and exit paths
>  x86/uintr: Introduce vector registration and uintr_fd syscall
>  x86/uintr: Introduce user IPI sender syscalls
>  x86/uintr: Introduce uintr_wait() syscall
>  x86/uintr: Wire up the user interrupt syscalls
>  selftests/x86: Add basic tests for User IPI
> 
> .../admin-guide/kernel-parameters.txt         |   2 +
> Documentation/x86/index.rst                   |   1 +
> Documentation/x86/user-interrupts.rst         | 107 +++
> arch/x86/Kconfig                              |  12 +
> arch/x86/entry/syscalls/syscall_32.tbl        |   6 +
> arch/x86/entry/syscalls/syscall_64.tbl        |   6 +
> arch/x86/include/asm/cpufeatures.h            |   1 +
> arch/x86/include/asm/disabled-features.h      |   8 +-
> arch/x86/include/asm/entry-common.h           |   4 +
> arch/x86/include/asm/fpu/types.h              |  20 +-
> arch/x86/include/asm/fpu/xstate.h             |   3 +-
> arch/x86/include/asm/hardirq.h                |   4 +
> arch/x86/include/asm/idtentry.h               |   5 +
> arch/x86/include/asm/irq_vectors.h            |   6 +-
> arch/x86/include/asm/msr-index.h              |   8 +
> arch/x86/include/asm/processor.h              |   8 +
> arch/x86/include/asm/uintr.h                  |  76 ++
> arch/x86/include/uapi/asm/processor-flags.h   |   2 +
> arch/x86/kernel/Makefile                      |   1 +
> arch/x86/kernel/cpu/common.c                  |  61 ++
> arch/x86/kernel/cpu/cpuid-deps.c              |   1 +
> arch/x86/kernel/fpu/core.c                    |  17 +
> arch/x86/kernel/fpu/xstate.c                  |  20 +-
> arch/x86/kernel/idt.c                         |   4 +
> arch/x86/kernel/irq.c                         |  51 +
> arch/x86/kernel/process.c                     |  10 +
> arch/x86/kernel/process_64.c                  |   4 +
> arch/x86/kernel/uintr_core.c                  | 880 ++++++++++++++++++
> arch/x86/kernel/uintr_fd.c                    | 300 ++++++
> include/linux/syscalls.h                      |   8 +
> include/uapi/asm-generic/unistd.h             |  15 +-
> kernel/sys_ni.c                               |   8 +
> scripts/checksyscalls.sh                      |   6 +
> tools/testing/selftests/x86/Makefile          |  10 +
> tools/testing/selftests/x86/uintr.c           | 147 +++
> tools/uintr/manpages/0_overview.txt           | 265 ++++++
> tools/uintr/manpages/1_register_receiver.txt  | 122 +++
> .../uintr/manpages/2_unregister_receiver.txt  |  62 ++
> tools/uintr/manpages/3_create_fd.txt          | 104 +++
> tools/uintr/manpages/4_register_sender.txt    | 121 +++
> tools/uintr/manpages/5_unregister_sender.txt  |  79 ++
> tools/uintr/manpages/6_wait.txt               |  59 ++
> 42 files changed, 2626 insertions(+), 8 deletions(-)
> create mode 100644 Documentation/x86/user-interrupts.rst
> create mode 100644 arch/x86/include/asm/uintr.h
> create mode 100644 arch/x86/kernel/uintr_core.c
> create mode 100644 arch/x86/kernel/uintr_fd.c
> create mode 100644 tools/testing/selftests/x86/uintr.c
> create mode 100644 tools/uintr/manpages/0_overview.txt
> create mode 100644 tools/uintr/manpages/1_register_receiver.txt
> create mode 100644 tools/uintr/manpages/2_unregister_receiver.txt
> create mode 100644 tools/uintr/manpages/3_create_fd.txt
> create mode 100644 tools/uintr/manpages/4_register_sender.txt
> create mode 100644 tools/uintr/manpages/5_unregister_sender.txt
> create mode 100644 tools/uintr/manpages/6_wait.txt
> 
> 
> base-commit: 6880fa6c56601bb8ed59df6c30fd390cc5f6dd8f
> -- 
> 2.33.0
> 
>
Sohil Mehta Nov. 18, 2021, 9:44 p.m. UTC | #21
On 11/15/2021 7:49 PM, Prakash Sangappa wrote:
> 

> Here are some use cases received from our Databases(Oracle) group.

Thank you Prakash for providing the potential use cases. This would 
really help with the design and validation of the UINTR APIs.

> 
> Aim is to use user interrupts as one mechanism, for fast IPC and to signal
> target thread blocked in the kernel in a system call.
> i.e replace use of signals with user interrupts.
> 

Mimicking this signal behavior would likely add some complexity to the 
implementation. Since there is interest, we'll work on prototyping this 
to evaluate tradeoffs and present them here.

> Following enhancements with respect to sharing UITT table will be beneficial.
> 
> Oracle DB creates large number of multithreaded processes. A thread in a
> process may need to communicate(using user interrupts) with another
> thread in any other process. Current proposal of receiver sending an FD
> per vector to each of the sender will be an overhead. Also every sender
> process/thread allocating a sender table for storing same receiver UPIDs
> will be duplication resulting in wasted memory.
> > In addition to the current FD based registration approach, having a way
> for a group of DB processes to share a sender(UITT) table and  allowing
> each of the receiver threads to directly register itself in the shared UITT
> table,  will be efficient. For this the receiver need not create an fd. The
> receiver’s UPID index in UITT got from the registration will  be shared
> with all senders via shared memory(IPC).
> 

Sharing the UITT between tasks of the same process would be relatively 
easier than sharing the UITT across processes. We would need a scalable 
mechanism to authenticate the sharing of this kernel resource across the 
process boundary.

I am working on a proposal for this. I'll send it out once I have 
something concrete.

> DB maintains a process table of all the DB processes/threads in the shared
> memory. The receiver can register itself in the shared UITT table and store
> its UPID index in the process table. Sender will lookup target process from
> the process table to get the UITT index and send the user interrupt.
> 

Thanks,
Sohil
Sohil Mehta Nov. 18, 2021, 10:19 p.m. UTC | #22
On 10/1/2021 1:19 AM, Pavel Machek wrote:
> Hi!
> 

Thank you for reviewing the patches!

>> Instructions
>> ------------
>> senduipi <index> - send a user IPI to a target task based on the UITT index.
>>
>> clui - Mask user interrupts by clearing UIF (User Interrupt Flag).
>>
>> stui - Unmask user interrupts by setting UIF.
>>
>> testui - Test current value of UIF.
>>
>> uiret - return from a user interrupt handler.
> 
> Are other CPU vendors allowed to implement compatible instructions?
> 
> If not, we should probably have VDSO entries so kernel can abstract
> differences between CPUs.
> 

Yes, we are evaluating VDSO support for this.

>> Untrusted processes
>> -------------------
>> The current implementation expects only trusted and cooperating processes to
>> communicate using user interrupts. Coordination is expected between processes
>> for a connection teardown. In situations where coordination doesn't happen
>> (say, due to abrupt process exit), the kernel would end up keeping shared
>> resources (like UPID) allocated to avoid faults.
> 
> Keeping resources allocated after process exit is a no-no.
> 

I meant the resource is still tracked via the shared file descriptor, so 
it will eventually get freed when the FD release happens. I am planning 
to include better documentation on lifetime rules of these shared 
resources next time.

Thanks,
Sohil
Chrisma Pakha Dec. 22, 2021, 4:17 p.m. UTC | #23
On 9/13/21 4:01 PM, Sohil Mehta wrote:
> User Interrupts Introduction
> ============================
>
> User Interrupts (Uintr) is a hardware technology that enables delivering
> interrupts directly to user space.
>
> Today, virtually all communication across privilege boundaries happens by going
> through the kernel. These include signals, pipes, remote procedure calls and
> hardware interrupt based notifications. User interrupts provide the foundation
> for more efficient (low latency and low CPU utilization) versions of these
> common operations by avoiding transitions through the kernel.
>
> In the User Interrupts hardware architecture, a receiver is always expected to
> be a user space task. However, a user interrupt can be sent by another user
> space task, kernel or an external source (like a device).
>
> In addition to the general infrastructure to receive user interrupts, this
> series introduces a single source: interrupts from another user task.  These
> are referred to as User IPIs.
>
> The first implementation of User IPIs will be in the Intel processor code-named
> Sapphire Rapids. Refer Chapter 11 of the Intel Architecture instruction set
> extensions for details of the hardware architecture [1].
>
> Series-reviewed-by: Tony Luck<tony.luck@intel.com>
>
> Main goals of this RFC
> ======================
> - Introduce this upcoming technology to the community.
> This cover letter includes a hardware architecture summary along with the
> software architecture and kernel design choices. This post is a bit long as a
> result. Hopefully, it helps answer more questions than it creates :) I am also
> planning to talk about User Interrupts next week at the LPC Kernel summit.
>
> - Discuss potential use cases.
> We are starting to look at actual usages and libraries (like libevent[2] and
> liburing[3]) that can take advantage of this technology. Unfortunately, we
> don't have much to share on this right now. We need some help from the
> community to identify usages that can benefit from this. We would like to make
> sure the proposed APIs work for the eventual consumers.
>
> - Get early feedback on the software architecture.
> We are hoping to get some feedback on the direction of overall software
> architecture - starting with User IPI, extending it for kernel-to-user
> interrupt notifications and external interrupts in the future.
>
> - Discuss some of the main architecture opens.
> There is lot of work that still needs to happen to enable this technology. We
> are looking for some input on future patches that would be of interest. Here
> are some of the big opens that we are looking to resolve.
> * Should Uintr interrupt all blocking system calls like sleep(), read(),
>    poll(), etc? If so, should we implement an SA_RESTART type of mechanism
>    similar to signals? - Refer Blocking for interrupts section below.
>
> * Should the User Interrupt Target table (UITT) be shared between threads of a
>    multi-threaded application or maybe even across processes? - Refer Sharing
>    the UITT section below.
>
> Why care about this? - Micro benchmark performance
> ==================================================
> There is a ~9x or higher performance improvement using User IPI over other IPC
> mechanisms for event signaling.
>
> Below is the average normalized latency for a 1M ping-pong IPC notifications
> with message size=1.
>
> +------------+-------------------------+
> | IPC type   |   Relative Latency      |
> |            |(normalized to User IPI) |
> +------------+-------------------------+
> | User IPI   |                     1.0 |
> | Signal     |                    14.8 |
> | Eventfd    |                     9.7 |
> | Pipe       |                    16.3 |
> | Domain     |                    17.3 |
> +------------+-------------------------+
>
> Results have been estimated based on tests on internal hardware with Linux
> v5.14 + User IPI patches.
>
> Original benchmark: https://github.com/goldsborough/ipc-bench
> Updated benchmark: https://github.com/intel/uintr-ipc-bench/tree/linux-rfc-v1
>
> *Performance varies by use, configuration and other factors.
>
> How does it work underneath? - Hardware Summary
> ===============================================
> User Interrupts is a posted interrupt delivery mechanism. Interrupts are
> first posted to a memory location and then delivered to the receiver when it
> is running with CPL=3.
>
> Kernel managed architectural data structures
> --------------------------------------------
> UPID: User Posted Interrupt Descriptor - Holds receiver interrupt vector
> information and notification state (like an ongoing notification, suppressed
> notifications).
>
> UITT: User Interrupt Target Table - Stores the UPID pointer and vector
> information for interrupt routing on the sender side. Referenced by the
> senduipi instruction.
>
> The interrupt state of each task is referenced via MSRs which are saved and
> restored by the kernel during context switch.
>
> Instructions
> ------------
> senduipi <index> - send a user IPI to a target task based on the UITT index.
>
> clui - Mask user interrupts by clearing UIF (User Interrupt Flag).
>
> stui - Unmask user interrupts by setting UIF.
>
> testui - Test current value of UIF.
>
> uiret - return from a user interrupt handler.
>
> User IPI
> --------
> When a User IPI sender executes 'senduipi <index>', the hardware reads the
> UITT entry selected by the index and posts the interrupt vector (0-63) into
> the receiver's UPID.
>
> If the receiver is running (CPL=3), the sender CPU sends a physical IPI to
> the receiver's CPU. On the receiver side this IPI is detected as a User
> Interrupt. The User Interrupt handler for the receiver is invoked and the
> vector number (0-63) is pushed onto the stack.
>
> Upon execution of 'uiret' in the interrupt handler, control is transferred
> back to the instruction that was interrupted.
>
> Refer to Chapter 11 of the Intel Architecture instruction set extensions [1]
> for more details.
>
> Application interface - Software Architecture
> =============================================
> User Interrupts (Uintr) is an opt-in feature (unlike signals). Applications
> wanting to use Uintr are expected to register themselves with the kernel using
> the Uintr related system calls. A Uintr receiver is always a userspace task. A
> Uintr sender can be another userspace task, kernel or a device.
>
> 1) A receiver can register/unregister an interrupt handler using the Uintr
> receiver related syscalls.
> 		uintr_register_handler(handler, flags)
> 		uintr_unregister_handler(flags)
>
> 2) A syscall also allows a receiver to register a vector and create a user
> interrupt file descriptor - uintr_fd.
> 		uintr_fd = uintr_create_fd(vector, flags)
>
> Uintr can be useful in some of the cases where eventfd or signals are used
> for frequent userspace event notifications. The semantics of uintr_fd are
> somewhat similar to an eventfd() or the write end of a pipe.
>
> 3) Any sender with access to uintr_fd can use it to deliver events (in this
> case - interrupts) to a receiver. A sender task can manage its connection with
> the receiver using the sender related syscalls based on uintr_fd.
> 		uipi_index = uintr_register_sender(uintr_fd, flags)
>
> Using an FD abstraction provides a secure mechanism to connect with a receiver.
> The FD sharing and isolation mechanisms put in place by the kernel would extend
> to Uintr as well.
>
> 4a) After the initial setup, a sender task can use the SENDUIPI instruction
> along with the uipi_index to generate user IPIs without any kernel
> intervention.
> 		SENDUIPI <uipi_index>
>
> If the receiver is running (CPL=3), then the user interrupt is delivered
> directly without a kernel transition. If the receiver isn't running the
> interrupt is delivered when the receiver gets context switched back. If the
> receiver is blocked in the kernel, the user interrupt is delivered to the
> kernel which then unblocks the intended receiver to deliver the interrupt.
>
> 4b) If the sender is the kernel or a device, the uintr_fd can be passed to
> the related kernel entity to allow it to set up a connection and then
> generate a user interrupt for event delivery. <The exact details of this API
> are still being worked out.>
>
> For details of the user interface and associated system calls, refer to the
> Uintr man-pages draft:
> https://github.com/intel/uintr-linux-kernel/tree/rfc-v1/tools/uintr/manpages.
> We have also included the same content as patch 1 of this series to make it
> easier to review.
>
> Refer to the Uintr compiler programming guide [4] for details on Uintr
> integration with GCC and Binutils.
>
> Kernel design choices
> =====================
> Here are some of the reasons and trade-offs for the current design of the APIs.
>
> System call interface
> ---------------------
> Why a system call interface?: The two options we considered were using a
> char device under /dev or using system calls (current approach). A syscall
> approach avoids exposing a core CPU feature through a driver model. Also, we
> want to have a user interrupt FD per vector and share a single common
> interrupt handler among all vectors. This seems easier for the kernel and
> userspace to accomplish using a syscall-based approach.
>
> Data sharing using user interrupts: Uintr doesn't include a mechanism to
> share/transmit data. The expectation is that applications use existing
> data-sharing mechanisms to share data and use Uintr only for signaling.
>
> An FD for each vector: A uintr_fd is assigned to each vector to allow
> fine-grained priority and event management by the receiver. The alternative
> we considered was to allocate an FD to the interrupt handler and have that
> shared with the sender. However, that approach relies on the sender
> selecting the vector and moves vector priority management to the sender.
> Also, if multiple senders want to send unique user interrupts they would
> need to coordinate the vector selection amongst themselves.
>
> Extending the APIs: Currently, the system calls are only extendable using the
> flags argument. We can add a variable size struct to some of the syscalls if
> needed.
>
> Extending existing mechanisms
> -----------------------------
> Uintr can be beneficial in some of the usages where eventfd() or signals are
> used. Since Uintr is hardware-dependent, thread-specific and bypasses the
> kernel in the fast path, it makes extending existing mechanisms harder.
>
> Main issues with extending signals:
> Signal handlers are defined significantly differently from user interrupt
> handlers. An application needs to save/restore registers in a user interrupt
> handler and call uiret to return from it. Also, signals can be
> process-directed (or thread-directed) but user interrupts are always
> thread-directed.
>
> Comparison of signals with User Interrupts:
> +=====================+===========================+===========================+
> |                     | Signals                   | User Interrupts           |
> +=====================+===========================+===========================+
> | Stacks              | Has alt stacks            | Uses application stack    |
> |                     |                           | (alternate stack option   |
> |                     |                           | not yet enabled)          |
> +---------------------+---------------------------+---------------------------+
> | Registers state     | Kernel manages incl.      | App responsible (Use GCC  |
> |                     | FPU/XSTATE area           | 'interrupt' attribute for |
> |                     |                           | general purpose registers)|
> +---------------------+---------------------------+---------------------------+
> | Blocking/Masking    | sigprocmask(2)/sa_mask    | CLUI instruction (No per  |
> |                     |                           | vector masking)           |
> +---------------------+---------------------------+---------------------------+
> | Direction           | Uni-directional           | Uni-directional           |
> +---------------------+---------------------------+---------------------------+
> | Post event          | kill(), signal(),         | SENDUIPI <index> - index  |
> |                     | sigqueue(), etc.          | derived from uintr_fd     |
> +---------------------+---------------------------+---------------------------+
> | Target              | Process-directed or       | Thread-directed           |
> |                     | thread-directed           |                           |
> +---------------------+---------------------------+---------------------------+
> | Fork/inheritance    | Empty signal set          | Nothing is inherited      |
> +---------------------+---------------------------+---------------------------+
> | Execv               | Pending signals preserved | Nothing is inherited      |
> +---------------------+---------------------------+---------------------------+
> | Order of delivery   | Undetermined              | High to low vector numbers|
> | for multiple signals|                           |                           |
> +---------------------+---------------------------+---------------------------+
> | Handler re-entry    | All signals except the    | No interrupts can cause   |
> |                     | one being handled         | handler re-entry.         |
> +---------------------+---------------------------+---------------------------+
> | Delivery feedback   | 0 or -1 based on whether  | No feedback on whether the|
> |                     | the signal was sent       | interrupt was sent or     |
> |                     |                           | received.                 |
> +---------------------+---------------------------+---------------------------+
>
> Main issues with extending eventfd():
> eventfd() has a counter value that is core to the API. User interrupts can't
> have an associated counter since the signaling happens at the user level and
> the hardware doesn't have a memory counter mechanism. Also, eventfd can be
> used for bi-directional signaling whereas uintr_fd is uni-directional.
>
> Comparison of eventfd with uintr_fd:
> +====================+======================+==============================+
> |                    | Eventfd              | uintr_fd (User Interrupt FD) |
> +====================+======================+==============================+
> | Object             | Counter - uint64     | Receiver vector information  |
> +--------------------+----------------------+------------------------------+
> | Post event         | write() to eventfd   | SENDUIPI <index> - index     |
> |                    |                      | derived from uintr_fd        |
> +--------------------+----------------------+------------------------------+
> | Receive event      | read() on eventfd    | Implicit - Handler is        |
> |                    |                      | invoked with associated      |
> |                    |                      | vector.                      |
> +--------------------+----------------------+------------------------------+
> | Direction          | Bi-directional       | Uni-directional              |
> +--------------------+----------------------+------------------------------+
> | Data transmitted   | Counter - uint64     | None                         |
> +--------------------+----------------------+------------------------------+
> | Waiting for events | Poll() family of     | No per vector wait.          |
> |                    | syscalls             | uintr_wait() allows waiting  |
> |                    |                      | for all user interrupts      |
> +--------------------+----------------------+------------------------------+
>
> Security Model
> ==============
> User Interrupts is designed as an opt-in feature (unlike signals). The security
> model for user interrupts is intended to be similar to eventfd(). The general
> idea is that any sender with access to uintr_fd would be able to generate the
> associated interrupt vector for the receiver task that created the fd.
>
> Untrusted processes
> -------------------
> The current implementation expects only trusted and cooperating processes to
> communicate using user interrupts. Coordination is expected between processes
> for a connection teardown. In situations where coordination doesn't happen
> (say, due to abrupt process exit), the kernel would end up keeping shared
> resources (like UPID) allocated to avoid faults.
>
> Currently, a sender can easily cause a denial of service for the receiver by
> generating a storm of user interrupts. A user interrupt handler is invoked with
> interrupts disabled, but upon execution of uiret, interrupts get enabled again
> by the hardware. This can lead to the handler being invoked again before normal
> execution can resume. There isn't a hardware mechanism to mask specific
> interrupt vectors.
>
> To enable untrusted processes to communicate, we need to add a per-vector
> masking option through another syscall (or maybe IOCTL). However, this can add
> some complexity to the kernel code. A vector can only be masked by modifying
> the UITT entries at the source. We need to be careful about races while
> removing and restoring the UPID from the UITT.
>
> Resource limits
> ---------------
> The maximum number of receiver-sender connections would be limited by the
> maximum number of open file descriptors and the size of the UITT.
>
> The UITT size is currently fixed at an arbitrary 4KB. We plan to make it
> dynamic and configurable in size. RLIMIT_MEMLOCK or ENOMEM should be
> triggered when the size limits have been hit.
>
> Main Opens
> ==========
>
> Blocking for interrupts
> -----------------------
> User interrupts are delivered to applications immediately if they are
> running in userspace. If a receiver task is blocked in the kernel using the
> placeholder uintr_wait() syscall, the task would be woken up to deliver the
> user interrupt. However, if the task is blocked due to any other blocking
> call like read(), sleep(), etc., the interrupt will only get delivered when
> the application gets scheduled again. We need to consider whether
> applications need to receive user interrupts as soon as they are posted
> (similar to signals) when they are blocked for some other reason. Adding
> this capability would likely make the kernel implementation more complex.
>
> Interrupting system calls using User Interrupts would also mean we need to
> consider an SA_RESTART type of mechanism. We also need to evaluate if some of
> the signal handler related semantics in the kernel can be reused for User
> Interrupts.
>
> Sharing the User Interrupt Target Table (UITT)
> ----------------------------------------------
> The current implementation assigns a unique UITT to each task. This assumes
> that User interrupts are used for point-to-point communication between 2 tasks.
> Also, this keeps the kernel implementation relatively simple.
>
> However, there are benefits to sharing the UITT between threads of a
> multi-threaded application. First, they would see a consistent view of the
> UITT, i.e., SENDUIPI <index> would mean the same thing on all threads of the
> application. Also, each thread doesn't have to register itself using the
> common uintr_fd.
> This would simplify the userspace setup and make efficient use of kernel
> memory. The potential downside is that the kernel implementation to allocate,
> modify, expand and free the UITT would be more complex.
>
> A similar argument can be made for a set of processes that do a lot of IPC
> amongst them. They would prefer to have a shared UITT that lets them target any
> process from any process. With the current file descriptor based approach, the
> connection setup can be time consuming and somewhat cumbersome. We need to
> evaluate if this can be made simpler as well.
>
> Kernel page table isolation (KPTI)
> ----------------------------------
> SENDUIPI is a special ring-3 instruction that makes a supervisor-mode memory
> access to the UPID and UITT memory. The current patches need KPTI to be
> disabled for User IPIs to work. To make User IPI work with KPTI, we need to
> allocate these structures from a special memory region that has supervisor
> access but is mapped into userspace. The plan is to implement a mechanism
> similar to the LDT.
>
> Processors that support user interrupts are not affected by Meltdown so the
> auto mode of KPTI will default to off. Users who want to force enable KPTI will
> need to wait for a later version of this patch series to use user interrupts.
> Please let us know if you want the development of these patches to be
> prioritized (or deprioritized).
>
> FAQs
> ====
> Q: What happens if a process is "surprised" by a user interrupt?
> A: Tasks that haven't registered with the kernel to receive user interrupts
> aren't expected or able to receive them.
>
> Q: Do user interrupts affect kernel scheduling?
> A: No. If a task is blocked waiting for user interrupts, when the kernel
> receives a notification on behalf of that task we only put it back on the
> runqueue. Delivery of a user interrupt in no way changes the scheduling
> priorities of a task.
>
> Q: Does the sender get to know if the interrupt was delivered?
> A: No. User interrupts only provide a posted interrupt delivery mechanism.
> If applications need to know whether the interrupt was delivered they should
> consider a userspace mechanism for feedback (like a shared memory counter or
> a user interrupt back to the sender).
>
> Q: Why is there no feedback on interrupt delivery?
> A: Being a posted interrupt delivery mechanism, the interrupt delivery
> happens in 2 steps:
> 1) The interrupt information is stored in a memory location (UPID).
> 2) The physical interrupt is delivered to the interrupt receiver.
>
> The 2nd step could happen immediately, after an extended period, or it might
> never happen based on the state of the receiver after step 1. (The receiver
> could have disabled interrupts, have been context switched out or it might have
> crashed during that time.) This makes it very hard for the hardware to reliably
> provide feedback upon execution of SENDUIPI.
>
> Q: Can user interrupts be nested?
> A: Yes. Using the STUI instruction in the interrupt handler would allow new
> user interrupts to be delivered. However, there is no TPR (task priority
> register)-like mechanism to allow only higher priority interrupts. Any user
> interrupt can be taken when nesting is enabled.
>
> Q: Can a task receive all pending user interrupts in one go?
> A: No. The hardware allows only one vector to be processed at a time. If a task
> is interested in knowing all the interrupts that are pending then we could add
> a syscall that provides the pending interrupts information.
>
> Q: Do the processes need to be pinned to a cpu?
> A: No. User interrupts will be routed correctly to whichever cpu the receiver
> is running on. The kernel updates the cpu information in the UPID during
> context switch.
>
> Q: Why are UPID and UITT allocated by the kernel?
> A: If allocated by user space, applications could misuse the UPID and UITT to
> write to unauthorized memory and generate interrupts on any cpu. The UPID and
> UITT are allocated by the kernel and accessed by the hardware with supervisor
> privilege.
>
> Patch structure for this series
> ===============================
> - Man-pages and Kernel documentation (patch 1,2)
> - Hardware enumeration (patch 3, 4)
> - User IPI kernel vector reservation (patch 5)
> - Syscall interface for interrupt receiver, sender and vector
>    management (uintr_fd) (patch 6-12)
> - Basic selftests (patch 13)
>
> Along with the patches in this RFC, there are additional tests and samples that
> are available at:
> https://github.com/intel/uintr-linux-kernel/tree/rfc-v1
>
> Links
> =====
> [1]: https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
> [2]: https://libevent.org/
> [3]: https://github.com/axboe/liburing
> [4]: https://github.com/intel/uintr-compiler-guide/blob/uintr-gcc-11.1/UINTR-compiler-guide.pdf
>
> Sohil Mehta (13):
>    x86/uintr/man-page: Include man pages draft for reference
>    Documentation/x86: Add documentation for User Interrupts
>    x86/cpu: Enumerate User Interrupts support
>    x86/fpu/xstate: Enumerate User Interrupts supervisor state
>    x86/irq: Reserve a user IPI notification vector
>    x86/uintr: Introduce uintr receiver syscalls
>    x86/process/64: Add uintr task context switch support
>    x86/process/64: Clean up uintr task fork and exit paths
>    x86/uintr: Introduce vector registration and uintr_fd syscall
>    x86/uintr: Introduce user IPI sender syscalls
>    x86/uintr: Introduce uintr_wait() syscall
>    x86/uintr: Wire up the user interrupt syscalls
>    selftests/x86: Add basic tests for User IPI
>
>   .../admin-guide/kernel-parameters.txt         |   2 +
>   Documentation/x86/index.rst                   |   1 +
>   Documentation/x86/user-interrupts.rst         | 107 +++
>   arch/x86/Kconfig                              |  12 +
>   arch/x86/entry/syscalls/syscall_32.tbl        |   6 +
>   arch/x86/entry/syscalls/syscall_64.tbl        |   6 +
>   arch/x86/include/asm/cpufeatures.h            |   1 +
>   arch/x86/include/asm/disabled-features.h      |   8 +-
>   arch/x86/include/asm/entry-common.h           |   4 +
>   arch/x86/include/asm/fpu/types.h              |  20 +-
>   arch/x86/include/asm/fpu/xstate.h             |   3 +-
>   arch/x86/include/asm/hardirq.h                |   4 +
>   arch/x86/include/asm/idtentry.h               |   5 +
>   arch/x86/include/asm/irq_vectors.h            |   6 +-
>   arch/x86/include/asm/msr-index.h              |   8 +
>   arch/x86/include/asm/processor.h              |   8 +
>   arch/x86/include/asm/uintr.h                  |  76 ++
>   arch/x86/include/uapi/asm/processor-flags.h   |   2 +
>   arch/x86/kernel/Makefile                      |   1 +
>   arch/x86/kernel/cpu/common.c                  |  61 ++
>   arch/x86/kernel/cpu/cpuid-deps.c              |   1 +
>   arch/x86/kernel/fpu/core.c                    |  17 +
>   arch/x86/kernel/fpu/xstate.c                  |  20 +-
>   arch/x86/kernel/idt.c                         |   4 +
>   arch/x86/kernel/irq.c                         |  51 +
>   arch/x86/kernel/process.c                     |  10 +
>   arch/x86/kernel/process_64.c                  |   4 +
>   arch/x86/kernel/uintr_core.c                  | 880 ++++++++++++++++++
>   arch/x86/kernel/uintr_fd.c                    | 300 ++++++
>   include/linux/syscalls.h                      |   8 +
>   include/uapi/asm-generic/unistd.h             |  15 +-
>   kernel/sys_ni.c                               |   8 +
>   scripts/checksyscalls.sh                      |   6 +
>   tools/testing/selftests/x86/Makefile          |  10 +
>   tools/testing/selftests/x86/uintr.c           | 147 +++
>   tools/uintr/manpages/0_overview.txt           | 265 ++++++
>   tools/uintr/manpages/1_register_receiver.txt  | 122 +++
>   .../uintr/manpages/2_unregister_receiver.txt  |  62 ++
>   tools/uintr/manpages/3_create_fd.txt          | 104 +++
>   tools/uintr/manpages/4_register_sender.txt    | 121 +++
>   tools/uintr/manpages/5_unregister_sender.txt  |  79 ++
>   tools/uintr/manpages/6_wait.txt               |  59 ++
>   42 files changed, 2626 insertions(+), 8 deletions(-)
>   create mode 100644 Documentation/x86/user-interrupts.rst
>   create mode 100644 arch/x86/include/asm/uintr.h
>   create mode 100644 arch/x86/kernel/uintr_core.c
>   create mode 100644 arch/x86/kernel/uintr_fd.c
>   create mode 100644 tools/testing/selftests/x86/uintr.c
>   create mode 100644 tools/uintr/manpages/0_overview.txt
>   create mode 100644 tools/uintr/manpages/1_register_receiver.txt
>   create mode 100644 tools/uintr/manpages/2_unregister_receiver.txt
>   create mode 100644 tools/uintr/manpages/3_create_fd.txt
>   create mode 100644 tools/uintr/manpages/4_register_sender.txt
>   create mode 100644 tools/uintr/manpages/5_unregister_sender.txt
>   create mode 100644 tools/uintr/manpages/6_wait.txt
>
>
> base-commit: 6880fa6c56601bb8ed59df6c30fd390cc5f6dd8f

Hi All,
My apologies if this email was sent twice.
I was not sure if my previous email followed the proper reply instructions.
I resent this email using the first reply method (saving the mbox file,
importing it into my mail client, and using reply-to-all from there).
The following is our understanding of the proposed User Interrupt.

----------------------------------------------------------------------------------------

We have been exploring how user-level interrupts (UIs) can be used to
improve performance and programmability in several different areas:
e.g., parallel programming, memory management, I/O, and floating-point
libraries.  Before we venture into the discussion here, we want to
make sure we understand the proposed model. We describe our
understanding below in four sections:

1. Current target use cases
2. Preparing for future use cases
3. Basic Understanding
4. Multi-threaded parallel programming example

If people on this thread could either confirm or point out our
misunderstandings, we would appreciate it.

# Current Use Cases

The Current RFC is focused on sending an interrupt from one user-space
thread (UST) to another user-space thread (UST2UST).  These threads
could be in different processes, as long as the sender has access to
the receiver's User Interrupt File Descriptor (uifd).  Based on our
understanding, UIs are currently targeted as a low overhead
alternative for the current IPC mechanisms.

# Preparing for future use cases

Based on the RFC, we are aware that allowing a device and the kernel
to send a UI is still in development.  Both these cases would support
imprecise interrupts.  We can see a clear use case for the Device to
user-space thread (D2UST) UI, for example, supporting a fast way for a
GPU to inform a thread that it has finished executing a particular
kernel. If someone could point out an example for Kernel to
user-space thread (K2UST) UI, we would appreciate it.

In our work, we have also been exploring precise UIs from the
currently running thread.  We call these CPU to UST (CPU2UST) UIs.
For example, a SIGSEGV generated by writing to a read-only page, a
SIGFPE generated by dividing a number by zero.

- QUESTION: Is there a rough draft/plan that we can refer to that describes
the current thinking on these three cases?

- QUESTION: Are there use cases for K2UST, or is K2UST the same as CPU2UST?

# Basic Understanding

First, we would like to make sure that our understanding of the 
terminology and the data structures is correct.

- User Interrupt Vector (UIV): The identity of the user interrupt.
- User Interrupt Target Table (UITT):
   This allows the sender to locate the "address" of the receiver 
through the uifd.
- ui_frame: Argument passed to the UI handler. It contains a stack 
pointer, saved flags, and an instruction pointer.
- Sender: The thread that issues the `_senduipi`.
- Receiver: The thread that receives the UI from the sender.

Below outlines our understanding of the current API for UIs.

- Each thread that can receive UIs has exactly one handler
   registered with `uintr_register_handler` (a syscall).
- Each thread that registers a handler calls `uintr_create_fd` for
   every user-level interrupt vector (UIV) that they expect to receive.
- The only information delivered to the handler is the UIV.
- There are 64 UIVs that can be used per thread.
- A thread that wants to send a UI must register the receiver's uifd 
with `uintr_register_sender`  (a syscall).
   This returns an index the sender uses to locate the receiver.
- `_senduipi(index)` sends a user interrupt to a particular destination.
   The sender's UITT and index determine the destination.
- A thread uses `_stui` (and `_clui`) to enable (and disable) the 
reception of UIs.
- As for now, there is no mechanism to mask a particular UIV.
- A UI is delivered to the receiver immediately only if it is currently 
running.
- If a thread executes `uintr_wait()`, it will be scheduled only after
receiving a UI.
   There is no guarantee on the delay between the processor receiving the UI
and when the thread is scheduled.
- If a thread is the target of a UI and another thread is running, or 
the target thread is blocked in the kernel,
   then the target thread will handle the UI when it is next scheduled.
- Ordinary interrupts (interrupts delivered at CPL=0) have higher priority
than user interrupts.
- The UI handler only saves general-purpose registers (e.g., it does not save
floating-point registers).
- User Interrupts with a higher UIV are given higher priority than those with
a lower UIV.

## Private UITT

The Current RFC focuses on a private UITT where each thread has its own
UITT.  Thus, different threads executing `_senduipi(index1)` with the
same `index1` may cause different receiver threads to be interrupted.

In many cases, the receiver of an interrupt needs to know which thread
sent the interrupt. If we understand the proposal correctly, there are
only 64 user-level interrupt vectors (UIVs), and the UIV is the only
information transmitted to the receiver. The UIV itself allows the
receiver to distinguish different senders through careful management
of the receiver's UIV.

- QUESTION: Given the code below where the same UIV is registered twice:
```c
   uintr_fd1 = uintr_create_fd(vector1, flags)
   uintr_fd2 = uintr_create_fd(vector1, flags)
```
Would `uintr_fd1` be the same as `uintr_fd2`, or would it be registered 
with a different index in the UITT table?

- QUESTION: If it is registered in a different index, would the
   receiver be able to distinguish the sender if `uintr_fd1` and
   `uintr_fd2` are used from two different threads?

- QUESTION: What is the intended future use of the `flags` argument?

## Shared UITT

In the case of the shared UITT model, all the threads share the same
UITT and thus, if two different threads execute `_senduipi(index)`
with the same index, they would both cause an interrupt in the
same destination/receiver.

- QUESTION: Since both threads use the same entry (same
   destination/receiver), does this mean that the receiver will not be
   able to distinguish the sender of the interrupt?

# Multi-threaded parallel programming example

One of the uses for UIs that we have been exploring is combining the
message-passing and shared memory models for parallel programming.  In
our approach, message-passing is used for synchronization and shared
memory for data sharing.  The message passing part of the programming
pattern is based loosely on Active Messages (See ISCA92), where a
particular thread can turn off/on interrupts to ignore incoming
messages so they can execute critical sections without having to
notify any other threads in the system.

- QUESTION: Is there any data on the performance impact of `_stui` and 
`_clui`?

----------------------------------------------------------------------------------------


Thank you.
Best regards,
Chrisma
Sohil Mehta Jan. 7, 2022, 2:08 a.m. UTC | #24
Hi Chrisma,

On 12/22/2021 8:17 AM, Chrisma Pakha wrote:
> 
> The following is our understanding of the proposed User Interrupt.
> 

Thank you for giving this some thought.

> 
> We have been exploring how user-level interrupts (UIs) can be used to
> improve performance and programmability in several different areas:
> e.g., parallel programming, memory management, I/O, and floating-point
> libraries.

Can you please share more details on this? It would really help improve 
the API design.

> 
> # Current Use Cases
> 
> The Current RFC is focused on sending an interrupt from one user-space
> thread (UST) to another user-space thread (UST2UST).  These threads
> could be in different processes, as long as the sender has access to
> the receiver's User Interrupt File Descriptor (uifd).  Based on our
> understanding, UIs are currently targeted as a low overhead
> alternative for the current IPC mechanisms.
> 

That's correct.

> # Preparing for future use cases
> If someone could point out an example for Kernel to
> user-space thread (K2UST) UI, we would appreciate it.
> 

The idea here is to improve the kernel-to-user event notification 
latency. Theoretically, this can be useful when the kernel sees an event 
completion on one CPU but wants to signal (notify) a thread actively 
running on some other CPU. The receiver thread can save some cycles by 
avoiding ring transitions to receive the event.

IO_URING is one example of kernel-to-user event notification. We are 
evaluating whether providing a UINTR-based completion mechanism can have 
a benefit over eventfd-based completions. The benefits in practice are 
yet to be measured and proven.

> In our work, we have also been exploring precise UIs from the
> currently running thread.  We call these CPU to UST (CPU2UST) UIs.
> For example, a SIGSEGV generated by writing to a read-only page, a
> SIGFPE generated by dividing a number by zero.
> 

It is definitely possible in the future to deliver CPU events as User 
Interrupts. The hardware architecture for this is still being worked on 
internally.

That said, our focus isn't on exceptions being delivered as User 
Interrupts. Do you have details on what type of benefit is expected?


> - QUESTION: Is there a rough draft/plan that we can refer to that 
> describes the current thinking on these three cases?
> 
> - QUESTION: Are there use cases for K2UST, or is K2UST the same as CPU2UST?
> 

No, K2UST isn't the same as CPU2UST. We would expect limited benefits 
from K2UST, but CPU2UST, on the other hand, can provide a significant 
speedup since it avoids the kernel completely.

Unfortunately, due to the large scope of the feature, the hardware 
architecture development is happening in stages. I don't have detailed 
plans for each of the sources of User Interrupts.

Here is our rough plan:

1. Provide a common infrastructure to receive User Interrupts. This is 
independent of the source of the interrupt. The intention here is to 
keep the software APIs generic and extendable so that future sources can 
be added without causing much disturbance to the older APIs.

2. Introduce various sources of User Interrupts in stages:

UST2UST - This RFC. Available in the upcoming Sapphire Rapids processor.

K2UST - Also available in upcoming Sapphire Rapids. Working towards 
proving the value before sending something out.

D2UST - Future processor. Hardware architecture being worked on 
internally. Not much to share right now.

CPU2UST - Future processor. Hardware architecture being worked on 
internally. Not much to share right now.

> # Basic Understanding
> 

The overall description you have mentioned below looks good to me. I 
have added some minor comments for clarification.

Also, the abbreviations that you have used are somewhat different from 
the ones I have used in the patches.

> First, we would like to make sure that our understanding of the 
> terminology and the data structures is correct.
> 
> - User Interrupt Vector (UIV): The identity of the user interrupt.
> - User Interrupt Target Table (UITT):
>    This allows the sender to locate the "address" of the receiver 
> through the uifd.

The UITT refers to the 'UPID' address, which is different from the uifd 
that you mention below.


> Below outlines our understanding of the current API for UIs.
> 

All of the statements below seem accurate.

However, some of the restrictions below are due to hardware design and 
some are mainly due to the software implementation. The software design 
and APIs might change significantly as this patch series evolves.

Please feel free to provide input wherever you think the APIs can be 
improved.

> - Each thread that can receive UIs has exactly one handler
>    registered with `uintr_register_handler` (a syscall).
> - Each thread that registers a handler calls `uintr_create_fd` for
>    every user-level interrupt vector (UIV) that they expect to receive.
> - The only information delivered to the handler is the UIV.
> - There are 64 UIVs that can be used per thread.

Though only one generic handler is registered with the hardware, an 
application can choose to implement 64 unique sub-handlers in user space 
based on each unique UIV.

> - A thread that wants to send a UI must register the receiver's uifd 
> with `uintr_register_sender`  (a syscall).
>    This returns an index the sender uses to locate the receiver.
> - `_senduipi(index)` sends a user interrupt to a particular destination.
>    The sender's UITT and index determine the destination.
> - A thread uses `_stui` (and `_clui`) to enable (and disable) the 
> reception of UIs.
> - As for now, there is no mechanism to mask a particular UIV.
> - A UI is delivered to the receiver immediately only if it is currently 
> running.
> - If a thread executes the `uintr_wait()`, it will be scheduled only 
> after receiving a UI.
>    There is no guarantee on the delay between the processor receiving 
> the UI and when the thread is scheduled.
> - If a thread is the target of a UI and another thread is running, or 
> the target thread is blocked in the kernel,
>    then the target thread will handle the UI when it is next scheduled.
> - Ordinary interrupts (interrupt delivered with CPL=0) have a higher 
> priority over user interrupts.
> - UI handler only saves general-purpose registers (e.g., do not save 
> floating-point registers).

The saving and restoring of the registers is done by gcc when the 
-muintr flag is used along with the 'interrupt' attribute. Applications 
can choose to save floating-point registers as part of the interrupt 
handler as well.

To make it easier for applications, we are working on implementing a 
thin library that can help with some of this common functionality, like 
saving floating-point registers or redirecting to 64 sub-handlers.

> - User Interrupts with higher UIV are given a higher priority than those 
> with smaller UIV.
> 
> ## Private UITT
> 
> The Current RFC focuses on a private UITT where each thread has its own
> UITT.  Thus, different threads executing `_senduipi(index1)` with the
> same `index1` may cause different receiver threads to be interrupted.
> 

That's right.

> In many cases, the receiver of an interrupt needs to know which thread
> sent the interrupt. If we understand the proposal correctly, there are
> only 64 user-level interrupt vectors (UIVs), and the UIV is the only
> information transmitted to the receiver. The UIV itself allows the
> receiver to distinguish different senders through careful management
> of the receiver's UIV.
> 

That's correct. User Interrupts mainly provide a door bell mechanism 
with the actual data expected to be shared through some existing mechanism.

If multiple senders want to share the same interrupt vector then they 
would have to rely on some sort of shared memory (or similar) mechanism 
to relay the relevant information to the receiver. This would likely 
come with some latency cost.

> - QUESTION: Given the code below where the same UIV is registered twice:
> ```c
>    uintr_fd1 = uintr_create_fd(vector1, flags)
>    uintr_fd2 = uintr_create_fd(vector1, flags)
> ```
> Would `uintr_fd1` be the same as `uintr_fd2`, or would it be registered 
> with a different index in the UITT table?

In the current design, if the same thread tries to register the same 
vector again, the second uintr_create_fd() would fail with an EBUSY 
error code.

> 
> - QUESTION: If it is registered in a different index, would the
>    receiver be able to distinguish the sender if `uintr_fd1` and
>    `uintr_fd2` are used from two different threads?
> 
> - QUESTION: What is the intended future use of the `flags` argument?
> 

In the uintr_create_fd() call, flags would be used to provide options 
such as O_CLOEXEC. In general, I added a flags argument to all the 
system calls to keep them extensible when new boolean options need to 
be added.

> ## Shared UITT
> 
> In the case of the shared UITT model, all the threads share the same
> UITT and thus, if two different threads execute `_senduipi(index)`
> with the same index, they would both cause an interrupt in the
> same destination/receiver.
> 
> - QUESTION: Since both threads use the same entry (same
>    destination/receiver), does this mean that the receiver will not be
>    able to distinguish the sender of the interrupt?
> 

Yes. However, this is true even in the case of a private UITT. It isn't 
because the senders used the same UITT index; rather, it is the result 
of the senders generating the same UIV.

For example, even if a receiver created 2 FDs with 2 unique vectors:

	uintr_fd1 = uintr_create_fd(vector1, flags)
	uintr_fd2 = uintr_create_fd(vector2, flags)

In the case of a private UITT, both sender threads can register 
themselves with uintr_fd1. They might get different UITT indexes 
returned to them, but when they generate a User Interrupt using their 
respective index, the end result is the same: the receiver will see the 
same vector1 being generated. There is no way for the receiver to 
distinguish the sender without some additional information being shared 
somewhere.
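To make that "additional information" concrete, here is one hedged sketch of how senders sharing a vector could identify themselves through shared memory before ringing the doorbell. Everything here (the per-vector bitmask convention, the function names) is an illustration, not part of the proposed API, and the `_senduipi` call is left as a comment since it needs uintr hardware:

```c
#include <stdatomic.h>

/* Hypothetical convention: up to 64 senders share one vector. Each
 * sender atomically sets its bit in a shared word before ringing the
 * doorbell; the receiver's handler drains the word to learn which
 * senders fired since the last drain. */
static _Atomic unsigned long long pending_senders[64];

/* Sender side: publish identity, then send the User IPI. */
void notify(int vector, int sender_id)
{
    atomic_fetch_or(&pending_senders[vector], 1ULL << sender_id);
    /* _senduipi(uitt_index); -- requires uintr hardware, omitted here */
}

/* Receiver side, inside the handler for this vector: returns a bitmask
 * of sender ids that signalled since the last drain. */
unsigned long long drain(int vector)
{
    return atomic_exchange(&pending_senders[vector], 0ULL);
}
```

As noted above, this extra round trip through shared memory is exactly the latency cost of multiplexing senders onto one vector.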


> # Multi-threaded parallel programming example
> 
> One of the uses for UIs that we have been exploring is combining the
> message-passing and shared memory models for parallel programming.  In
> our approach, message-passing is used for synchronization and shared
> memory for data sharing.  The message passing part of the programming
> pattern is based loosely on Active Messages (See ISCA92), where a
> particular thread can turn off/on interrupts to ignore incoming
> messages so they can execute critical sections without having to
> notify any other threads in the system.
> 

This looks like a good fit for the User IPI (UST2UST) implementation in 
this RFC. Have you had a chance to evaluate the current API design for 
this usage?

Also, is any of the above work publicly available?

> - QUESTION: Is there any data on the performance impact of `_stui` and 
> `_clui`?
> 

_stui and _clui are expected to have very minimal overhead since they 
only modify a local flag. I'll try to measure this the next time I am 
doing some performance measurements.

Thanks,
Sohil
Chrisma Pakha Jan. 17, 2022, 1:14 a.m. UTC | #25
Hi Sohil,

Thank you for your reply and the clarification.

>>
>> We have been exploring how user-level interrupts (UIs) can be used to
>> improve performance and programmability in several different areas:
>> e.g., parallel programming, memory management, I/O, and floating-point
>> libraries.
>
> Can you please share more details on this? It would really help 
> improve the API design.
>
Of course! Below we describe a few use cases for both user-level 
interrupts (UIs) and user-level exceptions (UEs). We realize that the 
current proposal is targeted towards UIs, but we also describe some UEs 
use cases because we believe handling exceptions without going through 
the kernel may provide even more of a benefit than UIs. We hope these 
use cases can influence the direction of the API so that it can be made 
forward compatible for future hardware revisions.

To be clear, we distinguish between interrupts (generated from an 
external source, such as another core or Device) that are most likely 
imprecise and asynchronous and exceptions (generated by the currently 
executing program) that need to be precise and synchronous.

# UST2UST

A UI is a mechanism to allow two or more threads to communicate with one 
another asynchronously without requiring the intervention of the kernel 
or a change in privilege. We believe that having UIs can help integrate 
the shared memory model and message passing model for multicore 
processors. This integration makes it easier to build parallel programs, 
allowing developers to take advantage of both models. The shared memory 
model provides an easy way to share data between threads, while the 
message passing model can be used for synchronization between threads.

In the following section, we will describe two use cases for UIs.
- We show how UIs can be used to improve parallel program performance by 
reducing the overhead of exposing parallelism.
- We show how UIs can be used to build efficient active messages.

Both of the use cases we present below require the receiver of a UI to 
know which thread issued it. At the end of the email we describe how we 
would implement this using the current API and suggest an alternative, 
and possibly more streamlined approach.

## Lazy Work Stealing

One of the hurdles in writing parallel programs is ensuring that the 
cost of parallelizing the code does not become a bottleneck in program 
performance. Some of these overheads come from unnecessarily exposing 
too much parallelism, even if all cores are busy. One mechanism to 
reduce this overhead is to lazily expose parallelism only when it is 
needed. This can be done through stack unwinding (similar to how 
Exception Handling works). Whenever a thread (thief) asks for work from 
another thread (victim), the victim will perform stack unwinding, 
creating the work for the thief. This approach to lazy thread creation 
requires some mechanism for the thief to ask for work.

We have implemented a prototype compiler and runtime for this mechanism. 
Our runtime requires a mechanism for the thief to signal the victim when 
it needs work. We implement this signaling through polls because the 
current IPI mechanism is too expensive to use. However, requiring the 
victim to poll can introduce excessive polling overhead and/or introduce 
significant latency between the request and the response. The compiler 
tries to keep the overhead of polling low (<5%) while still ensuring 
that the latency between a work-stealing request and its response is as 
low as possible. Currently, we essentially only poll for work requests 
in the function prologue, keeping the overhead to about 2% of execution 
time on average. This works well for almost all applications. However, 
in some applications, this can add hundreds of microseconds of latency 
to the response of a work-stealing request.

One reason we use polling today, instead of the victim just taking work, 
is that there are points in the program where work-stealing is not 
allowed. So, in addition to having an inexpensive mechanism to request 
work, we need an inexpensive method to disallow the requests. IOW, the 
compiler only inserts polls at points where it is safe to do so. With 
the UI mechanism in the proposed API, we could signal a work-stealing 
request with a UST2UST UI and disallow such requests by disabling 
interrupts. One nice advantage of the current proposal is that disabling 
interrupts is a *local* operation, making it very inexpensive and not 
causing any interference with the rest of the threads. In other words, 
an important benefit of the proposed UI mechanism is that we can ensure 
atomicity (with respect to work stealing) without having to do any 
global communication.
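The local-atomicity pattern above can be sketched as follows. On uintr hardware the masking would be the `_clui`/`_stui` intrinsics from a `-muintr` build; here they are simulated with a plain thread-local flag purely so the control flow is concrete and runnable, and the victim's work deque is hypothetical:

```c
/* Simulated UIF (user-interrupt flag). On uintr hardware this is a
 * CPU-local flag toggled by the _clui/_stui instructions; a plain
 * variable stands in here so the sketch runs anywhere. */
static _Thread_local int uif = 1;

static void clui(void) { uif = 0; }  /* stand-in for _clui() */
static void stui(void) { uif = 1; }  /* stand-in for _stui() */

/* The victim's work deque (hypothetical layout). */
#define DEQUE_CAP 128
static _Thread_local int deque[DEQUE_CAP];
static _Thread_local int top;

void push_task(int task)
{
    if (top < DEQUE_CAP)
        deque[top++] = task;
}

/* Pop one task with steal requests masked: a steal-request UI arriving
 * in this window stays pending instead of unwinding the stack
 * mid-update. Masking is purely local, so no other thread is disturbed. */
int pop_task(void)
{
    clui();
    int task = (top > 0) ? deque[--top] : -1;
    stui();                      /* pending steal requests fire here */
    return task;
}
```

The point of the sketch is the window between `clui()` and `stui()`: it gives the critical section atomicity with respect to steal requests at the cost of two local flag writes, with no global handshake.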

## Implementing Active Message

Active messages can efficiently support the message passing parallel 
programming model. With the proposed API, the UI could signal that an AM 
is being delivered while shared memory data structures could be used for 
the payload. As described in the above use case, this would allow 
receiving threads to provide atomicity by disabling interrupts without 
any global communication.

Clearly, having a shared address space makes data access and management 
easier for parallel programming. On the other hand, controlling access 
to that data can often be cleaner to implement in message passing 
models. Dogan et al. have shown promising improvements by using explicit 
messaging hardware to accelerate Machine Learning and Graphs workloads 
(see [DHAKWETAL17,DAKK19]). Explicit messaging is used as a 
synchronization mechanism and has better scalability than shared 
memory-based synchronization. The current proposal would support this 
model integration with significantly lower overheads and lower latencies 
compared to what are available on today's machines.

------------------------------------------------------------------------
# D2UST

Applications that frequently interact with external devices can benefit 
from UIs. To achieve high performance, conventional IO approaches 
through the kernel are not appealing, as they incur high overhead: they 
require context switching and data transfer from kernel space to 
user space, possibly polluting the cache and TLB. One improvement to 
bypass the kernel is by pinning pages to specific physical addresses, 
where these pages act as a buffer between user-space and device. 
However, since the device cannot directly interrupt the UST, the UST 
needs to poll to check whether the data from the device is available, 
and that polling can easily erase any potential performance improvement 
offered by bypassing the kernel in the first place. Allowing a device 
to interrupt a UST under the proposed API would eliminate the need to 
poll and support atomicity as required, which could significantly 
improve application performance.

This would be particularly useful when an application uses a GPU as an 
accelerator for parallel computation and CPU for serial computation (see 
[WBSACETAL08]). An example would be K-means. Finding which clusters each 
point belongs to is computed in the GPU (in parallel), while computing 
the mean is computed in the CPU (in serial). As this process is 
iterative, there are multiple computation transitions between the CPU 
and GPU. Without UIs, the only real option is to poll for GPU task 
completion, complicating control flow if there is also other work for 
the CPU thread to do. With UIs, keeping the GPU busy can be handled by 
the UI handler. The result would be cleaner code and better load 
balancing. To make this work, the D2UST interrupt will have to ensure 
that the process that started the task on the GPU is the same one that 
is currently running. When a different process is running, the interrupt 
will have to be saved by the kernel so it can be delivered to the UST 
when it is next scheduled.

------------------------------------------------------------------------

# CPU2UST

Providing a low cost user-level exception mechanism could fundamentally 
change the approach to implementing many algorithms. Examples range 
across many common tasks, e.g., checking for valid pointers, 
preprocessing floating-point data, garbage collection, etc. Today, due 
to the high cost of exception handling, programmers go to great lengths 
to ensure that exceptions do not happen. Unfortunately, this leads to 
more code and often less performance. Below we describe different 
scenarios where UEs could potentially reduce programming effort and/or 
improve performance.

## API for CPU2UST

For the examples below, we propose a small modification to the proposed 
API to support exceptions. We propose that a handler be registered for a 
particular fault to distinguish the exception type. Potentially, the 
`flags` argument could hold the `signum`, or a bit in the `flags` 
argument could indicate that a third parameter was being included with 
the `signum`. We suggest including `signum` in the current API for 
future use.

```c
int uintr_register_handler(u64 handler_address, unsigned int flags, int signum);
```

Since each handler is registered for a particular exception, the handler 
itself would only have one argument, a pointer to the `__uintr_frame`. 
In some cases, the handler might need the `error_code` information 
(e.g., for a page-fault), which could be obtained using a new function, 
`unsigned long long __get_ue_errorcode(void)`.

```
__attribute__ ((interrupt))
void
handlerFunction (struct __uintr_frame *ui_frame)
{
   // Get error code if needed
   // unsigned long long error_code = __get_ue_errorcode();
   ...
}
```

We envision four ways for the user handler to manipulate the thread's 
state. Here we assume that a UE is handled by the thread that causes the 
exception.

1. Continuing the faulting thread of control.
2. Suspending a faulting thread or continuing another thread in the same 
process.
3. Deferring processing of the fault back to the kernel.
4. Or, finally, terminating the thread of control.

In case 1, where the faulting thread is continued, the handler can 
simply use uiret (it could potentially modify the return address on 
the stack to change where execution continues). For case 2, we do not 
have a proposed API yet, but potentially some set of functions that 
extend pthreads might be appropriate. For case 3, the handler would use 
a trap to signal that the kernel should continue processing the 
exception. The compiler would have to restore registers appropriately 
before the trap is executed.

## Binary rewriting

Binary rewriting is a valuable technique for debugging, optimizing, 
repairing, emulating, and hardening (tightening security) a program 
[WMUW19]. One implementation of binary rewriting is to replace the 
probed instructions (instrumentation points) with a redirect instruction 
(either jump or trap) to the patch instructions. Most developers use 
jump instructions instead of traps due to their lower cost. However, 
because instructions have variable encoding lengths, inserting jump 
instructions requires care, e.g., "instruction punning" [CSDN17] with a 
combination of padding and eviction [DGR20]. On the other hand, the trap 
instruction is only a single byte, allowing it to replace any patched 
instruction. If the trap can be made inexpensive, this would potentially 
allow a simpler approach to binary rewriting without control flow recovery.

## Binary Emulation for forward/backward compatibility

Some processor families have an all-encompassing ISA of which only a 
subset is implemented in hardware for some instances of the family. 
Applications built for the processor family either have to be recompiled 
for each instance or software emulation must handle the unimplemented 
instructions. If there is a UE for the illegal instruction fault, this 
can potentially be made inexpensive enough to avoid recompilation. 
Furthermore, it could be a way to handle legacy code and allow future 
generations to avoid the older crufty instructions that are no longer 
commonly used.

## Floating-Point Performance

Today, floating-point algorithms often preprocess the data in order to 
avoid underflow (or overflow) exceptions. If UEs were low enough cost, 
it is possible that these time consuming data preparation steps could be 
removed and only run if an exception was generated. A simple example is 
the calculation of the Root Mean Square (RMS) of a vector [HFT94,H96]. 
The common approach to calculating a vector's RMS is to scan the input 
vector and then potentially scale it to avoid underflow/overflow. For 
many applications, the common case is that the data does not require 
rescaling. In those cases, one could calculate the RMS on the unscaled 
data and only scale it if a UE was generated.

## Memory: garbage collection and watch points

User-level Page Fault exceptions (ULPF) are an essential component for 
improving the performance of a wide variety of applications. For 
example, in [AL20], we describe a solution that shows how ULPFs, when 
combined with a mechanism that allows the user a limited ability to 
change a page's permissions without kernel intervention, can be used to 
implement an unlimited number of efficient software watchpoints. Our 
experiments were performed using GEM5, where we made changes to the MMU 
and TLB. However, Intel's Memory Protection Keys for User (MPK) [MPK17] 
combined with UE could also potentially do the trick.

Another example of an application that could benefit from ULPF is 
Concurrent Garbage Collection. Concurrent Garbage collection allows both 
the program (aka mutator) threads and the collector to run in parallel. 
To implement concurrent GC, a read barrier or write barrier is often 
needed (these are GC terms and should not be confused with hardware 
memory barriers). These barriers ensure that the GC invariants are 
maintained before a read or write operation. The write barrier prevents 
the GC from reclaiming a live object that was recently accessed by the 
mutator (in the case of a concurrent mark-sweep) [BDS91]. The read 
barrier prevents the mutator from reading stale objects (in the case of 
concurrent mark-compact) [AL91]. Both read and write barriers can be 
implemented using ULPF. The programmer can use the permission bit in the 
user-level page tables to cheaply turn on/off memory protection (e.g., 
inside the handler).

Belay et al. [BBMTMETAL12] have shown how to implement Boehm GC [BDS91] (a 
mostly parallel mark-sweep GC used in the Mono project [Mono18] and 
Objective-C [Objc15]) on their platform, Dune. Dune is a platform that 
allows user-space direct access to exceptions and privileged hardware 
features. The results show both speedup and slowdown, where the slowdown 
is attributed to their platform's inherent overhead. On the other hand, 
Click et al. [CTW05] and Tene et al. [TIW11] have built a custom system 
to build a Pauseless GC. This custom system allows fast page fault 
handling. The mechanism described in [AL20] could be extended to 
implement a similar approach.

# References

- [AL91] Appel, Andrew W. and Li, Kai, Virtual Memory Primitives for 
User Programs (1991)
- [AL20] Li, Qingyang, User Level Page Faults (2020), 
http://reports-archive.adm.cs.cmu.edu/anon/2020/CMU-CS-20-124.pdf
- [BBMTMETAL12] Belay, Adam and Bittau, Andrea and Mashtizadeh, Ali and 
Terei, David and Mazi\`{e}res, David and Kozyrakis, Christos, Dune: Safe 
User-Level Access to Privileged CPU Features (2012)
- [BDS91] Boehm, Hans-J. and Demers, Alan J. and Shenker, Scott, Mostly 
Parallel Garbage Collection (1991)
- [CSDN17] Chamith, Buddhika and Svensson, Bo Joel and Dalessandro, Luke 
and Newton, Ryan R., Instruction Punning: Lightweight Instrumentation 
for X86-64 (2017)
- [CTW05] Click, Cliff and Tene, Gil and Wolf, Michael, The Pauseless GC 
Algorithm (2005)
- [DAKK19] Dogan, Halit and Ahmad, Masab and Kahne, Brian and Khan, 
Omer, Accelerating Synchronization Using Moving Compute to Data Model at 
1,000-core Multicore Scale (2019)
- [DHAKWETAL17] Dogan, Halit and Hijaz, Farrukh and Ahmad, Masab and 
Kahne, Brian and Wilson, Peter and Khan, Omer, Accelerating Graph and 
Machine Learning Workloads Using a Shared Memory Multicore Architecture 
with Auxiliary Support for in-Hardware Explicit Messaging (2017)
- [DGR20] Duck, Gregory J. and Gao, Xiang and Roychoudhury, Abhik, 
Binary Rewriting without Control Flow Recovery (2020)
- [ECGS92] von Eicken, Thorsten and Culler, David E. and Goldstein, Seth 
Copen and Schauser, Klaus Erik, Active Messages: A Mechanism for 
Integrated Communication and Computation (1992)
- [H96] Hauser, John R., Handling Floating-Point Exceptions in Numeric 
Programs (1996)
- [HFT94] Hull, T. E. and Fairgrieve, Thomas F. and Tang, Ping-Tak 
Peter, Implementing Complex Elementary Functions Using Exception 
Handling (1994)
- [Mono18] https://www.mono-project.com/docs/advanced/runtime/ (2018)
- [MPK17] 
https://www.kernel.org/doc/Documentation/x86/protection-keys.txt (2017)
- [Objc15] 
https://gcc.gnu.org/onlinedocs/gcc-4.8.5/gcc/Garbage-Collection.html (2015)
- [TIW11] Tene, Gil and Iyengar, Balaji and Wolf, Michael, C4: The 
Continuously Concurrent Compacting Collector (2011)
- [WBSACETAL08] Wong, Henry and Bracy, Anne and Schuchman, Ethan and 
Aamodt, Tor M. and Collins, Jamison D. and Wang, Perry H. and Chinya, 
Gautham and Groen, Ankur Khandelwal and Jiang, Hong and Wang, Hong , 
Pangaea: A tightly-coupled IA32 heterogeneous chip multiprocessor (2008)
- [WMUW19] Wenzl, Matthias and Merzdovnik, Georg and Ullrich, Johanna 
and Weippl, Edgar, From Hack to Elaborate Technique—A Survey on Binary 
Rewriting (2019)

>> # Preparing for future use cases
>> If someone could point out an example for Kernel to
>> user-space thread (K2UST) UI, we would appreciate it.
>>
>
> The idea here is to improve the kernel-to-user event notification 
> latency. Theoretically, this can be useful when the kernel sees an 
> event completion on one CPU but wants to signal (notify) a thread 
> actively running on some other CPU. The receiver thread can save some 
> cycles by avoiding ring transitions to receive the event.
>
> IO_URING is one of the examples for kernel-to-user event 
> notifications. We are evaluating whether providing a UINTR based 
> completion mechanism can have benefit over eventfd based completions. 
> The benefits in practice are yet to be measured and proven.
>
Thank you for the clarification.

- QUESTION: If the processor has D2UST capability, would this allow the 
device to directly send the interrupt to the target process (the process 
that initiates the I/O through io_uring) instead of the kernel?

>> In our work, we have also been exploring precise UIs from the
>> currently running thread.  We call these CPU to UST (CPU2UST) UIs.
>> For example, a SIGSEGV generated by writing to a read-only page, a
>> SIGFPE generated by dividing a number by zero.
>>
>
> It is definitely possible in the future to deliver CPU events as User 
> Interrupts. The hardware architecture for this is still being worked 
> on internally.
>
> Though our focus isn't on exceptions being delivered as User 
> Interrupts. Do you have details on what type of benefit is expected?
>
Described in the use-cases we mentioned above.

>> - QUESTION: Is there is a rough draft/plan that we can refer to that 
>> describes the
>> current thinking on these three cases.
>>
>> - QUESTION: Are there use cases for K2UST, or is K2UST the same as 
>> CPU2UST?
>>
>
> No, K2UST isn't the same as CPU2UST. We would expect limited benefits 
> from K2UST but on the other hand CPU2UST can provide significant 
> speedup since it avoids the kernel completely.
>
> Unfortunately, due to the large scope of the feature, the hardware 
> architecture development is happening in stages. I don't have detailed 
> plans for each of the sources of User Interrupts.
>
> Here is our rough plan:
>
> 1. Provide a common infrastructure to receive User Interrupts. This is 
> independent of the source of the interrupt. The intention here is to 
> keep the software APIs generic and extendable so that future sources 
> can be added without causing much disturbance to the older APIs.
>
> 2. Introduce various sources of User Interrupts in stages:
>
> UST2UST - This RFC. Available in the upcoming Sapphire Rapids processor.
>
> K2UST - Also available in upcoming Sapphire Rapids. Working towards 
> proving the value before sending something out.
>
> D2UST - Future processor. Hardware architecture being worked on 
> internally. Not much to share right now.
>
> CPU2UST - Future processor. Hardware architecture being worked on 
> internally. Not much to share right now.

Thank you for the update, really appreciate it.


>
> The saving and restoring of the registers is done by gcc when the 
> muintr flag along with the 'interrupt' attribute is used. Applications 
> can choose to save floating point registers as part of the interrupt 
> handler as well.
>
> To make it easier for applications, we are working on implementing a 
> thin library that can help with some of this common functionality, like 
> saving floating point registers or redirecting to 64 sub-handlers.

- QUESTION: Would this thin library also provide a mechanism to share 
data between sender and receiver through shared memory (similar to 
implementing Active Messages)?
- QUESTION: Is there a plan to eventually allow data to be transmitted 
along with the interrupt?

>> # Multi-threaded parallel programming example
>>
>> One of the uses for UIs that we have been exploring is combining the
>> message-passing and shared memory models for parallel programming.  In
>> our approach, message-passing is used for synchronization and shared
>> memory for data sharing.  The message passing part of the programming
>> pattern is based loosely on Active Messages (See ISCA92), where a
>> particular thread can turn off/on interrupts to ignore incoming
>> messages so they can execute critical sections without having to
>> notify any other threads in the system.
>>
>
> This looks like a good fit for the User IPI (UST2UST) implementation in 
> this RFC. Have you had a chance to evaluate the current API design for 
> this usage?

Our approach requires point-to-point communication to implement the 
UST2UST use cases described above. From our understanding, the current 
API requires n*(n-1) descriptors to enable point-to-point communication 
(assuming a private UITT). Here, each receiver assigns a vector to the 
UI file descriptor (uifd) and shares it with the appropriate sender. 
This way, the receivers know the sender based on the vector.

Have other approaches been explored for handling the case where the 
receiver needs to know the sender's identity? In particular, approaches 
that do not require n^2 descriptors to be created? In the context of the 
RFC, one possibility we have thought about would be for the sender to 
assign a vector to the uifd (maybe based on its cpuid) and share this 
information with all receivers. This would possibly require only n 
descriptors.

> Also, is any of the above work publicly available?

Not yet. We are still working on it and hope to share an update soon.

Best regards,
Chrisma and Seth