Message ID: 20181018154101.18750-1-hch@lst.de (mailing list archive)
State: New, archived
Series: userfaultfd: disable irqs when taking the waitqueue lock
On Thu, 18 Oct 2018 17:41:01 +0200 Christoph Hellwig <hch@lst.de> wrote:

> userfaultfd contains home-grown locking of the waitqueue lock,
> and does not disable interrupts.  This relies on the fact that
> no one else takes it from interrupt context and violates an
> invariant of the normal waitqueue locking scheme.  With aio poll
> it is easy to trigger other locks that disable interrupts (or
> are called from interrupt context).

So... this is needed in 4.19.x but not earlier?  Or something else?
On Thu, Oct 18, 2018 at 03:56:11PM -0700, Andrew Morton wrote:
> On Thu, 18 Oct 2018 17:41:01 +0200 Christoph Hellwig <hch@lst.de> wrote:
>
> > userfaultfd contains home-grown locking of the waitqueue lock,
> > and does not disable interrupts.  This relies on the fact that
> > no one else takes it from interrupt context and violates an
> > invariant of the normal waitqueue locking scheme.  With aio poll
> > it is easy to trigger other locks that disable interrupts (or
> > are called from interrupt context).
>
> So... this is needed in 4.19.x but not earlier?  Or something else?

Yes.
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index bfa0ec69f924..356d2b8568c1 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1026,7 +1026,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
 	struct userfaultfd_ctx *fork_nctx = NULL;
 
 	/* always take the fd_wqh lock before the fault_pending_wqh lock */
-	spin_lock(&ctx->fd_wqh.lock);
+	spin_lock_irq(&ctx->fd_wqh.lock);
 	__add_wait_queue(&ctx->fd_wqh, &wait);
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE);
@@ -1112,13 +1112,13 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
 			ret = -EAGAIN;
 			break;
 		}
-		spin_unlock(&ctx->fd_wqh.lock);
+		spin_unlock_irq(&ctx->fd_wqh.lock);
 		schedule();
-		spin_lock(&ctx->fd_wqh.lock);
+		spin_lock_irq(&ctx->fd_wqh.lock);
 	}
 	__remove_wait_queue(&ctx->fd_wqh, &wait);
 	__set_current_state(TASK_RUNNING);
-	spin_unlock(&ctx->fd_wqh.lock);
+	spin_unlock_irq(&ctx->fd_wqh.lock);
 
 	if (!ret && msg->event == UFFD_EVENT_FORK) {
 		ret = resolve_userfault_fork(ctx, fork_nctx, msg);